00:00:00.000 Started by upstream project "autotest-per-patch" build number 127124 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.049 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.050 The recommended git tool is: git 00:00:00.050 using credential 00000000-0000-0000-0000-000000000002 00:00:00.052 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.075 Fetching changes from the remote Git repository 00:00:00.076 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.111 Using shallow fetch with depth 1 00:00:00.111 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.112 > git --version # timeout=10 00:00:00.141 > git --version # 'git version 2.39.2' 00:00:00.141 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.173 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.173 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.623 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.634 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.644 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD) 00:00:04.644 > git config core.sparsecheckout # timeout=10 00:00:04.654 > git read-tree -mu HEAD # timeout=10 00:00:04.668 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5 00:00:04.686 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters" 00:00:04.686 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10 00:00:04.787 [Pipeline] Start of Pipeline 00:00:04.801 [Pipeline] library 00:00:04.802 Loading library shm_lib@master 00:00:04.802 Library shm_lib@master is cached. Copying from home. 00:00:04.814 [Pipeline] node 00:00:04.831 Running on VM-host-SM4 in /var/jenkins/workspace/ubuntu22-vg-autotest_3 00:00:04.833 [Pipeline] { 00:00:04.843 [Pipeline] catchError 00:00:04.844 [Pipeline] { 00:00:04.856 [Pipeline] wrap 00:00:04.862 [Pipeline] { 00:00:04.867 [Pipeline] stage 00:00:04.869 [Pipeline] { (Prologue) 00:00:04.883 [Pipeline] echo 00:00:04.884 Node: VM-host-SM4 00:00:04.888 [Pipeline] cleanWs 00:00:04.896 [WS-CLEANUP] Deleting project workspace... 00:00:04.896 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.901 [WS-CLEANUP] done 00:00:05.057 [Pipeline] setCustomBuildProperty 00:00:05.118 [Pipeline] httpRequest 00:00:05.135 [Pipeline] echo 00:00:05.137 Sorcerer 10.211.164.101 is alive 00:00:05.143 [Pipeline] httpRequest 00:00:05.146 HttpMethod: GET 00:00:05.146 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:05.147 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:05.157 Response Code: HTTP/1.1 200 OK 00:00:05.158 Success: Status code 200 is in the accepted range: 200,404 00:00:05.158 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest_3/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:08.473 [Pipeline] sh 00:00:08.757 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:08.774 [Pipeline] httpRequest 00:00:08.799 [Pipeline] echo 00:00:08.802 Sorcerer 10.211.164.101 is alive 00:00:08.811 [Pipeline] httpRequest 00:00:08.816 HttpMethod: GET 00:00:08.817 URL: http://10.211.164.101/packages/spdk_6e4acbb0d34b67f85249e07ba4f6aa85a7d2eb3c.tar.gz 00:00:08.817 Sending request to url: http://10.211.164.101/packages/spdk_6e4acbb0d34b67f85249e07ba4f6aa85a7d2eb3c.tar.gz 00:00:08.837 Response Code: HTTP/1.1 200 OK 00:00:08.838 Success: Status code 200 is in the accepted range: 200,404 00:00:08.838 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest_3/spdk_6e4acbb0d34b67f85249e07ba4f6aa85a7d2eb3c.tar.gz 00:00:59.606 [Pipeline] sh 00:00:59.889 + tar --no-same-owner -xf spdk_6e4acbb0d34b67f85249e07ba4f6aa85a7d2eb3c.tar.gz 00:01:02.436 [Pipeline] sh 00:01:02.782 + git -C spdk log --oneline -n5 00:01:02.782 6e4acbb0d nvmf: update mDNS PRR listener when discovery listener changes 00:01:02.782 9cec127d3 nvmf: add nvmf_update_mdns_prr 00:01:02.782 6a5b193c0 nvmf: consolidate checking the mDNS server running status in nvmf_tgt_is_mdns_running 00:01:02.782 78cbcfdde test/scheduler: fix cpu mask for rpc governor tests 00:01:02.782 ba69d4678 event/scheduler: remove custom opts from static scheduler 00:01:02.802 [Pipeline] writeFile 00:01:02.814 [Pipeline] sh 00:01:03.089 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:03.101 [Pipeline] sh 00:01:03.383 + cat autorun-spdk.conf 00:01:03.383 SPDK_TEST_UNITTEST=1 00:01:03.383 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:03.383 SPDK_TEST_NVME=1 00:01:03.383 SPDK_TEST_BLOCKDEV=1 00:01:03.383 SPDK_RUN_ASAN=1 00:01:03.383 SPDK_RUN_UBSAN=1 00:01:03.383 SPDK_TEST_RAID5=1 00:01:03.383 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:03.389 RUN_NIGHTLY=0 00:01:03.391 [Pipeline] } 00:01:03.408 [Pipeline] // stage 00:01:03.426 [Pipeline] stage 00:01:03.429 [Pipeline] { (Run VM) 00:01:03.444 [Pipeline] sh 00:01:03.727 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:03.727 + echo 'Start stage prepare_nvme.sh' 00:01:03.727 Start stage prepare_nvme.sh 00:01:03.727 + [[ -n 9 ]] 00:01:03.727 + disk_prefix=ex9 00:01:03.727 + [[ -n /var/jenkins/workspace/ubuntu22-vg-autotest_3 ]] 00:01:03.727 + [[ -e /var/jenkins/workspace/ubuntu22-vg-autotest_3/autorun-spdk.conf ]] 00:01:03.727 + source /var/jenkins/workspace/ubuntu22-vg-autotest_3/autorun-spdk.conf 00:01:03.727 ++ SPDK_TEST_UNITTEST=1 00:01:03.727 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:03.727 ++ SPDK_TEST_NVME=1 00:01:03.727 ++ SPDK_TEST_BLOCKDEV=1 00:01:03.727 ++ SPDK_RUN_ASAN=1 00:01:03.727 ++ SPDK_RUN_UBSAN=1 00:01:03.727 ++ SPDK_TEST_RAID5=1 00:01:03.727 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:03.727 ++ RUN_NIGHTLY=0 
00:01:03.727 + cd /var/jenkins/workspace/ubuntu22-vg-autotest_3 00:01:03.727 + nvme_files=() 00:01:03.727 + declare -A nvme_files 00:01:03.727 + backend_dir=/var/lib/libvirt/images/backends 00:01:03.727 + nvme_files['nvme.img']=5G 00:01:03.727 + nvme_files['nvme-cmb.img']=5G 00:01:03.727 + nvme_files['nvme-multi0.img']=4G 00:01:03.727 + nvme_files['nvme-multi1.img']=4G 00:01:03.727 + nvme_files['nvme-multi2.img']=4G 00:01:03.727 + nvme_files['nvme-openstack.img']=8G 00:01:03.727 + nvme_files['nvme-zns.img']=5G 00:01:03.727 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:03.727 + (( SPDK_TEST_FTL == 1 )) 00:01:03.727 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:03.727 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:03.727 + for nvme in "${!nvme_files[@]}" 00:01:03.727 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi2.img -s 4G 00:01:03.727 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:03.727 + for nvme in "${!nvme_files[@]}" 00:01:03.727 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-cmb.img -s 5G 00:01:03.727 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:03.727 + for nvme in "${!nvme_files[@]}" 00:01:03.727 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-openstack.img -s 8G 00:01:03.986 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:03.986 + for nvme in "${!nvme_files[@]}" 00:01:03.986 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-zns.img -s 5G 00:01:03.986 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:03.986 + for nvme in "${!nvme_files[@]}" 00:01:03.986 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi1.img -s 4G 00:01:04.244 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:04.244 + for nvme in "${!nvme_files[@]}" 00:01:04.244 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi0.img -s 4G 00:01:04.244 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:04.244 + for nvme in "${!nvme_files[@]}" 00:01:04.244 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme.img -s 5G 00:01:04.503 Formatting '/var/lib/libvirt/images/backends/ex9-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:04.503 ++ sudo grep -rl ex9-nvme.img /etc/libvirt/qemu 00:01:04.503 + echo 'End stage prepare_nvme.sh' 00:01:04.503 End stage prepare_nvme.sh 00:01:04.514 [Pipeline] sh 00:01:04.796 + DISTRO=ubuntu2204 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:04.797 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex9-nvme.img -H -a -v -f ubuntu2204 00:01:04.797 00:01:04.797 DIR=/var/jenkins/workspace/ubuntu22-vg-autotest_3/spdk/scripts/vagrant 00:01:04.797 SPDK_DIR=/var/jenkins/workspace/ubuntu22-vg-autotest_3/spdk 00:01:04.797 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu22-vg-autotest_3 00:01:04.797 HELP=0 
00:01:04.797 DRY_RUN=0 00:01:04.797 NVME_FILE=/var/lib/libvirt/images/backends/ex9-nvme.img, 00:01:04.797 NVME_DISKS_TYPE=nvme, 00:01:04.797 NVME_AUTO_CREATE=0 00:01:04.797 NVME_DISKS_NAMESPACES=, 00:01:04.797 NVME_CMB=, 00:01:04.797 NVME_PMR=, 00:01:04.797 NVME_ZNS=, 00:01:04.797 NVME_MS=, 00:01:04.797 NVME_FDP=, 00:01:04.797 SPDK_VAGRANT_DISTRO=ubuntu2204 00:01:04.797 SPDK_VAGRANT_VMCPU=10 00:01:04.797 SPDK_VAGRANT_VMRAM=12288 00:01:04.797 SPDK_VAGRANT_PROVIDER=libvirt 00:01:04.797 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:04.797 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:04.797 SPDK_OPENSTACK_NETWORK=0 00:01:04.797 VAGRANT_PACKAGE_BOX=0 00:01:04.797 VAGRANTFILE=/var/jenkins/workspace/ubuntu22-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile 00:01:04.797 FORCE_DISTRO=true 00:01:04.797 VAGRANT_BOX_VERSION= 00:01:04.797 EXTRA_VAGRANTFILES= 00:01:04.797 NIC_MODEL=e1000 00:01:04.797 00:01:04.797 mkdir: created directory '/var/jenkins/workspace/ubuntu22-vg-autotest_3/ubuntu2204-libvirt' 00:01:04.797 /var/jenkins/workspace/ubuntu22-vg-autotest_3/ubuntu2204-libvirt /var/jenkins/workspace/ubuntu22-vg-autotest_3 00:01:07.358 Bringing machine 'default' up with 'libvirt' provider... 00:01:07.925 ==> default: Creating image (snapshot of base box volume). 00:01:08.182 ==> default: Creating domain with the following settings... 00:01:08.182 ==> default: -- Name: ubuntu2204-22.04-1711172311-2200_default_1721867250_9d3716bdb0c34eb17b51 00:01:08.182 ==> default: -- Domain type: kvm 00:01:08.182 ==> default: -- Cpus: 10 00:01:08.182 ==> default: -- Feature: acpi 00:01:08.182 ==> default: -- Feature: apic 00:01:08.182 ==> default: -- Feature: pae 00:01:08.182 ==> default: -- Memory: 12288M 00:01:08.182 ==> default: -- Memory Backing: hugepages: 00:01:08.182 ==> default: -- Management MAC: 00:01:08.182 ==> default: -- Loader: 00:01:08.182 ==> default: -- Nvram: 00:01:08.182 ==> default: -- Base box: spdk/ubuntu2204 00:01:08.182 ==> default: -- Storage pool: default 00:01:08.182 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2204-22.04-1711172311-2200_default_1721867250_9d3716bdb0c34eb17b51.img (20G) 00:01:08.182 ==> default: -- Volume Cache: default 00:01:08.182 ==> default: -- Kernel: 00:01:08.182 ==> default: -- Initrd: 00:01:08.182 ==> default: -- Graphics Type: vnc 00:01:08.182 ==> default: -- Graphics Port: -1 00:01:08.182 ==> default: -- Graphics IP: 127.0.0.1 00:01:08.182 ==> default: -- Graphics Password: Not defined 00:01:08.182 ==> default: -- Video Type: cirrus 00:01:08.182 ==> default: -- Video VRAM: 9216 00:01:08.182 ==> default: -- Sound Type: 00:01:08.182 ==> default: -- Keymap: en-us 00:01:08.182 ==> default: -- TPM Path: 00:01:08.182 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:08.182 ==> default: -- Command line args: 00:01:08.182 ==> default: -> value=-device, 00:01:08.183 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:08.183 ==> default: -> value=-drive, 00:01:08.183 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme.img,if=none,id=nvme-0-drive0, 00:01:08.183 ==> default: -> value=-device, 00:01:08.183 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:08.183 ==> default: Creating shared folders metadata... 00:01:08.183 ==> default: Starting domain. 00:01:10.086 ==> default: Waiting for domain to get an IP address... 00:01:22.308 ==> default: Waiting for SSH to become available... 
00:01:22.308 ==> default: Configuring and enabling network interfaces... 00:01:27.576 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:32.907 ==> default: Mounting SSHFS shared folder... 00:01:33.843 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_3/ubuntu2204-libvirt/output => /home/vagrant/spdk_repo/output 00:01:33.843 ==> default: Checking Mount.. 00:01:34.410 ==> default: Folder Successfully Mounted! 00:01:34.410 ==> default: Running provisioner: file... 00:01:34.978 default: ~/.gitconfig => .gitconfig 00:01:35.237 00:01:35.237 SUCCESS! 00:01:35.237 00:01:35.237 cd to /var/jenkins/workspace/ubuntu22-vg-autotest_3/ubuntu2204-libvirt and type "vagrant ssh" to use. 00:01:35.237 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:35.237 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu22-vg-autotest_3/ubuntu2204-libvirt" to destroy all trace of vm. 00:01:35.237 00:01:35.245 [Pipeline] } 00:01:35.263 [Pipeline] // stage 00:01:35.271 [Pipeline] dir 00:01:35.272 Running in /var/jenkins/workspace/ubuntu22-vg-autotest_3/ubuntu2204-libvirt 00:01:35.274 [Pipeline] { 00:01:35.287 [Pipeline] catchError 00:01:35.289 [Pipeline] { 00:01:35.303 [Pipeline] sh 00:01:35.583 + vagrant ssh-config --host vagrant 00:01:35.583 + sed -ne /^Host/,$p 00:01:35.583 + tee ssh_conf 00:01:38.868 Host vagrant 00:01:38.868 HostName 192.168.121.156 00:01:38.868 User vagrant 00:01:38.868 Port 22 00:01:38.868 UserKnownHostsFile /dev/null 00:01:38.868 StrictHostKeyChecking no 00:01:38.868 PasswordAuthentication no 00:01:38.869 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2204/22.04-1711172311-2200/libvirt/ubuntu2204 00:01:38.869 IdentitiesOnly yes 00:01:38.869 LogLevel FATAL 00:01:38.869 ForwardAgent yes 00:01:38.869 ForwardX11 yes 00:01:38.869 00:01:38.881 [Pipeline] withEnv 00:01:38.883 [Pipeline] { 00:01:38.896 [Pipeline] sh 00:01:39.173 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:39.173 source /etc/os-release 00:01:39.173 [[ -e /image.version ]] && img=$(< /image.version) 00:01:39.173 # Minimal, systemd-like check. 00:01:39.173 if [[ -e /.dockerenv ]]; then 00:01:39.173 # Clear garbage from the node's name: 00:01:39.173 # agt-er_autotest_547-896 -> autotest_547-896 00:01:39.173 # $HOSTNAME is the actual container id 00:01:39.173 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:39.173 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:39.173 # We can assume this is a mount from a host where container is running, 00:01:39.173 # so fetch its hostname to easily identify the target swarm worker. 
00:01:39.173 container="$(< /etc/hostname) ($agent)" 00:01:39.173 else 00:01:39.173 # Fallback 00:01:39.173 container=$agent 00:01:39.174 fi 00:01:39.174 fi 00:01:39.174 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:39.174 00:01:39.443 [Pipeline] } 00:01:39.460 [Pipeline] // withEnv 00:01:39.468 [Pipeline] setCustomBuildProperty 00:01:39.482 [Pipeline] stage 00:01:39.485 [Pipeline] { (Tests) 00:01:39.504 [Pipeline] sh 00:01:39.783 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:40.054 [Pipeline] sh 00:01:40.334 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:40.608 [Pipeline] timeout 00:01:40.609 Timeout set to expire in 1 hr 30 min 00:01:40.611 [Pipeline] { 00:01:40.626 [Pipeline] sh 00:01:40.905 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:41.472 HEAD is now at 6e4acbb0d nvmf: update mDNS PRR listener when discovery listener changes 00:01:41.484 [Pipeline] sh 00:01:41.765 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:42.038 [Pipeline] sh 00:01:42.320 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:42.595 [Pipeline] sh 00:01:42.876 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu22-vg-autotest ./autoruner.sh spdk_repo 00:01:43.135 ++ readlink -f spdk_repo 00:01:43.135 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:43.135 + [[ -n /home/vagrant/spdk_repo ]] 00:01:43.135 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:43.135 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:43.135 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:43.135 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:43.135 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:43.135 + [[ ubuntu22-vg-autotest == pkgdep-* ]] 00:01:43.135 + cd /home/vagrant/spdk_repo 00:01:43.135 + source /etc/os-release 00:01:43.135 ++ PRETTY_NAME='Ubuntu 22.04.4 LTS' 00:01:43.135 ++ NAME=Ubuntu 00:01:43.135 ++ VERSION_ID=22.04 00:01:43.135 ++ VERSION='22.04.4 LTS (Jammy Jellyfish)' 00:01:43.135 ++ VERSION_CODENAME=jammy 00:01:43.135 ++ ID=ubuntu 00:01:43.135 ++ ID_LIKE=debian 00:01:43.135 ++ HOME_URL=https://www.ubuntu.com/ 00:01:43.135 ++ SUPPORT_URL=https://help.ubuntu.com/ 00:01:43.135 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 00:01:43.135 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 00:01:43.135 ++ UBUNTU_CODENAME=jammy 00:01:43.135 + uname -a 00:01:43.135 Linux ubuntu2204-cloud-1711172311-2200 5.15.0-101-generic #111-Ubuntu SMP Tue Mar 5 20:16:58 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:01:43.135 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:43.394 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:01:43.394 Hugepages 00:01:43.394 node hugesize free / total 00:01:43.394 node0 1048576kB 0 / 0 00:01:43.394 node0 2048kB 0 / 0 00:01:43.394 00:01:43.394 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:43.394 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:43.394 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:43.394 + rm -f /tmp/spdk-ld-path 00:01:43.394 + source autorun-spdk.conf 00:01:43.653 ++ SPDK_TEST_UNITTEST=1 00:01:43.653 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:43.653 ++ SPDK_TEST_NVME=1 00:01:43.653 ++ SPDK_TEST_BLOCKDEV=1 00:01:43.653 ++ SPDK_RUN_ASAN=1 00:01:43.653 ++ SPDK_RUN_UBSAN=1 00:01:43.653 ++ SPDK_TEST_RAID5=1 00:01:43.653 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:43.653 ++ RUN_NIGHTLY=0 00:01:43.653 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:43.653 + [[ -n '' ]] 00:01:43.653 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:43.653 + for M in /var/spdk/build-*-manifest.txt 00:01:43.653 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:43.653 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:43.653 + for M in /var/spdk/build-*-manifest.txt 00:01:43.653 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:43.653 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:43.653 ++ uname 00:01:43.653 + [[ Linux == \L\i\n\u\x ]] 00:01:43.653 + sudo dmesg -T 00:01:43.653 + sudo dmesg --clear 00:01:43.653 + dmesg_pid=2152 00:01:43.653 + [[ Ubuntu == FreeBSD ]] 00:01:43.653 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:43.653 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:43.653 + sudo dmesg -Tw 00:01:43.653 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:43.653 + [[ -x /usr/src/fio-static/fio ]] 00:01:43.653 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:43.653 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:43.653 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:43.653 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:01:43.653 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:43.653 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:43.653 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:43.653 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:43.653 Test configuration: 00:01:43.653 SPDK_TEST_UNITTEST=1 00:01:43.653 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:43.653 SPDK_TEST_NVME=1 00:01:43.653 SPDK_TEST_BLOCKDEV=1 00:01:43.653 SPDK_RUN_ASAN=1 00:01:43.653 SPDK_RUN_UBSAN=1 00:01:43.653 SPDK_TEST_RAID5=1 00:01:43.653 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:43.653 RUN_NIGHTLY=0 00:28:05 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:43.653 00:28:05 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:43.653 00:28:05 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:43.653 00:28:05 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:43.653 00:28:05 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:43.653 00:28:05 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:43.653 00:28:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:43.653 00:28:05 -- paths/export.sh@5 -- $ export PATH 00:01:43.653 00:28:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:43.653 00:28:05 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:43.653 00:28:05 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:43.653 00:28:05 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721867285.XXXXXX 00:01:43.653 00:28:05 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721867285.C3HQEA 00:01:43.653 00:28:05 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:43.653 00:28:05 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:01:43.653 00:28:05 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:43.653 00:28:05 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:43.653 00:28:05 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o 
/home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:43.653 00:28:05 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:43.654 00:28:05 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:43.654 00:28:05 -- common/autotest_common.sh@10 -- $ set +x 00:01:43.654 00:28:05 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:01:43.654 00:28:05 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:43.654 00:28:05 -- pm/common@17 -- $ local monitor 00:01:43.654 00:28:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:43.654 00:28:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:43.654 00:28:05 -- pm/common@21 -- $ date +%s 00:01:43.654 00:28:05 -- pm/common@25 -- $ sleep 1 00:01:43.654 00:28:05 -- pm/common@21 -- $ date +%s 00:01:43.654 00:28:05 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721867285 00:01:43.654 00:28:05 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721867285 00:01:43.913 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721867285_collect-cpu-load.pm.log 00:01:43.913 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721867285_collect-vmstat.pm.log 00:01:44.849 00:28:06 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:44.849 00:28:06 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:44.849 00:28:06 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:44.849 00:28:06 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:44.849 00:28:06 -- spdk/autobuild.sh@16 -- $ date -u 00:01:44.849 Thu Jul 25 00:28:06 UTC 2024 00:01:44.849 00:28:06 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:44.849 v24.09-pre-312-g6e4acbb0d 00:01:44.849 00:28:06 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:44.849 00:28:06 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:44.849 00:28:06 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:44.849 00:28:06 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:44.849 00:28:06 -- common/autotest_common.sh@10 -- $ set +x 00:01:44.849 ************************************ 00:01:44.849 START TEST asan 00:01:44.849 ************************************ 00:01:44.849 using asan 00:01:44.849 00:28:06 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:01:44.849 00:01:44.849 real 0m0.001s 00:01:44.849 user 0m0.000s 00:01:44.849 sys 0m0.001s 00:01:44.849 00:28:06 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:44.849 00:28:06 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:44.849 ************************************ 00:01:44.849 END TEST asan 00:01:44.849 ************************************ 00:01:44.849 00:28:06 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:44.849 00:28:06 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:44.849 00:28:06 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:44.849 00:28:06 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:44.849 
00:28:06 -- common/autotest_common.sh@10 -- $ set +x 00:01:44.849 ************************************ 00:01:44.849 START TEST ubsan 00:01:44.849 ************************************ 00:01:44.849 using ubsan 00:01:44.849 00:28:06 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:44.849 00:01:44.849 real 0m0.000s 00:01:44.849 user 0m0.000s 00:01:44.849 sys 0m0.000s 00:01:44.849 ************************************ 00:01:44.849 END TEST ubsan 00:01:44.849 00:28:06 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:44.849 00:28:06 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:44.849 ************************************ 00:01:44.849 00:28:06 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:44.849 00:28:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:44.849 00:28:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:44.849 00:28:06 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:44.849 00:28:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:44.849 00:28:06 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:01:44.849 00:28:06 -- spdk/autobuild.sh@58 -- $ unittest_build 00:01:44.849 00:28:06 -- common/autobuild_common.sh@423 -- $ run_test unittest_build _unittest_build 00:01:44.849 00:28:06 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:01:44.849 00:28:06 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:44.849 00:28:06 -- common/autotest_common.sh@10 -- $ set +x 00:01:44.849 ************************************ 00:01:44.849 START TEST unittest_build 00:01:44.849 ************************************ 00:01:44.849 00:28:06 unittest_build -- common/autotest_common.sh@1123 -- $ _unittest_build 00:01:44.849 00:28:06 unittest_build -- common/autobuild_common.sh@414 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --without-shared 00:01:45.108 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:45.108 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:45.367 Using 'verbs' RDMA provider 00:02:04.388 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:19.260 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:19.260 Creating mk/config.mk...done. 00:02:19.260 Creating mk/cc.flags.mk...done. 00:02:19.260 Type 'make' to build. 00:02:19.260 00:28:39 unittest_build -- common/autobuild_common.sh@415 -- $ make -j10 00:02:19.260 make[1]: Nothing to be done for 'all'. 
00:02:34.136 The Meson build system 00:02:34.136 Version: 1.4.0 00:02:34.136 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:34.136 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:34.136 Build type: native build 00:02:34.136 Program cat found: YES (/usr/bin/cat) 00:02:34.136 Project name: DPDK 00:02:34.136 Project version: 24.03.0 00:02:34.136 C compiler for the host machine: cc (gcc 11.4.0 "cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0") 00:02:34.136 C linker for the host machine: cc ld.bfd 2.38 00:02:34.136 Host machine cpu family: x86_64 00:02:34.136 Host machine cpu: x86_64 00:02:34.136 Message: ## Building in Developer Mode ## 00:02:34.136 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:34.136 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:34.136 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:34.136 Program python3 found: YES (/usr/bin/python3) 00:02:34.136 Program cat found: YES (/usr/bin/cat) 00:02:34.136 Compiler for C supports arguments -march=native: YES 00:02:34.136 Checking for size of "void *" : 8 00:02:34.136 Checking for size of "void *" : 8 (cached) 00:02:34.136 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:34.136 Library m found: YES 00:02:34.136 Library numa found: YES 00:02:34.136 Has header "numaif.h" : YES 00:02:34.136 Library fdt found: NO 00:02:34.136 Library execinfo found: NO 00:02:34.136 Has header "execinfo.h" : YES 00:02:34.136 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2 00:02:34.136 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:34.136 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:34.136 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:34.136 Run-time dependency openssl found: YES 3.0.2 00:02:34.136 Run-time dependency libpcap found: NO (tried pkgconfig) 00:02:34.136 Library pcap found: NO 00:02:34.136 Compiler for C supports arguments -Wcast-qual: YES 00:02:34.136 Compiler for C supports arguments -Wdeprecated: YES 00:02:34.136 Compiler for C supports arguments -Wformat: YES 00:02:34.136 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:34.136 Compiler for C supports arguments -Wformat-security: YES 00:02:34.136 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:34.136 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:34.136 Compiler for C supports arguments -Wnested-externs: YES 00:02:34.136 Compiler for C supports arguments -Wold-style-definition: YES 00:02:34.136 Compiler for C supports arguments -Wpointer-arith: YES 00:02:34.136 Compiler for C supports arguments -Wsign-compare: YES 00:02:34.136 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:34.136 Compiler for C supports arguments -Wundef: YES 00:02:34.136 Compiler for C supports arguments -Wwrite-strings: YES 00:02:34.136 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:34.136 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:34.136 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:34.136 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:34.136 Program objdump found: YES (/usr/bin/objdump) 00:02:34.136 Compiler for C supports arguments -mavx512f: YES 00:02:34.136 Checking if "AVX512 checking" compiles: YES 00:02:34.136 Fetching value of define "__SSE4_2__" : 1 00:02:34.136 Fetching value of define "__AES__" : 1 
00:02:34.136 Fetching value of define "__AVX__" : 1 00:02:34.136 Fetching value of define "__AVX2__" : 1 00:02:34.136 Fetching value of define "__AVX512BW__" : 1 00:02:34.136 Fetching value of define "__AVX512CD__" : 1 00:02:34.136 Fetching value of define "__AVX512DQ__" : 1 00:02:34.136 Fetching value of define "__AVX512F__" : 1 00:02:34.136 Fetching value of define "__AVX512VL__" : 1 00:02:34.136 Fetching value of define "__PCLMUL__" : 1 00:02:34.136 Fetching value of define "__RDRND__" : 1 00:02:34.137 Fetching value of define "__RDSEED__" : 1 00:02:34.137 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:34.137 Fetching value of define "__znver1__" : (undefined) 00:02:34.137 Fetching value of define "__znver2__" : (undefined) 00:02:34.137 Fetching value of define "__znver3__" : (undefined) 00:02:34.137 Fetching value of define "__znver4__" : (undefined) 00:02:34.137 Library asan found: YES 00:02:34.137 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:34.137 Message: lib/log: Defining dependency "log" 00:02:34.137 Message: lib/kvargs: Defining dependency "kvargs" 00:02:34.137 Message: lib/telemetry: Defining dependency "telemetry" 00:02:34.137 Library rt found: YES 00:02:34.137 Checking for function "getentropy" : NO 00:02:34.137 Message: lib/eal: Defining dependency "eal" 00:02:34.137 Message: lib/ring: Defining dependency "ring" 00:02:34.137 Message: lib/rcu: Defining dependency "rcu" 00:02:34.137 Message: lib/mempool: Defining dependency "mempool" 00:02:34.137 Message: lib/mbuf: Defining dependency "mbuf" 00:02:34.137 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:34.137 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:34.137 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:34.137 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:34.137 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:34.137 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:34.137 Compiler for C supports arguments -mpclmul: YES 00:02:34.137 Compiler for C supports arguments -maes: YES 00:02:34.137 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:34.137 Compiler for C supports arguments -mavx512bw: YES 00:02:34.137 Compiler for C supports arguments -mavx512dq: YES 00:02:34.137 Compiler for C supports arguments -mavx512vl: YES 00:02:34.137 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:34.137 Compiler for C supports arguments -mavx2: YES 00:02:34.137 Compiler for C supports arguments -mavx: YES 00:02:34.137 Message: lib/net: Defining dependency "net" 00:02:34.137 Message: lib/meter: Defining dependency "meter" 00:02:34.137 Message: lib/ethdev: Defining dependency "ethdev" 00:02:34.137 Message: lib/pci: Defining dependency "pci" 00:02:34.137 Message: lib/cmdline: Defining dependency "cmdline" 00:02:34.137 Message: lib/hash: Defining dependency "hash" 00:02:34.137 Message: lib/timer: Defining dependency "timer" 00:02:34.137 Message: lib/compressdev: Defining dependency "compressdev" 00:02:34.137 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:34.137 Message: lib/dmadev: Defining dependency "dmadev" 00:02:34.137 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:34.137 Message: lib/power: Defining dependency "power" 00:02:34.137 Message: lib/reorder: Defining dependency "reorder" 00:02:34.137 Message: lib/security: Defining dependency "security" 00:02:34.137 Has header "linux/userfaultfd.h" : YES 00:02:34.137 Has header "linux/vduse.h" : YES 00:02:34.137 Message: lib/vhost: 
Defining dependency "vhost" 00:02:34.137 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:34.137 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:34.137 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:34.137 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:34.137 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:34.137 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:34.137 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:34.137 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:34.137 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:34.137 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:34.137 Program doxygen found: YES (/usr/bin/doxygen) 00:02:34.137 Configuring doxy-api-html.conf using configuration 00:02:34.137 Configuring doxy-api-man.conf using configuration 00:02:34.137 Program mandb found: YES (/usr/bin/mandb) 00:02:34.137 Program sphinx-build found: NO 00:02:34.137 Configuring rte_build_config.h using configuration 00:02:34.137 Message: 00:02:34.137 ================= 00:02:34.137 Applications Enabled 00:02:34.137 ================= 00:02:34.137 00:02:34.137 apps: 00:02:34.137 00:02:34.137 00:02:34.137 Message: 00:02:34.137 ================= 00:02:34.137 Libraries Enabled 00:02:34.137 ================= 00:02:34.137 00:02:34.137 libs: 00:02:34.137 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:34.137 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:34.137 cryptodev, dmadev, power, reorder, security, vhost, 00:02:34.137 00:02:34.137 Message: 00:02:34.137 =============== 00:02:34.137 Drivers Enabled 00:02:34.137 =============== 00:02:34.137 00:02:34.137 common: 00:02:34.137 00:02:34.137 bus: 00:02:34.137 pci, vdev, 00:02:34.137 mempool: 00:02:34.137 ring, 00:02:34.137 dma: 00:02:34.137 00:02:34.137 net: 00:02:34.137 00:02:34.137 crypto: 00:02:34.137 00:02:34.137 compress: 00:02:34.137 00:02:34.137 vdpa: 00:02:34.137 00:02:34.137 00:02:34.137 Message: 00:02:34.137 ================= 00:02:34.137 Content Skipped 00:02:34.137 ================= 00:02:34.137 00:02:34.137 apps: 00:02:34.137 dumpcap: explicitly disabled via build config 00:02:34.137 graph: explicitly disabled via build config 00:02:34.137 pdump: explicitly disabled via build config 00:02:34.137 proc-info: explicitly disabled via build config 00:02:34.137 test-acl: explicitly disabled via build config 00:02:34.137 test-bbdev: explicitly disabled via build config 00:02:34.137 test-cmdline: explicitly disabled via build config 00:02:34.137 test-compress-perf: explicitly disabled via build config 00:02:34.137 test-crypto-perf: explicitly disabled via build config 00:02:34.137 test-dma-perf: explicitly disabled via build config 00:02:34.137 test-eventdev: explicitly disabled via build config 00:02:34.137 test-fib: explicitly disabled via build config 00:02:34.137 test-flow-perf: explicitly disabled via build config 00:02:34.137 test-gpudev: explicitly disabled via build config 00:02:34.137 test-mldev: explicitly disabled via build config 00:02:34.137 test-pipeline: explicitly disabled via build config 00:02:34.137 test-pmd: explicitly disabled via build config 00:02:34.137 test-regex: explicitly disabled via build config 00:02:34.137 test-sad: explicitly disabled via build config 00:02:34.137 test-security-perf: explicitly disabled via build config 
00:02:34.137 00:02:34.137 libs: 00:02:34.137 argparse: explicitly disabled via build config 00:02:34.137 metrics: explicitly disabled via build config 00:02:34.137 acl: explicitly disabled via build config 00:02:34.137 bbdev: explicitly disabled via build config 00:02:34.137 bitratestats: explicitly disabled via build config 00:02:34.137 bpf: explicitly disabled via build config 00:02:34.137 cfgfile: explicitly disabled via build config 00:02:34.137 distributor: explicitly disabled via build config 00:02:34.137 efd: explicitly disabled via build config 00:02:34.137 eventdev: explicitly disabled via build config 00:02:34.137 dispatcher: explicitly disabled via build config 00:02:34.137 gpudev: explicitly disabled via build config 00:02:34.137 gro: explicitly disabled via build config 00:02:34.137 gso: explicitly disabled via build config 00:02:34.137 ip_frag: explicitly disabled via build config 00:02:34.137 jobstats: explicitly disabled via build config 00:02:34.137 latencystats: explicitly disabled via build config 00:02:34.137 lpm: explicitly disabled via build config 00:02:34.137 member: explicitly disabled via build config 00:02:34.137 pcapng: explicitly disabled via build config 00:02:34.137 rawdev: explicitly disabled via build config 00:02:34.137 regexdev: explicitly disabled via build config 00:02:34.137 mldev: explicitly disabled via build config 00:02:34.137 rib: explicitly disabled via build config 00:02:34.137 sched: explicitly disabled via build config 00:02:34.137 stack: explicitly disabled via build config 00:02:34.137 ipsec: explicitly disabled via build config 00:02:34.137 pdcp: explicitly disabled via build config 00:02:34.137 fib: explicitly disabled via build config 00:02:34.137 port: explicitly disabled via build config 00:02:34.137 pdump: explicitly disabled via build config 00:02:34.137 table: explicitly disabled via build config 00:02:34.137 pipeline: explicitly disabled via build config 00:02:34.137 graph: explicitly disabled via build config 00:02:34.137 node: explicitly disabled via build config 00:02:34.137 00:02:34.137 drivers: 00:02:34.137 common/cpt: not in enabled drivers build config 00:02:34.137 common/dpaax: not in enabled drivers build config 00:02:34.137 common/iavf: not in enabled drivers build config 00:02:34.137 common/idpf: not in enabled drivers build config 00:02:34.137 common/ionic: not in enabled drivers build config 00:02:34.137 common/mvep: not in enabled drivers build config 00:02:34.137 common/octeontx: not in enabled drivers build config 00:02:34.137 bus/auxiliary: not in enabled drivers build config 00:02:34.137 bus/cdx: not in enabled drivers build config 00:02:34.137 bus/dpaa: not in enabled drivers build config 00:02:34.137 bus/fslmc: not in enabled drivers build config 00:02:34.137 bus/ifpga: not in enabled drivers build config 00:02:34.137 bus/platform: not in enabled drivers build config 00:02:34.137 bus/uacce: not in enabled drivers build config 00:02:34.137 bus/vmbus: not in enabled drivers build config 00:02:34.137 common/cnxk: not in enabled drivers build config 00:02:34.137 common/mlx5: not in enabled drivers build config 00:02:34.137 common/nfp: not in enabled drivers build config 00:02:34.137 common/nitrox: not in enabled drivers build config 00:02:34.137 common/qat: not in enabled drivers build config 00:02:34.137 common/sfc_efx: not in enabled drivers build config 00:02:34.137 mempool/bucket: not in enabled drivers build config 00:02:34.137 mempool/cnxk: not in enabled drivers build config 00:02:34.137 mempool/dpaa: not in 
enabled drivers build config 00:02:34.137 mempool/dpaa2: not in enabled drivers build config 00:02:34.137 mempool/octeontx: not in enabled drivers build config 00:02:34.137 mempool/stack: not in enabled drivers build config 00:02:34.137 dma/cnxk: not in enabled drivers build config 00:02:34.137 dma/dpaa: not in enabled drivers build config 00:02:34.138 dma/dpaa2: not in enabled drivers build config 00:02:34.138 dma/hisilicon: not in enabled drivers build config 00:02:34.138 dma/idxd: not in enabled drivers build config 00:02:34.138 dma/ioat: not in enabled drivers build config 00:02:34.138 dma/skeleton: not in enabled drivers build config 00:02:34.138 net/af_packet: not in enabled drivers build config 00:02:34.138 net/af_xdp: not in enabled drivers build config 00:02:34.138 net/ark: not in enabled drivers build config 00:02:34.138 net/atlantic: not in enabled drivers build config 00:02:34.138 net/avp: not in enabled drivers build config 00:02:34.138 net/axgbe: not in enabled drivers build config 00:02:34.138 net/bnx2x: not in enabled drivers build config 00:02:34.138 net/bnxt: not in enabled drivers build config 00:02:34.138 net/bonding: not in enabled drivers build config 00:02:34.138 net/cnxk: not in enabled drivers build config 00:02:34.138 net/cpfl: not in enabled drivers build config 00:02:34.138 net/cxgbe: not in enabled drivers build config 00:02:34.138 net/dpaa: not in enabled drivers build config 00:02:34.138 net/dpaa2: not in enabled drivers build config 00:02:34.138 net/e1000: not in enabled drivers build config 00:02:34.138 net/ena: not in enabled drivers build config 00:02:34.138 net/enetc: not in enabled drivers build config 00:02:34.138 net/enetfec: not in enabled drivers build config 00:02:34.138 net/enic: not in enabled drivers build config 00:02:34.138 net/failsafe: not in enabled drivers build config 00:02:34.138 net/fm10k: not in enabled drivers build config 00:02:34.138 net/gve: not in enabled drivers build config 00:02:34.138 net/hinic: not in enabled drivers build config 00:02:34.138 net/hns3: not in enabled drivers build config 00:02:34.138 net/i40e: not in enabled drivers build config 00:02:34.138 net/iavf: not in enabled drivers build config 00:02:34.138 net/ice: not in enabled drivers build config 00:02:34.138 net/idpf: not in enabled drivers build config 00:02:34.138 net/igc: not in enabled drivers build config 00:02:34.138 net/ionic: not in enabled drivers build config 00:02:34.138 net/ipn3ke: not in enabled drivers build config 00:02:34.138 net/ixgbe: not in enabled drivers build config 00:02:34.138 net/mana: not in enabled drivers build config 00:02:34.138 net/memif: not in enabled drivers build config 00:02:34.138 net/mlx4: not in enabled drivers build config 00:02:34.138 net/mlx5: not in enabled drivers build config 00:02:34.138 net/mvneta: not in enabled drivers build config 00:02:34.138 net/mvpp2: not in enabled drivers build config 00:02:34.138 net/netvsc: not in enabled drivers build config 00:02:34.138 net/nfb: not in enabled drivers build config 00:02:34.138 net/nfp: not in enabled drivers build config 00:02:34.138 net/ngbe: not in enabled drivers build config 00:02:34.138 net/null: not in enabled drivers build config 00:02:34.138 net/octeontx: not in enabled drivers build config 00:02:34.138 net/octeon_ep: not in enabled drivers build config 00:02:34.138 net/pcap: not in enabled drivers build config 00:02:34.138 net/pfe: not in enabled drivers build config 00:02:34.138 net/qede: not in enabled drivers build config 00:02:34.138 net/ring: not in 
enabled drivers build config 00:02:34.138 net/sfc: not in enabled drivers build config 00:02:34.138 net/softnic: not in enabled drivers build config 00:02:34.138 net/tap: not in enabled drivers build config 00:02:34.138 net/thunderx: not in enabled drivers build config 00:02:34.138 net/txgbe: not in enabled drivers build config 00:02:34.138 net/vdev_netvsc: not in enabled drivers build config 00:02:34.138 net/vhost: not in enabled drivers build config 00:02:34.138 net/virtio: not in enabled drivers build config 00:02:34.138 net/vmxnet3: not in enabled drivers build config 00:02:34.138 raw/*: missing internal dependency, "rawdev" 00:02:34.138 crypto/armv8: not in enabled drivers build config 00:02:34.138 crypto/bcmfs: not in enabled drivers build config 00:02:34.138 crypto/caam_jr: not in enabled drivers build config 00:02:34.138 crypto/ccp: not in enabled drivers build config 00:02:34.138 crypto/cnxk: not in enabled drivers build config 00:02:34.138 crypto/dpaa_sec: not in enabled drivers build config 00:02:34.138 crypto/dpaa2_sec: not in enabled drivers build config 00:02:34.138 crypto/ipsec_mb: not in enabled drivers build config 00:02:34.138 crypto/mlx5: not in enabled drivers build config 00:02:34.138 crypto/mvsam: not in enabled drivers build config 00:02:34.138 crypto/nitrox: not in enabled drivers build config 00:02:34.138 crypto/null: not in enabled drivers build config 00:02:34.138 crypto/octeontx: not in enabled drivers build config 00:02:34.138 crypto/openssl: not in enabled drivers build config 00:02:34.138 crypto/scheduler: not in enabled drivers build config 00:02:34.138 crypto/uadk: not in enabled drivers build config 00:02:34.138 crypto/virtio: not in enabled drivers build config 00:02:34.138 compress/isal: not in enabled drivers build config 00:02:34.138 compress/mlx5: not in enabled drivers build config 00:02:34.138 compress/nitrox: not in enabled drivers build config 00:02:34.138 compress/octeontx: not in enabled drivers build config 00:02:34.138 compress/zlib: not in enabled drivers build config 00:02:34.138 regex/*: missing internal dependency, "regexdev" 00:02:34.138 ml/*: missing internal dependency, "mldev" 00:02:34.138 vdpa/ifc: not in enabled drivers build config 00:02:34.138 vdpa/mlx5: not in enabled drivers build config 00:02:34.138 vdpa/nfp: not in enabled drivers build config 00:02:34.138 vdpa/sfc: not in enabled drivers build config 00:02:34.138 event/*: missing internal dependency, "eventdev" 00:02:34.138 baseband/*: missing internal dependency, "bbdev" 00:02:34.138 gpu/*: missing internal dependency, "gpudev" 00:02:34.138 00:02:34.138 00:02:34.138 Build targets in project: 85 00:02:34.138 00:02:34.138 DPDK 24.03.0 00:02:34.138 00:02:34.138 User defined options 00:02:34.138 buildtype : debug 00:02:34.138 default_library : static 00:02:34.138 libdir : lib 00:02:34.138 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:34.138 b_sanitize : address 00:02:34.138 c_args : -Wno-stringop-overflow -fcommon -fPIC -Werror 00:02:34.138 c_link_args : 00:02:34.138 cpu_instruction_set: native 00:02:34.138 disable_apps : test-eventdev,test-compress-perf,pdump,test-crypto-perf,test-pmd,test-flow-perf,test-acl,test-sad,graph,proc-info,test-bbdev,test-mldev,test-gpudev,test-fib,test-cmdline,test-security-perf,dumpcap,test-pipeline,test,test-regex,test-dma-perf 00:02:34.138 disable_libs : 
node,lpm,acl,pdump,cfgfile,efd,latencystats,distributor,bbdev,eventdev,port,bitratestats,pdcp,bpf,argparse,graph,member,mldev,stack,pcapng,gro,fib,table,regexdev,dispatcher,sched,ipsec,metrics,gso,jobstats,pipeline,rib,ip_frag,rawdev,gpudev 00:02:34.138 enable_docs : false 00:02:34.138 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:34.138 enable_kmods : false 00:02:34.138 max_lcores : 128 00:02:34.138 tests : false 00:02:34.138 00:02:34.138 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:34.138 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:34.138 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:34.138 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:34.138 [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:34.138 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:34.138 [5/268] Linking static target lib/librte_kvargs.a 00:02:34.397 [6/268] Linking static target lib/librte_log.a 00:02:34.397 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:34.397 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:34.681 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:34.681 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:34.681 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:34.681 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:34.681 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:34.958 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:34.958 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:34.958 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:34.958 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:34.958 [18/268] Linking static target lib/librte_telemetry.a 00:02:35.216 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:35.216 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:35.216 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:35.216 [22/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.216 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:35.216 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:35.216 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:35.475 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:35.475 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:35.475 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:35.475 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:35.734 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:35.734 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:35.734 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:35.734 [33/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:35.734 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:35.734 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:35.734 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:35.993 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:35.993 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:35.993 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:35.993 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:35.993 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:35.993 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:36.252 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:36.252 [44/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.252 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:36.252 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:36.252 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:36.252 [48/268] Linking target lib/librte_log.so.24.1 00:02:36.252 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:36.252 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:36.252 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:36.252 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:36.511 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:36.511 [54/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:36.511 [55/268] Linking target lib/librte_kvargs.so.24.1 00:02:36.511 [56/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.511 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:36.511 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:36.511 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:36.511 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:36.511 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:36.511 [62/268] Linking target lib/librte_telemetry.so.24.1 00:02:36.511 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:36.769 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:36.769 [65/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:36.769 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:36.769 [67/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:36.769 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:36.769 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:37.028 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:37.028 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:37.028 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:37.028 [73/268] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:37.028 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:37.028 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:37.028 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:37.028 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:37.028 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:37.028 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:37.028 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:37.287 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:37.287 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:37.287 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:37.287 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:37.287 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:37.287 [86/268] Linking static target lib/librte_eal.a 00:02:37.287 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:37.287 [88/268] Linking static target lib/librte_ring.a 00:02:37.546 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:37.546 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:37.546 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:37.546 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:37.546 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:37.546 [94/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:37.546 [95/268] Linking static target lib/librte_mempool.a 00:02:37.546 [96/268] Linking static target lib/librte_rcu.a 00:02:37.805 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:37.805 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:37.805 [99/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:37.805 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:37.805 [101/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.064 [102/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.064 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:38.064 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:38.064 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:38.323 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:38.323 [107/268] Linking static target lib/librte_meter.a 00:02:38.323 [108/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:38.323 [109/268] Linking static target lib/librte_net.a 00:02:38.581 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:38.581 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:38.581 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:38.581 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:38.581 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.839 [115/268] Generating lib/net.sym_chk with 
a custom command (wrapped by meson to capture output) 00:02:38.839 [116/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.839 [117/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:38.839 [118/268] Linking static target lib/librte_mbuf.a 00:02:38.839 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:39.098 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:39.098 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:39.356 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:39.356 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:39.356 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:39.356 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:39.356 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:39.357 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:39.615 [128/268] Linking static target lib/librte_pci.a 00:02:39.615 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:39.615 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:39.615 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:39.615 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:39.615 [133/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.874 [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.874 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:39.874 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:39.874 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:39.874 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:39.874 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:39.874 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:39.874 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:39.874 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:39.874 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:39.874 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:40.133 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:40.133 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:40.133 [147/268] Linking static target lib/librte_cmdline.a 00:02:40.133 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:40.402 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:40.402 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:40.402 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:40.402 [152/268] Linking static target lib/librte_timer.a 00:02:40.402 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:40.402 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:40.675 [155/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:40.675 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:40.675 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:40.675 [158/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:40.675 [159/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:40.936 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:40.936 [161/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.936 [162/268] Linking static target lib/librte_compressdev.a 00:02:40.936 [163/268] Linking static target lib/librte_ethdev.a 00:02:40.936 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:40.936 [165/268] Linking static target lib/librte_hash.a 00:02:40.936 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:40.936 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:40.936 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:41.198 [169/268] Linking static target lib/librte_dmadev.a 00:02:41.198 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:41.198 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:41.198 [172/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.198 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:41.456 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:41.456 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.456 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:41.714 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:41.714 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.714 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:41.714 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:41.714 [181/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.714 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:41.972 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:41.972 [184/268] Linking static target lib/librte_power.a 00:02:42.230 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:42.230 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:42.230 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:42.230 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:42.230 [189/268] Linking static target lib/librte_reorder.a 00:02:42.230 [190/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:42.230 [191/268] Linking static target lib/librte_cryptodev.a 00:02:42.230 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:42.230 [193/268] Linking static target lib/librte_security.a 00:02:42.489 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.489 [195/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:42.748 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.007 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:43.007 [198/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.007 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:43.007 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:43.007 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:43.266 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:43.266 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:43.266 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:43.266 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:43.266 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:43.524 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:43.524 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:43.524 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:43.524 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:43.783 [211/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:43.783 [212/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:43.783 [213/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:43.783 [214/268] Linking static target drivers/librte_bus_pci.a 00:02:43.783 [215/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:43.783 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:43.783 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:43.783 [218/268] Linking static target drivers/librte_bus_vdev.a 00:02:44.042 [219/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.042 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:44.042 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:44.042 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.301 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:44.301 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:44.301 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:44.301 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:44.301 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.677 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:47.073 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.073 [230/268] Linking target lib/librte_eal.so.24.1 00:02:47.331 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:47.331 [232/268] Linking target 
lib/librte_dmadev.so.24.1 00:02:47.331 [233/268] Linking target lib/librte_pci.so.24.1 00:02:47.331 [234/268] Linking target lib/librte_meter.so.24.1 00:02:47.331 [235/268] Linking target lib/librte_ring.so.24.1 00:02:47.331 [236/268] Linking target lib/librte_timer.so.24.1 00:02:47.331 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:47.590 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:47.591 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:47.591 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:47.591 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:47.591 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:47.591 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:47.591 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:47.591 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:47.849 [246/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.849 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:47.849 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:47.849 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:47.849 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:48.107 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:48.107 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:48.107 [253/268] Linking target lib/librte_compressdev.so.24.1 00:02:48.107 [254/268] Linking target lib/librte_net.so.24.1 00:02:48.107 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:48.365 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:48.365 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:48.365 [258/268] Linking target lib/librte_hash.so.24.1 00:02:48.365 [259/268] Linking target lib/librte_security.so.24.1 00:02:48.365 [260/268] Linking target lib/librte_cmdline.so.24.1 00:02:48.365 [261/268] Linking target lib/librte_ethdev.so.24.1 00:02:48.365 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:48.623 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:48.623 [264/268] Linking target lib/librte_power.so.24.1 00:02:50.524 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:50.524 [266/268] Linking static target lib/librte_vhost.a 00:02:52.426 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.426 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:52.426 INFO: autodetecting backend as ninja 00:02:52.426 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:53.361 CC lib/ut/ut.o 00:02:53.361 CC lib/log/log.o 00:02:53.361 CC lib/log/log_flags.o 00:02:53.361 CC lib/log/log_deprecated.o 00:02:53.361 CC lib/ut_mock/mock.o 00:02:53.361 LIB libspdk_ut_mock.a 00:02:53.361 LIB libspdk_ut.a 00:02:53.618 LIB libspdk_log.a 00:02:53.618 CC lib/util/base64.o 00:02:53.618 CC lib/ioat/ioat.o 00:02:53.618 CC lib/util/bit_array.o 00:02:53.618 CC lib/util/cpuset.o 00:02:53.618 CC lib/util/crc16.o 00:02:53.618 CC 
lib/dma/dma.o 00:02:53.618 CC lib/util/crc32.o 00:02:53.618 CXX lib/trace_parser/trace.o 00:02:53.618 CC lib/util/crc32c.o 00:02:53.877 CC lib/vfio_user/host/vfio_user_pci.o 00:02:53.877 CC lib/util/crc32_ieee.o 00:02:53.877 LIB libspdk_dma.a 00:02:53.877 CC lib/util/crc64.o 00:02:53.877 CC lib/util/dif.o 00:02:53.877 CC lib/util/fd.o 00:02:54.135 CC lib/util/fd_group.o 00:02:54.135 CC lib/vfio_user/host/vfio_user.o 00:02:54.135 CC lib/util/file.o 00:02:54.135 CC lib/util/hexlify.o 00:02:54.135 CC lib/util/iov.o 00:02:54.135 CC lib/util/math.o 00:02:54.135 LIB libspdk_ioat.a 00:02:54.135 CC lib/util/net.o 00:02:54.135 CC lib/util/pipe.o 00:02:54.135 CC lib/util/strerror_tls.o 00:02:54.135 CC lib/util/string.o 00:02:54.392 CC lib/util/uuid.o 00:02:54.392 CC lib/util/xor.o 00:02:54.392 LIB libspdk_vfio_user.a 00:02:54.392 CC lib/util/zipf.o 00:02:54.650 LIB libspdk_util.a 00:02:55.217 LIB libspdk_trace_parser.a 00:02:55.217 CC lib/rdma_provider/common.o 00:02:55.217 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:55.217 CC lib/json/json_parse.o 00:02:55.217 CC lib/json/json_util.o 00:02:55.217 CC lib/vmd/vmd.o 00:02:55.217 CC lib/rdma_utils/rdma_utils.o 00:02:55.217 CC lib/conf/conf.o 00:02:55.217 CC lib/env_dpdk/env.o 00:02:55.217 CC lib/idxd/idxd.o 00:02:55.217 CC lib/idxd/idxd_user.o 00:02:55.217 CC lib/env_dpdk/memory.o 00:02:55.476 CC lib/env_dpdk/pci.o 00:02:55.476 LIB libspdk_rdma_provider.a 00:02:55.476 LIB libspdk_rdma_utils.a 00:02:55.476 CC lib/env_dpdk/init.o 00:02:55.476 LIB libspdk_conf.a 00:02:55.476 CC lib/env_dpdk/threads.o 00:02:55.476 CC lib/env_dpdk/pci_ioat.o 00:02:55.476 CC lib/json/json_write.o 00:02:55.476 CC lib/env_dpdk/pci_virtio.o 00:02:55.735 CC lib/env_dpdk/pci_vmd.o 00:02:55.735 CC lib/env_dpdk/pci_idxd.o 00:02:55.735 CC lib/vmd/led.o 00:02:55.735 CC lib/env_dpdk/pci_event.o 00:02:55.735 CC lib/env_dpdk/sigbus_handler.o 00:02:55.735 CC lib/env_dpdk/pci_dpdk.o 00:02:56.002 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:56.002 LIB libspdk_json.a 00:02:56.002 LIB libspdk_idxd.a 00:02:56.002 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:56.276 LIB libspdk_vmd.a 00:02:56.276 CC lib/jsonrpc/jsonrpc_server.o 00:02:56.276 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:56.276 CC lib/jsonrpc/jsonrpc_client.o 00:02:56.276 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:56.534 LIB libspdk_jsonrpc.a 00:02:56.793 CC lib/rpc/rpc.o 00:02:56.793 LIB libspdk_env_dpdk.a 00:02:57.051 LIB libspdk_rpc.a 00:02:57.308 CC lib/trace/trace_flags.o 00:02:57.308 CC lib/trace/trace.o 00:02:57.308 CC lib/trace/trace_rpc.o 00:02:57.308 CC lib/notify/notify.o 00:02:57.308 CC lib/notify/notify_rpc.o 00:02:57.308 CC lib/keyring/keyring.o 00:02:57.308 CC lib/keyring/keyring_rpc.o 00:02:57.567 LIB libspdk_notify.a 00:02:57.567 LIB libspdk_trace.a 00:02:57.567 LIB libspdk_keyring.a 00:02:57.824 CC lib/thread/iobuf.o 00:02:57.824 CC lib/thread/thread.o 00:02:57.824 CC lib/sock/sock_rpc.o 00:02:57.824 CC lib/sock/sock.o 00:02:58.390 LIB libspdk_sock.a 00:02:58.647 CC lib/nvme/nvme_fabric.o 00:02:58.647 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:58.647 CC lib/nvme/nvme_ctrlr.o 00:02:58.647 CC lib/nvme/nvme_ns_cmd.o 00:02:58.647 CC lib/nvme/nvme_ns.o 00:02:58.647 CC lib/nvme/nvme_qpair.o 00:02:58.647 CC lib/nvme/nvme_pcie_common.o 00:02:58.647 CC lib/nvme/nvme_pcie.o 00:02:58.647 CC lib/nvme/nvme.o 00:02:59.213 CC lib/nvme/nvme_quirks.o 00:02:59.213 CC lib/nvme/nvme_transport.o 00:02:59.213 CC lib/nvme/nvme_discovery.o 00:02:59.213 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:59.473 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:59.473 CC 
lib/nvme/nvme_tcp.o 00:02:59.473 CC lib/nvme/nvme_opal.o 00:02:59.473 LIB libspdk_thread.a 00:02:59.731 CC lib/nvme/nvme_poll_group.o 00:02:59.731 CC lib/nvme/nvme_io_msg.o 00:02:59.731 CC lib/nvme/nvme_zns.o 00:02:59.731 CC lib/nvme/nvme_stubs.o 00:02:59.731 CC lib/nvme/nvme_auth.o 00:02:59.731 CC lib/nvme/nvme_cuse.o 00:02:59.989 CC lib/nvme/nvme_rdma.o 00:03:00.247 CC lib/accel/accel.o 00:03:00.247 CC lib/accel/accel_rpc.o 00:03:00.247 CC lib/blob/blobstore.o 00:03:00.247 CC lib/init/json_config.o 00:03:00.247 CC lib/virtio/virtio.o 00:03:00.505 CC lib/accel/accel_sw.o 00:03:00.505 CC lib/blob/request.o 00:03:00.505 CC lib/init/subsystem.o 00:03:00.793 CC lib/blob/zeroes.o 00:03:00.793 CC lib/virtio/virtio_vhost_user.o 00:03:00.793 CC lib/init/subsystem_rpc.o 00:03:00.793 CC lib/blob/blob_bs_dev.o 00:03:00.793 CC lib/init/rpc.o 00:03:00.793 CC lib/virtio/virtio_vfio_user.o 00:03:00.793 CC lib/virtio/virtio_pci.o 00:03:01.051 LIB libspdk_init.a 00:03:01.308 LIB libspdk_virtio.a 00:03:01.308 LIB libspdk_nvme.a 00:03:01.308 CC lib/event/reactor.o 00:03:01.308 CC lib/event/app.o 00:03:01.308 CC lib/event/log_rpc.o 00:03:01.308 CC lib/event/app_rpc.o 00:03:01.308 CC lib/event/scheduler_static.o 00:03:01.308 LIB libspdk_accel.a 00:03:01.874 CC lib/bdev/bdev.o 00:03:01.874 CC lib/bdev/bdev_zone.o 00:03:01.874 CC lib/bdev/bdev_rpc.o 00:03:01.874 CC lib/bdev/part.o 00:03:01.874 CC lib/bdev/scsi_nvme.o 00:03:01.874 LIB libspdk_event.a 00:03:04.404 LIB libspdk_blob.a 00:03:04.404 CC lib/lvol/lvol.o 00:03:04.404 CC lib/blobfs/blobfs.o 00:03:04.404 CC lib/blobfs/tree.o 00:03:04.663 LIB libspdk_bdev.a 00:03:04.921 CC lib/nvmf/ctrlr.o 00:03:04.921 CC lib/nvmf/ctrlr_discovery.o 00:03:04.921 CC lib/nvmf/ctrlr_bdev.o 00:03:04.921 CC lib/nvmf/subsystem.o 00:03:04.921 CC lib/nbd/nbd.o 00:03:04.921 CC lib/nvmf/nvmf.o 00:03:04.921 CC lib/scsi/dev.o 00:03:04.922 CC lib/ftl/ftl_core.o 00:03:05.181 LIB libspdk_blobfs.a 00:03:05.181 CC lib/ftl/ftl_init.o 00:03:05.181 CC lib/scsi/lun.o 00:03:05.439 LIB libspdk_lvol.a 00:03:05.439 CC lib/ftl/ftl_layout.o 00:03:05.439 CC lib/ftl/ftl_debug.o 00:03:05.439 CC lib/ftl/ftl_io.o 00:03:05.439 CC lib/ftl/ftl_sb.o 00:03:05.439 CC lib/nbd/nbd_rpc.o 00:03:05.697 CC lib/scsi/port.o 00:03:05.697 CC lib/ftl/ftl_l2p.o 00:03:05.697 CC lib/ftl/ftl_l2p_flat.o 00:03:05.697 CC lib/ftl/ftl_nv_cache.o 00:03:05.697 LIB libspdk_nbd.a 00:03:05.697 CC lib/scsi/scsi.o 00:03:05.697 CC lib/ftl/ftl_band.o 00:03:05.975 CC lib/ftl/ftl_band_ops.o 00:03:05.975 CC lib/ftl/ftl_writer.o 00:03:05.975 CC lib/scsi/scsi_bdev.o 00:03:05.975 CC lib/ftl/ftl_rq.o 00:03:05.975 CC lib/ftl/ftl_reloc.o 00:03:05.975 CC lib/nvmf/nvmf_rpc.o 00:03:06.239 CC lib/nvmf/transport.o 00:03:06.239 CC lib/nvmf/tcp.o 00:03:06.239 CC lib/ftl/ftl_l2p_cache.o 00:03:06.239 CC lib/nvmf/stubs.o 00:03:06.239 CC lib/nvmf/mdns_server.o 00:03:06.498 CC lib/scsi/scsi_pr.o 00:03:06.498 CC lib/scsi/scsi_rpc.o 00:03:06.498 CC lib/ftl/ftl_p2l.o 00:03:06.756 CC lib/nvmf/rdma.o 00:03:06.756 CC lib/scsi/task.o 00:03:06.756 CC lib/nvmf/auth.o 00:03:06.756 CC lib/ftl/mngt/ftl_mngt.o 00:03:06.756 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:07.015 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:07.015 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:07.015 LIB libspdk_scsi.a 00:03:07.015 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:07.015 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:07.015 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:07.015 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:07.015 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:07.381 CC lib/vhost/vhost.o 00:03:07.381 CC lib/iscsi/conn.o 
00:03:07.381 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:07.381 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:07.381 CC lib/vhost/vhost_rpc.o 00:03:07.381 CC lib/vhost/vhost_scsi.o 00:03:07.381 CC lib/vhost/vhost_blk.o 00:03:07.381 CC lib/vhost/rte_vhost_user.o 00:03:07.640 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:07.640 CC lib/iscsi/init_grp.o 00:03:07.640 CC lib/iscsi/iscsi.o 00:03:07.899 CC lib/iscsi/md5.o 00:03:07.899 CC lib/iscsi/param.o 00:03:07.899 CC lib/iscsi/portal_grp.o 00:03:07.899 CC lib/iscsi/tgt_node.o 00:03:07.899 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:07.899 CC lib/iscsi/iscsi_subsystem.o 00:03:08.157 CC lib/iscsi/iscsi_rpc.o 00:03:08.157 CC lib/iscsi/task.o 00:03:08.157 CC lib/ftl/utils/ftl_conf.o 00:03:08.157 CC lib/ftl/utils/ftl_md.o 00:03:08.416 CC lib/ftl/utils/ftl_mempool.o 00:03:08.416 CC lib/ftl/utils/ftl_bitmap.o 00:03:08.416 CC lib/ftl/utils/ftl_property.o 00:03:08.416 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:08.416 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:08.416 LIB libspdk_vhost.a 00:03:08.416 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:08.416 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:08.416 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:08.675 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:08.675 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:08.675 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:08.675 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:08.675 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:08.675 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:08.675 CC lib/ftl/base/ftl_base_dev.o 00:03:08.675 CC lib/ftl/base/ftl_base_bdev.o 00:03:08.934 CC lib/ftl/ftl_trace.o 00:03:08.934 LIB libspdk_nvmf.a 00:03:09.193 LIB libspdk_ftl.a 00:03:09.193 LIB libspdk_iscsi.a 00:03:09.761 CC module/env_dpdk/env_dpdk_rpc.o 00:03:09.761 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:09.761 CC module/accel/error/accel_error.o 00:03:09.761 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:09.761 CC module/accel/ioat/accel_ioat.o 00:03:09.761 CC module/sock/posix/posix.o 00:03:09.761 CC module/accel/iaa/accel_iaa.o 00:03:09.761 CC module/keyring/file/keyring.o 00:03:09.761 CC module/accel/dsa/accel_dsa.o 00:03:09.761 CC module/blob/bdev/blob_bdev.o 00:03:09.761 LIB libspdk_env_dpdk_rpc.a 00:03:09.761 CC module/accel/iaa/accel_iaa_rpc.o 00:03:10.021 LIB libspdk_scheduler_dpdk_governor.a 00:03:10.021 CC module/keyring/file/keyring_rpc.o 00:03:10.021 CC module/accel/error/accel_error_rpc.o 00:03:10.021 CC module/accel/dsa/accel_dsa_rpc.o 00:03:10.021 LIB libspdk_scheduler_dynamic.a 00:03:10.021 CC module/accel/ioat/accel_ioat_rpc.o 00:03:10.021 LIB libspdk_accel_iaa.a 00:03:10.021 LIB libspdk_accel_error.a 00:03:10.021 LIB libspdk_accel_dsa.a 00:03:10.021 LIB libspdk_blob_bdev.a 00:03:10.279 CC module/keyring/linux/keyring.o 00:03:10.279 CC module/keyring/linux/keyring_rpc.o 00:03:10.279 CC module/scheduler/gscheduler/gscheduler.o 00:03:10.279 LIB libspdk_accel_ioat.a 00:03:10.279 LIB libspdk_keyring_file.a 00:03:10.279 LIB libspdk_keyring_linux.a 00:03:10.279 LIB libspdk_scheduler_gscheduler.a 00:03:10.279 CC module/bdev/delay/vbdev_delay.o 00:03:10.279 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:10.537 CC module/bdev/error/vbdev_error.o 00:03:10.537 CC module/bdev/gpt/gpt.o 00:03:10.537 CC module/blobfs/bdev/blobfs_bdev.o 00:03:10.537 CC module/bdev/malloc/bdev_malloc.o 00:03:10.537 CC module/bdev/lvol/vbdev_lvol.o 00:03:10.537 CC module/bdev/null/bdev_null.o 00:03:10.537 CC module/bdev/nvme/bdev_nvme.o 00:03:10.537 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:10.537 CC module/bdev/gpt/vbdev_gpt.o 
00:03:10.537 LIB libspdk_sock_posix.a 00:03:10.537 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:10.795 CC module/bdev/error/vbdev_error_rpc.o 00:03:10.795 CC module/bdev/null/bdev_null_rpc.o 00:03:10.795 LIB libspdk_blobfs_bdev.a 00:03:10.795 LIB libspdk_bdev_delay.a 00:03:10.795 CC module/bdev/passthru/vbdev_passthru.o 00:03:10.795 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:10.795 LIB libspdk_bdev_error.a 00:03:11.053 LIB libspdk_bdev_gpt.a 00:03:11.053 LIB libspdk_bdev_null.a 00:03:11.053 CC module/bdev/nvme/nvme_rpc.o 00:03:11.053 CC module/bdev/nvme/bdev_mdns_client.o 00:03:11.053 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:11.053 CC module/bdev/raid/bdev_raid.o 00:03:11.053 CC module/bdev/raid/bdev_raid_rpc.o 00:03:11.053 LIB libspdk_bdev_lvol.a 00:03:11.053 CC module/bdev/raid/bdev_raid_sb.o 00:03:11.053 CC module/bdev/split/vbdev_split.o 00:03:11.310 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:11.310 LIB libspdk_bdev_malloc.a 00:03:11.310 CC module/bdev/raid/raid0.o 00:03:11.310 CC module/bdev/raid/raid1.o 00:03:11.310 CC module/bdev/raid/concat.o 00:03:11.310 CC module/bdev/nvme/vbdev_opal.o 00:03:11.310 LIB libspdk_bdev_passthru.a 00:03:11.310 CC module/bdev/split/vbdev_split_rpc.o 00:03:11.310 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:11.568 CC module/bdev/raid/raid5f.o 00:03:11.568 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:11.568 LIB libspdk_bdev_split.a 00:03:11.825 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:11.825 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:11.825 CC module/bdev/aio/bdev_aio.o 00:03:11.825 CC module/bdev/aio/bdev_aio_rpc.o 00:03:11.825 CC module/bdev/ftl/bdev_ftl.o 00:03:11.825 CC module/bdev/iscsi/bdev_iscsi.o 00:03:11.825 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:11.825 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:11.825 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:12.083 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:12.083 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:12.083 LIB libspdk_bdev_zone_block.a 00:03:12.083 LIB libspdk_bdev_aio.a 00:03:12.083 LIB libspdk_bdev_raid.a 00:03:12.339 LIB libspdk_bdev_iscsi.a 00:03:12.339 LIB libspdk_bdev_ftl.a 00:03:12.339 LIB libspdk_bdev_virtio.a 00:03:13.274 LIB libspdk_bdev_nvme.a 00:03:13.533 CC module/event/subsystems/scheduler/scheduler.o 00:03:13.533 CC module/event/subsystems/vmd/vmd.o 00:03:13.533 CC module/event/subsystems/sock/sock.o 00:03:13.533 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:13.533 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:13.533 CC module/event/subsystems/iobuf/iobuf.o 00:03:13.533 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:13.533 CC module/event/subsystems/keyring/keyring.o 00:03:13.792 LIB libspdk_event_scheduler.a 00:03:13.792 LIB libspdk_event_vhost_blk.a 00:03:13.792 LIB libspdk_event_vmd.a 00:03:13.792 LIB libspdk_event_sock.a 00:03:13.792 LIB libspdk_event_keyring.a 00:03:13.792 LIB libspdk_event_iobuf.a 00:03:14.051 CC module/event/subsystems/accel/accel.o 00:03:14.309 LIB libspdk_event_accel.a 00:03:14.568 CC module/event/subsystems/bdev/bdev.o 00:03:14.859 LIB libspdk_event_bdev.a 00:03:15.129 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:15.129 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:15.129 CC module/event/subsystems/nbd/nbd.o 00:03:15.129 CC module/event/subsystems/scsi/scsi.o 00:03:15.129 LIB libspdk_event_nbd.a 00:03:15.129 LIB libspdk_event_scsi.a 00:03:15.387 LIB libspdk_event_nvmf.a 00:03:15.387 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:15.387 CC module/event/subsystems/iscsi/iscsi.o 
00:03:15.645 LIB libspdk_event_vhost_scsi.a 00:03:15.645 LIB libspdk_event_iscsi.a 00:03:15.904 CC test/rpc_client/rpc_client_test.o 00:03:15.904 TEST_HEADER include/spdk/accel.h 00:03:15.904 TEST_HEADER include/spdk/accel_module.h 00:03:16.162 CXX app/trace/trace.o 00:03:16.162 TEST_HEADER include/spdk/assert.h 00:03:16.162 TEST_HEADER include/spdk/barrier.h 00:03:16.162 TEST_HEADER include/spdk/base64.h 00:03:16.162 TEST_HEADER include/spdk/bdev.h 00:03:16.162 TEST_HEADER include/spdk/bdev_module.h 00:03:16.162 TEST_HEADER include/spdk/bdev_zone.h 00:03:16.162 TEST_HEADER include/spdk/bit_array.h 00:03:16.162 TEST_HEADER include/spdk/bit_pool.h 00:03:16.162 TEST_HEADER include/spdk/blob.h 00:03:16.162 TEST_HEADER include/spdk/blob_bdev.h 00:03:16.162 TEST_HEADER include/spdk/blobfs.h 00:03:16.162 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:16.162 TEST_HEADER include/spdk/conf.h 00:03:16.162 TEST_HEADER include/spdk/config.h 00:03:16.162 TEST_HEADER include/spdk/cpuset.h 00:03:16.162 TEST_HEADER include/spdk/crc16.h 00:03:16.162 TEST_HEADER include/spdk/crc32.h 00:03:16.162 TEST_HEADER include/spdk/crc64.h 00:03:16.162 TEST_HEADER include/spdk/dif.h 00:03:16.162 TEST_HEADER include/spdk/dma.h 00:03:16.162 TEST_HEADER include/spdk/endian.h 00:03:16.162 TEST_HEADER include/spdk/env.h 00:03:16.162 TEST_HEADER include/spdk/env_dpdk.h 00:03:16.162 TEST_HEADER include/spdk/event.h 00:03:16.162 TEST_HEADER include/spdk/fd.h 00:03:16.162 TEST_HEADER include/spdk/fd_group.h 00:03:16.162 TEST_HEADER include/spdk/file.h 00:03:16.162 TEST_HEADER include/spdk/ftl.h 00:03:16.162 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:16.162 CC test/thread/poller_perf/poller_perf.o 00:03:16.162 CC examples/ioat/perf/perf.o 00:03:16.162 CC examples/util/zipf/zipf.o 00:03:16.162 TEST_HEADER include/spdk/gpt_spec.h 00:03:16.162 TEST_HEADER include/spdk/hexlify.h 00:03:16.162 TEST_HEADER include/spdk/histogram_data.h 00:03:16.162 TEST_HEADER include/spdk/idxd.h 00:03:16.162 TEST_HEADER include/spdk/idxd_spec.h 00:03:16.162 TEST_HEADER include/spdk/init.h 00:03:16.162 TEST_HEADER include/spdk/ioat.h 00:03:16.162 TEST_HEADER include/spdk/ioat_spec.h 00:03:16.162 TEST_HEADER include/spdk/iscsi_spec.h 00:03:16.162 TEST_HEADER include/spdk/json.h 00:03:16.162 TEST_HEADER include/spdk/jsonrpc.h 00:03:16.162 TEST_HEADER include/spdk/keyring.h 00:03:16.162 TEST_HEADER include/spdk/keyring_module.h 00:03:16.162 TEST_HEADER include/spdk/likely.h 00:03:16.162 TEST_HEADER include/spdk/log.h 00:03:16.162 TEST_HEADER include/spdk/lvol.h 00:03:16.162 TEST_HEADER include/spdk/memory.h 00:03:16.162 TEST_HEADER include/spdk/mmio.h 00:03:16.162 TEST_HEADER include/spdk/nbd.h 00:03:16.162 TEST_HEADER include/spdk/net.h 00:03:16.162 CC test/dma/test_dma/test_dma.o 00:03:16.162 CC test/app/bdev_svc/bdev_svc.o 00:03:16.162 TEST_HEADER include/spdk/notify.h 00:03:16.162 TEST_HEADER include/spdk/nvme.h 00:03:16.162 TEST_HEADER include/spdk/nvme_intel.h 00:03:16.162 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:16.162 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:16.162 TEST_HEADER include/spdk/nvme_spec.h 00:03:16.162 TEST_HEADER include/spdk/nvme_zns.h 00:03:16.162 LINK rpc_client_test 00:03:16.162 TEST_HEADER include/spdk/nvmf.h 00:03:16.162 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:16.162 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:16.162 CC test/env/mem_callbacks/mem_callbacks.o 00:03:16.162 TEST_HEADER include/spdk/nvmf_spec.h 00:03:16.162 TEST_HEADER include/spdk/nvmf_transport.h 00:03:16.162 TEST_HEADER include/spdk/opal.h 
00:03:16.162 TEST_HEADER include/spdk/opal_spec.h 00:03:16.162 TEST_HEADER include/spdk/pci_ids.h 00:03:16.162 TEST_HEADER include/spdk/pipe.h 00:03:16.162 TEST_HEADER include/spdk/queue.h 00:03:16.162 TEST_HEADER include/spdk/reduce.h 00:03:16.162 TEST_HEADER include/spdk/rpc.h 00:03:16.162 TEST_HEADER include/spdk/scheduler.h 00:03:16.162 TEST_HEADER include/spdk/scsi.h 00:03:16.162 TEST_HEADER include/spdk/scsi_spec.h 00:03:16.162 TEST_HEADER include/spdk/sock.h 00:03:16.162 TEST_HEADER include/spdk/stdinc.h 00:03:16.162 TEST_HEADER include/spdk/string.h 00:03:16.162 TEST_HEADER include/spdk/thread.h 00:03:16.420 TEST_HEADER include/spdk/trace.h 00:03:16.420 TEST_HEADER include/spdk/trace_parser.h 00:03:16.420 TEST_HEADER include/spdk/tree.h 00:03:16.420 TEST_HEADER include/spdk/ublk.h 00:03:16.420 TEST_HEADER include/spdk/util.h 00:03:16.420 TEST_HEADER include/spdk/uuid.h 00:03:16.420 TEST_HEADER include/spdk/version.h 00:03:16.420 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:16.420 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:16.420 TEST_HEADER include/spdk/vhost.h 00:03:16.420 TEST_HEADER include/spdk/vmd.h 00:03:16.420 TEST_HEADER include/spdk/xor.h 00:03:16.420 TEST_HEADER include/spdk/zipf.h 00:03:16.420 CXX test/cpp_headers/accel.o 00:03:16.420 LINK poller_perf 00:03:16.421 LINK zipf 00:03:16.421 LINK interrupt_tgt 00:03:16.421 LINK spdk_trace 00:03:16.421 LINK bdev_svc 00:03:16.421 LINK ioat_perf 00:03:16.678 CXX test/cpp_headers/accel_module.o 00:03:16.678 LINK test_dma 00:03:16.678 CXX test/cpp_headers/assert.o 00:03:16.936 LINK mem_callbacks 00:03:16.936 CXX test/cpp_headers/barrier.o 00:03:16.936 CC app/trace_record/trace_record.o 00:03:17.194 CXX test/cpp_headers/base64.o 00:03:17.194 CC app/nvmf_tgt/nvmf_main.o 00:03:17.194 CC app/iscsi_tgt/iscsi_tgt.o 00:03:17.194 CXX test/cpp_headers/bdev.o 00:03:17.452 LINK spdk_trace_record 00:03:17.452 CC test/thread/lock/spdk_lock.o 00:03:17.452 LINK nvmf_tgt 00:03:17.452 CC examples/ioat/verify/verify.o 00:03:17.452 CC test/env/vtophys/vtophys.o 00:03:17.452 LINK iscsi_tgt 00:03:17.452 CXX test/cpp_headers/bdev_module.o 00:03:17.710 LINK vtophys 00:03:17.710 LINK verify 00:03:17.710 CXX test/cpp_headers/bdev_zone.o 00:03:17.969 CXX test/cpp_headers/bit_array.o 00:03:17.969 CXX test/cpp_headers/bit_pool.o 00:03:18.228 CXX test/cpp_headers/blob.o 00:03:18.228 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:03:18.228 CXX test/cpp_headers/blob_bdev.o 00:03:18.486 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:18.486 LINK histogram_ut 00:03:18.486 CXX test/cpp_headers/blobfs.o 00:03:18.486 CC examples/thread/thread/thread_ex.o 00:03:18.745 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:18.745 LINK env_dpdk_post_init 00:03:18.745 CXX test/cpp_headers/blobfs_bdev.o 00:03:18.745 LINK thread 00:03:18.745 CXX test/cpp_headers/conf.o 00:03:19.003 CC test/unit/lib/log/log.c/log_ut.o 00:03:19.003 CXX test/cpp_headers/config.o 00:03:19.003 CXX test/cpp_headers/cpuset.o 00:03:19.003 LINK nvme_fuzz 00:03:19.271 LINK spdk_lock 00:03:19.271 LINK log_ut 00:03:19.271 CXX test/cpp_headers/crc16.o 00:03:19.271 CXX test/cpp_headers/crc32.o 00:03:19.545 CXX test/cpp_headers/crc64.o 00:03:19.804 CC test/unit/lib/rdma/common.c/common_ut.o 00:03:19.804 CXX test/cpp_headers/dif.o 00:03:19.804 CXX test/cpp_headers/dma.o 00:03:19.804 CC test/app/histogram_perf/histogram_perf.o 00:03:20.063 CC test/env/memory/memory_ut.o 00:03:20.063 CC test/app/jsoncat/jsoncat.o 00:03:20.063 CXX test/cpp_headers/endian.o 00:03:20.063 LINK 
histogram_perf 00:03:20.063 LINK jsoncat 00:03:20.323 CC test/app/stub/stub.o 00:03:20.323 CXX test/cpp_headers/env.o 00:03:20.323 LINK common_ut 00:03:20.323 LINK stub 00:03:20.581 CXX test/cpp_headers/env_dpdk.o 00:03:20.581 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:20.840 CXX test/cpp_headers/event.o 00:03:20.840 CC test/unit/lib/util/base64.c/base64_ut.o 00:03:20.840 CXX test/cpp_headers/fd.o 00:03:20.840 CXX test/cpp_headers/fd_group.o 00:03:21.099 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:03:21.099 LINK base64_ut 00:03:21.099 LINK memory_ut 00:03:21.099 CXX test/cpp_headers/file.o 00:03:21.357 CC test/event/event_perf/event_perf.o 00:03:21.357 CXX test/cpp_headers/ftl.o 00:03:21.357 CC test/event/reactor/reactor.o 00:03:21.357 CC test/nvme/aer/aer.o 00:03:21.357 CC test/env/pci/pci_ut.o 00:03:21.357 LINK event_perf 00:03:21.357 LINK reactor 00:03:21.616 CXX test/cpp_headers/gpt_spec.o 00:03:21.616 LINK bit_array_ut 00:03:21.616 LINK aer 00:03:21.616 CXX test/cpp_headers/hexlify.o 00:03:21.616 CC app/spdk_tgt/spdk_tgt.o 00:03:21.875 LINK pci_ut 00:03:21.875 CXX test/cpp_headers/histogram_data.o 00:03:21.875 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:03:22.133 LINK spdk_tgt 00:03:22.133 CXX test/cpp_headers/idxd.o 00:03:22.133 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:03:22.133 CXX test/cpp_headers/idxd_spec.o 00:03:22.133 LINK cpuset_ut 00:03:22.133 LINK crc16_ut 00:03:22.391 CXX test/cpp_headers/init.o 00:03:22.391 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:03:22.391 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:03:22.391 CC test/event/reactor_perf/reactor_perf.o 00:03:22.391 CXX test/cpp_headers/ioat.o 00:03:22.650 CC examples/sock/hello_world/hello_sock.o 00:03:22.650 LINK iscsi_fuzz 00:03:22.650 LINK crc32_ieee_ut 00:03:22.650 LINK crc32c_ut 00:03:22.650 LINK reactor_perf 00:03:22.650 CC examples/vmd/lsvmd/lsvmd.o 00:03:22.650 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:03:22.650 CXX test/cpp_headers/ioat_spec.o 00:03:22.909 LINK lsvmd 00:03:22.909 LINK crc64_ut 00:03:22.909 LINK hello_sock 00:03:22.909 CXX test/cpp_headers/iscsi_spec.o 00:03:22.909 CC examples/idxd/perf/perf.o 00:03:22.909 CC examples/vmd/led/led.o 00:03:23.168 CXX test/cpp_headers/json.o 00:03:23.168 CC test/unit/lib/util/dif.c/dif_ut.o 00:03:23.168 LINK led 00:03:23.168 CC test/nvme/reset/reset.o 00:03:23.168 CXX test/cpp_headers/jsonrpc.o 00:03:23.427 LINK idxd_perf 00:03:23.427 CXX test/cpp_headers/keyring.o 00:03:23.427 LINK reset 00:03:23.427 CXX test/cpp_headers/keyring_module.o 00:03:23.686 CC test/event/app_repeat/app_repeat.o 00:03:23.686 CXX test/cpp_headers/likely.o 00:03:23.686 LINK app_repeat 00:03:23.945 CC test/unit/lib/util/file.c/file_ut.o 00:03:23.945 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:23.945 CXX test/cpp_headers/log.o 00:03:23.945 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:23.945 LINK file_ut 00:03:23.945 CC test/unit/lib/util/iov.c/iov_ut.o 00:03:23.945 CXX test/cpp_headers/lvol.o 00:03:24.204 CC test/unit/lib/util/math.c/math_ut.o 00:03:24.204 CXX test/cpp_headers/memory.o 00:03:24.204 LINK dif_ut 00:03:24.204 LINK iov_ut 00:03:24.463 CC test/event/scheduler/scheduler.o 00:03:24.463 LINK math_ut 00:03:24.463 CXX test/cpp_headers/mmio.o 00:03:24.463 LINK vhost_fuzz 00:03:24.762 LINK scheduler 00:03:24.762 CXX test/cpp_headers/nbd.o 00:03:24.762 CC examples/accel/perf/accel_perf.o 00:03:24.762 CXX test/cpp_headers/net.o 00:03:24.762 CC test/unit/lib/util/net.c/net_ut.o 00:03:24.762 CC examples/blob/hello_world/hello_blob.o 00:03:24.762 CC 
examples/nvme/hello_world/hello_world.o 00:03:24.762 CC test/nvme/sgl/sgl.o 00:03:25.043 CXX test/cpp_headers/notify.o 00:03:25.043 LINK hello_blob 00:03:25.043 LINK net_ut 00:03:25.043 CXX test/cpp_headers/nvme.o 00:03:25.043 LINK hello_world 00:03:25.043 LINK sgl 00:03:25.303 LINK accel_perf 00:03:25.303 CXX test/cpp_headers/nvme_intel.o 00:03:25.303 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:03:25.303 CC test/unit/lib/dma/dma.c/dma_ut.o 00:03:25.303 CXX test/cpp_headers/nvme_ocssd.o 00:03:25.562 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:25.562 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:26.129 CXX test/cpp_headers/nvme_spec.o 00:03:26.388 CC test/unit/lib/util/string.c/string_ut.o 00:03:26.388 CC app/spdk_lspci/spdk_lspci.o 00:03:26.388 LINK ioat_ut 00:03:26.388 LINK dma_ut 00:03:26.388 LINK pipe_ut 00:03:26.388 CXX test/cpp_headers/nvme_zns.o 00:03:26.388 CC examples/nvme/reconnect/reconnect.o 00:03:26.647 LINK spdk_lspci 00:03:26.647 CC test/nvme/e2edp/nvme_dp.o 00:03:26.647 CXX test/cpp_headers/nvmf.o 00:03:26.647 CXX test/cpp_headers/nvmf_cmd.o 00:03:26.647 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:26.647 CC test/unit/lib/util/xor.c/xor_ut.o 00:03:27.214 CXX test/cpp_headers/nvmf_spec.o 00:03:27.214 LINK reconnect 00:03:27.214 LINK nvme_dp 00:03:27.214 LINK string_ut 00:03:27.214 CC test/nvme/overhead/overhead.o 00:03:27.214 CC test/nvme/err_injection/err_injection.o 00:03:27.214 CXX test/cpp_headers/nvmf_transport.o 00:03:27.214 LINK xor_ut 00:03:27.214 CXX test/cpp_headers/opal.o 00:03:27.473 LINK err_injection 00:03:27.473 LINK overhead 00:03:27.473 CC test/nvme/startup/startup.o 00:03:27.473 CXX test/cpp_headers/opal_spec.o 00:03:27.731 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:27.731 CXX test/cpp_headers/pci_ids.o 00:03:27.731 LINK startup 00:03:27.989 CXX test/cpp_headers/pipe.o 00:03:27.989 CC app/spdk_nvme_perf/perf.o 00:03:27.989 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:03:27.989 CXX test/cpp_headers/queue.o 00:03:27.989 CXX test/cpp_headers/reduce.o 00:03:28.247 CC examples/blob/cli/blobcli.o 00:03:28.247 CXX test/cpp_headers/rpc.o 00:03:28.507 CXX test/cpp_headers/scheduler.o 00:03:28.507 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:28.507 LINK pci_event_ut 00:03:28.507 CXX test/cpp_headers/scsi.o 00:03:28.507 CC test/accel/dif/dif.o 00:03:28.766 CC test/nvme/reserve/reserve.o 00:03:28.766 CXX test/cpp_headers/scsi_spec.o 00:03:28.766 LINK blobcli 00:03:28.766 LINK spdk_nvme_perf 00:03:29.025 LINK reserve 00:03:29.025 CXX test/cpp_headers/sock.o 00:03:29.025 CC test/blobfs/mkfs/mkfs.o 00:03:29.025 LINK nvme_manage 00:03:29.025 CC test/lvol/esnap/esnap.o 00:03:29.025 LINK dif 00:03:29.025 CXX test/cpp_headers/stdinc.o 00:03:29.285 LINK mkfs 00:03:29.285 CXX test/cpp_headers/string.o 00:03:29.285 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:03:29.544 CXX test/cpp_headers/thread.o 00:03:29.544 CXX test/cpp_headers/trace.o 00:03:29.847 CXX test/cpp_headers/trace_parser.o 00:03:30.123 LINK idxd_user_ut 00:03:30.123 CXX test/cpp_headers/tree.o 00:03:30.123 CXX test/cpp_headers/ublk.o 00:03:30.123 LINK json_parse_ut 00:03:30.382 CXX test/cpp_headers/util.o 00:03:30.382 CC test/nvme/simple_copy/simple_copy.o 00:03:30.382 CC examples/nvme/arbitration/arbitration.o 00:03:30.382 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:03:30.382 CC app/spdk_nvme_identify/identify.o 00:03:30.382 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:03:30.382 CXX test/cpp_headers/uuid.o 00:03:30.641 LINK simple_copy 00:03:30.641 CXX test/cpp_headers/version.o 
00:03:30.641 LINK arbitration 00:03:30.641 CXX test/cpp_headers/vfio_user_pci.o 00:03:30.898 CXX test/cpp_headers/vfio_user_spec.o 00:03:30.898 LINK json_util_ut 00:03:30.898 CXX test/cpp_headers/vhost.o 00:03:31.156 CXX test/cpp_headers/vmd.o 00:03:31.414 LINK idxd_ut 00:03:31.414 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:03:31.414 LINK spdk_nvme_identify 00:03:31.414 CXX test/cpp_headers/xor.o 00:03:31.672 CXX test/cpp_headers/zipf.o 00:03:31.672 CC examples/nvme/hotplug/hotplug.o 00:03:31.672 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:31.929 LINK hotplug 00:03:31.929 LINK cmb_copy 00:03:32.187 CC examples/nvme/abort/abort.o 00:03:32.187 LINK json_write_ut 00:03:32.187 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:32.187 CC test/nvme/connect_stress/connect_stress.o 00:03:32.445 LINK pmr_persistence 00:03:32.445 LINK connect_stress 00:03:32.445 CC test/nvme/boot_partition/boot_partition.o 00:03:32.703 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:32.703 LINK abort 00:03:32.703 CC test/nvme/compliance/nvme_compliance.o 00:03:32.703 CC app/spdk_nvme_discover/discovery_aer.o 00:03:32.703 LINK boot_partition 00:03:32.962 LINK spdk_nvme_discover 00:03:33.221 LINK nvme_compliance 00:03:33.221 LINK jsonrpc_server_ut 00:03:33.479 CC app/spdk_top/spdk_top.o 00:03:33.479 CC test/nvme/fused_ordering/fused_ordering.o 00:03:33.741 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:03:33.741 LINK fused_ordering 00:03:34.017 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:34.017 CC test/nvme/fdp/fdp.o 00:03:34.290 LINK doorbell_aers 00:03:34.290 CC app/vhost/vhost.o 00:03:34.549 CC examples/bdev/hello_world/hello_bdev.o 00:03:34.549 CC examples/bdev/bdevperf/bdevperf.o 00:03:34.549 LINK vhost 00:03:34.549 LINK fdp 00:03:34.807 CC test/nvme/cuse/cuse.o 00:03:34.807 LINK spdk_top 00:03:34.807 LINK hello_bdev 00:03:35.081 LINK rpc_ut 00:03:35.339 LINK esnap 00:03:35.339 CC test/unit/lib/thread/thread.c/thread_ut.o 00:03:35.612 CC app/spdk_dd/spdk_dd.o 00:03:35.612 CC app/fio/nvme/fio_plugin.o 00:03:35.612 LINK bdevperf 00:03:35.871 CC app/fio/bdev/fio_plugin.o 00:03:35.871 LINK spdk_dd 00:03:36.130 LINK cuse 00:03:36.388 LINK spdk_nvme 00:03:36.388 LINK spdk_bdev 00:03:36.388 CC test/unit/lib/notify/notify.c/notify_ut.o 00:03:36.388 CC test/unit/lib/sock/sock.c/sock_ut.o 00:03:36.647 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:03:37.214 LINK notify_ut 00:03:37.214 LINK keyring_ut 00:03:37.472 CC test/unit/lib/sock/posix.c/posix_ut.o 00:03:37.472 CC test/bdev/bdevio/bdevio.o 00:03:37.731 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:03:37.989 LINK bdevio 00:03:38.248 LINK sock_ut 00:03:38.248 LINK thread_ut 00:03:38.818 LINK posix_ut 00:03:38.818 LINK iobuf_ut 00:03:39.385 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:03:39.385 CC test/unit/lib/accel/accel.c/accel_ut.o 00:03:39.385 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:03:39.385 CC examples/nvmf/nvmf/nvmf.o 00:03:39.385 CC test/unit/lib/blob/blob.c/blob_ut.o 00:03:39.385 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:03:39.385 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:03:39.385 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:03:39.643 LINK nvmf 00:03:39.643 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:03:39.643 LINK rpc_ut 00:03:40.274 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:03:40.274 LINK subsystem_ut 00:03:40.274 LINK blob_bdev_ut 00:03:40.532 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:03:40.532 CC 
test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:03:41.099 LINK nvme_ut 00:03:41.099 LINK nvme_ctrlr_cmd_ut 00:03:41.099 LINK nvme_ctrlr_ocssd_cmd_ut 00:03:41.357 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:03:41.357 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:03:41.357 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:03:41.616 LINK nvme_ns_ut 00:03:41.616 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:03:41.874 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:03:41.874 LINK accel_ut 00:03:42.133 LINK nvme_quirks_ut 00:03:42.392 LINK nvme_poll_group_ut 00:03:42.392 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:03:42.651 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:03:42.651 LINK nvme_ns_cmd_ut 00:03:42.651 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:03:42.651 LINK nvme_ctrlr_ut 00:03:42.909 LINK nvme_qpair_ut 00:03:42.909 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:03:42.909 LINK nvme_ns_ocssd_cmd_ut 00:03:42.909 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:03:43.167 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:03:43.168 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:03:43.426 CC test/unit/lib/event/app.c/app_ut.o 00:03:43.426 LINK nvme_pcie_ut 00:03:43.724 LINK nvme_transport_ut 00:03:43.724 LINK nvme_io_msg_ut 00:03:43.724 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:03:43.982 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:03:43.982 LINK nvme_fabric_ut 00:03:43.982 LINK nvme_opal_ut 00:03:44.240 LINK app_ut 00:03:44.498 LINK nvme_pcie_common_ut 00:03:45.063 LINK reactor_ut 00:03:45.321 LINK nvme_tcp_ut 00:03:45.321 LINK nvme_cuse_ut 00:03:45.579 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:45.579 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:45.579 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:45.579 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:45.579 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:45.579 LINK nvme_rdma_ut 00:03:45.579 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:45.837 LINK scsi_nvme_ut 00:03:45.837 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:45.837 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:45.837 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:03:46.096 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:03:46.096 LINK bdev_zone_ut 00:03:46.355 LINK gpt_ut 00:03:46.355 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:46.614 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:03:46.872 LINK vbdev_zone_block_ut 00:03:47.130 LINK vbdev_lvol_ut 00:03:47.130 LINK bdev_raid_sb_ut 00:03:47.388 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:03:47.388 LINK blob_ut 00:03:47.388 LINK concat_ut 00:03:47.647 CC test/unit/lib/bdev/raid/raid0.c/raid0_ut.o 00:03:47.647 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:03:47.905 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:03:47.905 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:48.165 LINK tree_ut 00:03:48.165 LINK raid1_ut 00:03:48.165 LINK bdev_raid_ut 00:03:48.422 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:03:48.422 LINK raid0_ut 00:03:48.422 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:48.680 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:48.938 LINK raid5f_ut 00:03:48.938 LINK blobfs_bdev_ut 00:03:49.505 LINK part_ut 00:03:49.764 LINK blobfs_sync_ut 00:03:49.764 LINK blobfs_async_ut 00:03:49.764 LINK bdev_ut 00:03:49.764 LINK lvol_ut 
00:03:51.140 LINK bdev_nvme_ut 00:03:51.140 LINK bdev_ut 00:03:51.709 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:03:51.709 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:03:51.709 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:03:51.709 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:51.709 CC test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut.o 00:03:51.709 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:03:51.709 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:03:51.709 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:03:51.709 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:03:51.709 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:03:51.967 LINK ftl_bitmap_ut 00:03:52.226 LINK ftl_l2p_ut 00:03:52.226 LINK dev_ut 00:03:52.485 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:03:52.485 LINK ftl_mempool_ut 00:03:52.743 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:03:52.743 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:03:52.743 LINK ftl_mngt_ut 00:03:52.743 LINK ftl_io_ut 00:03:52.743 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:03:53.002 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:03:53.261 LINK ftl_p2l_ut 00:03:53.520 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:03:53.779 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:03:53.779 LINK ftl_sb_ut 00:03:54.037 LINK lun_ut 00:03:54.037 LINK ftl_band_ut 00:03:54.037 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:54.296 CC test/unit/lib/nvmf/auth.c/auth_ut.o 00:03:54.594 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:54.594 LINK scsi_ut 00:03:54.594 LINK ftl_layout_upgrade_ut 00:03:54.860 LINK ctrlr_bdev_ut 00:03:54.860 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:03:54.860 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:03:55.118 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:03:55.377 LINK ctrlr_discovery_ut 00:03:55.377 LINK subsystem_ut 00:03:55.377 LINK scsi_pr_ut 00:03:55.377 LINK nvmf_ut 00:03:55.636 LINK scsi_bdev_ut 00:03:56.203 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:03:56.203 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:03:56.203 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:03:56.203 CC test/unit/lib/iscsi/param.c/param_ut.o 00:03:56.203 LINK ctrlr_ut 00:03:56.203 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:03:56.461 LINK auth_ut 00:03:56.461 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:03:56.719 LINK param_ut 00:03:56.719 LINK tcp_ut 00:03:56.719 LINK init_grp_ut 00:03:56.719 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:03:57.655 LINK conn_ut 00:03:57.655 LINK portal_grp_ut 00:03:58.222 LINK tgt_node_ut 00:03:58.481 LINK rdma_ut 00:03:58.481 LINK transport_ut 00:03:58.481 LINK vhost_ut 00:03:58.739 LINK iscsi_ut 00:03:58.997 00:03:58.997 real 2m15.238s 00:03:58.997 user 11m6.724s 00:03:58.997 sys 2m35.983s 00:03:58.997 00:30:21 unittest_build -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:58.997 ************************************ 00:03:58.997 END TEST unittest_build 00:03:58.997 ************************************ 00:03:58.997 00:30:21 unittest_build -- common/autotest_common.sh@10 -- $ set +x 00:03:59.256 00:30:21 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:59.256 00:30:21 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:59.256 00:30:21 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:59.256 00:30:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.256 00:30:21 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:59.256 00:30:21 -- pm/common@44 -- $ pid=2189 
00:03:59.256 00:30:21 -- pm/common@50 -- $ kill -TERM 2189 00:03:59.256 00:30:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.256 00:30:21 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:59.256 00:30:21 -- pm/common@44 -- $ pid=2191 00:03:59.256 00:30:21 -- pm/common@50 -- $ kill -TERM 2191 00:03:59.256 00:30:21 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:59.256 00:30:21 -- nvmf/common.sh@7 -- # uname -s 00:03:59.256 00:30:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:59.256 00:30:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:59.256 00:30:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:59.256 00:30:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:59.256 00:30:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:59.256 00:30:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:59.256 00:30:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:59.256 00:30:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:59.256 00:30:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:59.256 00:30:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:59.256 00:30:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:31d8b035-012e-4516-85b0-7d1485d07f76 00:03:59.256 00:30:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=31d8b035-012e-4516-85b0-7d1485d07f76 00:03:59.256 00:30:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:59.256 00:30:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:59.256 00:30:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:59.256 00:30:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:59.256 00:30:21 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:59.256 00:30:21 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:59.256 00:30:21 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:59.256 00:30:21 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:59.256 00:30:21 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:59.256 00:30:21 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:59.256 00:30:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:59.256 00:30:21 -- paths/export.sh@5 -- # export PATH 00:03:59.256 00:30:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:59.256 00:30:21 -- nvmf/common.sh@47 -- # : 0 00:03:59.256 00:30:21 -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:03:59.256 00:30:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:59.256 00:30:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:59.256 00:30:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:59.256 00:30:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:59.256 00:30:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:59.256 00:30:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:59.256 00:30:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:59.256 00:30:21 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:59.256 00:30:21 -- spdk/autotest.sh@32 -- # uname -s 00:03:59.256 00:30:21 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:59.256 00:30:21 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:03:59.256 00:30:21 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:59.256 00:30:21 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:59.256 00:30:21 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:59.256 00:30:21 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:59.256 00:30:21 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:59.256 00:30:21 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:03:59.256 00:30:21 -- spdk/autotest.sh@48 -- # udevadm_pid=100270 00:03:59.256 00:30:21 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:59.256 00:30:21 -- pm/common@17 -- # local monitor 00:03:59.256 00:30:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.256 00:30:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.256 00:30:21 -- pm/common@21 -- # date +%s 00:03:59.256 00:30:21 -- pm/common@25 -- # sleep 1 00:03:59.256 00:30:21 -- pm/common@21 -- # date +%s 00:03:59.256 00:30:21 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:03:59.256 00:30:21 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721867421 00:03:59.256 00:30:21 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721867421 00:03:59.256 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721867421_collect-vmstat.pm.log 00:03:59.256 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721867421_collect-cpu-load.pm.log 00:04:00.631 00:30:22 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:00.631 00:30:22 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:00.631 00:30:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:00.631 00:30:22 -- common/autotest_common.sh@10 -- # set +x 00:04:00.631 00:30:22 -- spdk/autotest.sh@59 -- # create_test_list 00:04:00.631 00:30:22 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:00.631 00:30:22 -- common/autotest_common.sh@10 -- # set +x 00:04:00.631 00:30:22 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:00.631 00:30:22 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:00.631 00:30:22 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:00.631 00:30:22 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:00.631 00:30:22 -- 
spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:00.631 00:30:22 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:00.631 00:30:22 -- common/autotest_common.sh@1453 -- # uname 00:04:00.631 00:30:22 -- common/autotest_common.sh@1453 -- # '[' Linux = FreeBSD ']' 00:04:00.631 00:30:22 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:00.631 00:30:22 -- common/autotest_common.sh@1473 -- # uname 00:04:00.631 00:30:22 -- common/autotest_common.sh@1473 -- # [[ Linux = FreeBSD ]] 00:04:00.631 00:30:22 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:00.631 00:30:22 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:00.631 00:30:22 -- spdk/autotest.sh@72 -- # hash lcov 00:04:00.631 00:30:22 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:00.631 00:30:22 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:00.631 --rc lcov_branch_coverage=1 00:04:00.631 --rc lcov_function_coverage=1 00:04:00.631 --rc genhtml_branch_coverage=1 00:04:00.631 --rc genhtml_function_coverage=1 00:04:00.631 --rc genhtml_legend=1 00:04:00.631 --rc geninfo_all_blocks=1 00:04:00.631 ' 00:04:00.631 00:30:22 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:00.631 --rc lcov_branch_coverage=1 00:04:00.631 --rc lcov_function_coverage=1 00:04:00.631 --rc genhtml_branch_coverage=1 00:04:00.631 --rc genhtml_function_coverage=1 00:04:00.631 --rc genhtml_legend=1 00:04:00.631 --rc geninfo_all_blocks=1 00:04:00.631 ' 00:04:00.631 00:30:22 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:00.631 --rc lcov_branch_coverage=1 00:04:00.631 --rc lcov_function_coverage=1 00:04:00.631 --rc genhtml_branch_coverage=1 00:04:00.631 --rc genhtml_function_coverage=1 00:04:00.631 --rc genhtml_legend=1 00:04:00.631 --rc geninfo_all_blocks=1 00:04:00.631 --no-external' 00:04:00.631 00:30:22 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:00.631 --rc lcov_branch_coverage=1 00:04:00.631 --rc lcov_function_coverage=1 00:04:00.631 --rc genhtml_branch_coverage=1 00:04:00.631 --rc genhtml_function_coverage=1 00:04:00.631 --rc genhtml_legend=1 00:04:00.631 --rc geninfo_all_blocks=1 00:04:00.631 --no-external' 00:04:00.631 00:30:22 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:00.631 lcov: LCOV version 1.15 00:04:00.631 00:30:23 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:05.898 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:05.898 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:52.730 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:52.730 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:52.730 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:52.730 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:52.731 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:52.731 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:52.731 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:52.731 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:52.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:52.731 00:31:13 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:52.731 00:31:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:52.731 00:31:13 -- common/autotest_common.sh@10 -- # set +x 00:04:52.731 00:31:13 -- spdk/autotest.sh@91 -- # rm -f 00:04:52.731 00:31:13 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:52.731 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:52.731 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:52.731 00:31:14 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:52.731 00:31:14 -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:04:52.731 00:31:14 -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:04:52.731 00:31:14 -- common/autotest_common.sh@1668 -- # local nvme bdf 00:04:52.731 00:31:14 -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:04:52.731 00:31:14 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:52.731 00:31:14 -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:04:52.731 00:31:14 -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:52.731 00:31:14 -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:04:52.731 00:31:14 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:52.732 00:31:14 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:52.732 00:31:14 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:52.732 00:31:14 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:52.732 00:31:14 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:52.732 00:31:14 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:52.732 No valid GPT data, bailing 00:04:52.732 00:31:14 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:52.732 00:31:14 -- scripts/common.sh@391 -- # pt= 00:04:52.732 00:31:14 -- scripts/common.sh@392 -- # return 1 00:04:52.732 00:31:14 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:52.732 1+0 records in 00:04:52.732 1+0 records out 00:04:52.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00522577 s, 201 MB/s 00:04:52.732 00:31:14 -- spdk/autotest.sh@118 -- # sync 00:04:52.732 00:31:14 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:52.732 00:31:14 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:52.732 00:31:14 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:53.666 00:31:16 -- spdk/autotest.sh@124 -- # uname -s 00:04:53.666 00:31:16 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:53.666 
00:31:16 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:53.666 00:31:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.666 00:31:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.666 00:31:16 -- common/autotest_common.sh@10 -- # set +x 00:04:53.666 ************************************ 00:04:53.666 START TEST setup.sh 00:04:53.666 ************************************ 00:04:53.666 00:31:16 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:53.666 * Looking for test storage... 00:04:53.666 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:53.666 00:31:16 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:53.666 00:31:16 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:53.666 00:31:16 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:53.666 00:31:16 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.666 00:31:16 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.667 00:31:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:53.667 ************************************ 00:04:53.667 START TEST acl 00:04:53.667 ************************************ 00:04:53.667 00:31:16 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:53.924 * Looking for test storage... 00:04:53.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:53.924 00:31:16 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:53.924 00:31:16 setup.sh.acl -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:04:53.924 00:31:16 setup.sh.acl -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:04:53.924 00:31:16 setup.sh.acl -- common/autotest_common.sh@1668 -- # local nvme bdf 00:04:53.924 00:31:16 setup.sh.acl -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:04:53.924 00:31:16 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:53.924 00:31:16 setup.sh.acl -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:04:53.924 00:31:16 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:53.924 00:31:16 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:04:53.924 00:31:16 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:53.924 00:31:16 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:53.924 00:31:16 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:53.924 00:31:16 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:53.924 00:31:16 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:53.924 00:31:16 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:53.924 00:31:16 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:54.491 00:31:16 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:54.491 00:31:16 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:54.491 00:31:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:54.491 00:31:16 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:54.491 00:31:16 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.491 00:31:16 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:54.749 00:31:17 
setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:55.007 00:31:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:55.007 00:31:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.007 Hugepages 00:04:55.007 node hugesize free / total 00:04:55.007 00:31:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:55.007 00:31:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:55.007 00:31:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.007 00:04:55.007 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:55.007 00:31:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:55.007 00:31:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:55.007 00:31:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.007 00:31:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:55.007 00:31:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:55.007 00:31:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:55.007 00:31:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.265 00:31:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:55.265 00:31:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:55.265 00:31:17 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:55.265 00:31:17 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:55.265 00:31:17 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:55.265 00:31:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.265 00:31:17 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:55.266 00:31:17 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:55.266 00:31:17 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.266 00:31:17 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.266 00:31:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:55.266 ************************************ 00:04:55.266 START TEST denied 00:04:55.266 ************************************ 00:04:55.266 00:31:17 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:55.266 00:31:17 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:55.266 00:31:17 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:55.266 00:31:17 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:55.266 00:31:17 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.266 00:31:17 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:57.201 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:57.201 00:31:19 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:57.201 00:31:19 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:57.201 00:31:19 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:57.201 00:31:19 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:57.201 00:31:19 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:57.201 00:31:19 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:57.201 00:31:19 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == 
\n\v\m\e ]] 00:04:57.201 00:31:19 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:57.201 00:31:19 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:57.201 00:31:19 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:57.768 00:04:57.768 real 0m2.481s 00:04:57.768 user 0m0.541s 00:04:57.768 sys 0m2.011s 00:04:57.768 00:31:20 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.768 ************************************ 00:04:57.768 END TEST denied 00:04:57.768 ************************************ 00:04:57.768 00:31:20 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:57.768 00:31:20 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:57.768 00:31:20 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.768 00:31:20 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.768 00:31:20 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:57.768 ************************************ 00:04:57.768 START TEST allowed 00:04:57.768 ************************************ 00:04:57.768 00:31:20 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:57.768 00:31:20 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:57.768 00:31:20 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:57.768 00:31:20 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.768 00:31:20 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:57.768 00:31:20 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:59.672 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:59.673 00:31:21 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:59.673 00:31:21 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:59.673 00:31:21 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:59.673 00:31:21 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:59.673 00:31:21 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:59.930 00:04:59.930 real 0m2.160s 00:04:59.930 user 0m0.480s 00:04:59.930 sys 0m1.682s 00:04:59.930 ************************************ 00:04:59.930 00:31:22 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.930 00:31:22 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:59.930 END TEST allowed 00:04:59.930 ************************************ 00:04:59.930 00:04:59.930 real 0m6.209s 00:04:59.930 user 0m1.714s 00:04:59.930 sys 0m4.648s 00:04:59.930 00:31:22 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.930 ************************************ 00:04:59.930 END TEST acl 00:04:59.930 00:31:22 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:59.930 ************************************ 00:04:59.930 00:31:22 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:59.930 00:31:22 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.930 00:31:22 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.930 00:31:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:59.930 ************************************ 00:04:59.930 START TEST hugepages 00:04:59.930 
************************************ 00:04:59.930 00:31:22 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:00.188 * Looking for test storage... 00:05:00.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 2620748 kB' 'MemAvailable: 7391788 kB' 'Buffers: 36064 kB' 'Cached: 4863896 kB' 'SwapCached: 0 kB' 'Active: 1035988 kB' 'Inactive: 3983512 kB' 'Active(anon): 1076 kB' 'Inactive(anon): 130388 kB' 'Active(file): 1034912 kB' 'Inactive(file): 3853124 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 316 kB' 'Writeback: 0 kB' 'AnonPages: 148980 kB' 'Mapped: 67808 kB' 'Shmem: 2596 kB' 'KReclaimable: 204308 kB' 'Slab: 270328 kB' 'SReclaimable: 204308 kB' 'SUnreclaim: 66020 kB' 'KernelStack: 4512 kB' 'PageTables: 3568 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4024336 kB' 'Committed_AS: 504296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19580 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 139116 kB' 'DirectMap2M: 4055040 kB' 'DirectMap1G: 10485760 kB' 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.188 00:31:22 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.188 00:31:22 
setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.188 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.189 00:31:22 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.189 00:31:22 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:00.189 00:31:22 setup.sh.hugepages -- 
setup/hugepages.sh@41 -- # echo 0 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:00.189 00:31:22 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:00.189 00:31:22 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.189 00:31:22 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.189 00:31:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:00.189 ************************************ 00:05:00.189 START TEST default_setup 00:05:00.189 ************************************ 00:05:00.189 00:31:22 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:05:00.189 00:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:00.189 00:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:00.189 00:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:00.189 00:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:00.189 00:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:00.189 00:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:00.189 00:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:00.189 00:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:00.189 00:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:00.189 00:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:00.189 00:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:00.189 00:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:00.189 00:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:00.189 00:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:00.189 00:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:00.189 00:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:00.189 00:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:00.189 00:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:00.189 00:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:00.189 00:31:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:00.189 00:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.189 00:31:22 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:00.754 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:00.754 0000:00:10.0 (1b36 0010): nvme -> 
uio_pci_generic 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4702488 kB' 'MemAvailable: 9473720 kB' 'Buffers: 36064 kB' 'Cached: 4863804 kB' 'SwapCached: 0 kB' 'Active: 1036020 kB' 'Inactive: 3999004 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 145768 kB' 'Active(file): 1034972 kB' 'Inactive(file): 3853236 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 476 kB' 'Writeback: 0 kB' 'AnonPages: 164336 kB' 'Mapped: 67708 kB' 'Shmem: 2596 kB' 'KReclaimable: 204328 kB' 'Slab: 270196 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 65868 kB' 'KernelStack: 4352 kB' 'PageTables: 3428 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 519024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19580 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 139116 kB' 'DirectMap2M: 4055040 kB' 'DirectMap1G: 10485760 kB' 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.325 
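
The printf entry above is the raw /proc/meminfo snapshot that get_meminfo captured, and the long run of [[ ... ]] / continue entries that follows is the same helper scanning that snapshot field by field until it reaches the key it was asked for (AnonHugePages in this call, Hugepagesize in the earlier one). A minimal sketch of that lookup, assuming a plain read from /proc/meminfo rather than the mapfile/printf indirection the real setup/common.sh uses, and with an illustrative function name:

get_meminfo_value() {
    local key=$1 var val _
    # Split each "Key:   value kB" line on ':' and spaces, skip fields until
    # the requested key is found, then print its numeric value.
    while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

# e.g. get_meminfo_value Hugepagesize   -> 2048 (kB), as seen above
#      get_meminfo_value AnonHugePages  -> 0    (kB)
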
00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.325 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.326 
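
Stepping back from the field-by-field scan for a moment: the hugepage pool that verify_nr_hugepages is inspecting here was configured earlier in the trace, where clear_hp wrote 0 into every per-node nr_hugepages file, CLEAR_HUGE=yes was exported, get_test_nr_hugepages turned the 2097152 kB request into 1024 pages of the default 2048 kB size, and scripts/setup.sh performed the actual reservation. A rough equivalent of that configuration, assuming a single NUMA node (node0) and the standard kernel sysfs knobs rather than the exact variables setup/hugepages.sh uses:

size_kb=2097152                     # total hugepage memory requested (2 GiB in kB)
page_kb=2048                        # Hugepagesize reported by /proc/meminfo
nr_pages=$(( size_kb / page_kb ))   # -> 1024 pages

hp_dir=/sys/devices/system/node/node0/hugepages/hugepages-${page_kb}kB
echo 0           > "$hp_dir/nr_hugepages"   # clear any leftover pages (clear_hp)
echo "$nr_pages" > "$hp_dir/nr_hugepages"   # reserve the new pool

This matches the 'HugePages_Total: 1024' and 'Hugetlb: 2097152 kB' figures in the snapshot above.
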
00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4702488 kB' 'MemAvailable: 9473720 kB' 'Buffers: 36064 kB' 'Cached: 4863804 kB' 'SwapCached: 0 kB' 'Active: 1036020 kB' 'Inactive: 3999264 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 146028 kB' 'Active(file): 1034972 kB' 'Inactive(file): 3853236 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 476 kB' 'Writeback: 0 kB' 'AnonPages: 164596 kB' 'Mapped: 67708 kB' 'Shmem: 2596 kB' 'KReclaimable: 204328 kB' 'Slab: 270196 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 65868 kB' 'KernelStack: 4352 kB' 'PageTables: 3428 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 519024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19580 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 139116 kB' 'DirectMap2M: 4055040 kB' 'DirectMap1G: 10485760 kB' 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.326 00:31:23 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.326 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.327 00:31:23 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.327 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4702744 kB' 'MemAvailable: 9473976 kB' 'Buffers: 36064 kB' 'Cached: 4863804 kB' 'SwapCached: 0 kB' 'Active: 1036020 kB' 'Inactive: 3998888 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 145652 kB' 'Active(file): 1034972 kB' 'Inactive(file): 3853236 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 484 kB' 'Writeback: 0 kB' 'AnonPages: 164260 kB' 'Mapped: 67708 kB' 'Shmem: 2596 kB' 'KReclaimable: 204328 kB' 'Slab: 270196 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 65868 kB' 'KernelStack: 4352 kB' 'PageTables: 3444 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 519024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19580 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 139116 kB' 'DirectMap2M: 4055040 kB' 'DirectMap1G: 10485760 kB' 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.328 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.329 
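
The scan above is the HugePages_Rsvd lookup; together with the HugePages_Total lookup a little further down, it feeds the summary the test echoes (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and the check against the 1024 pages requested. A rough sketch of that final accounting, reusing the get_meminfo_value helper sketched earlier and simplifying away the per-node handling that the real verify_nr_hugepages also performs:

expected=1024                                # pages requested by default_setup
anon=$(get_meminfo_value AnonHugePages)      # transparent hugepages in use, kB -> 0 here
surp=$(get_meminfo_value HugePages_Surp)     # surplus pages                    -> 0 here
resv=$(get_meminfo_value HugePages_Rsvd)     # reserved pages                   -> 0 here
total=$(get_meminfo_value HugePages_Total)   # pool size                        -> 1024 here

echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
(( total == expected )) || echo "hugepage pool does not match the requested size" >&2
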
00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.329 
00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.329 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
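The long xtrace runs in this section are setup/common.sh's get_meminfo helper reading /proc/meminfo (or a node's meminfo file) one field at a time until the requested key, here HugePages_Rsvd, is reached and its value echoed. A minimal sketch of that pattern, with illustrative names rather than the SPDK helper itself:

    # Hypothetical sketch of the lookup the trace shows, not the real helper.
    get_meminfo_sketch() {
        local key=$1 node=$2          # key to find, optional NUMA node number
        local file=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            file=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while read -r line; do
            line=${line#Node [0-9] }          # per-node files prefix lines with "Node N " (single-digit nodes)
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$key" ]]; then
                echo "$val"                   # e.g. 0 for HugePages_Rsvd in this run
                return 0
            fi
        done <"$file"
        return 1
    }

Called as get_meminfo_sketch HugePages_Rsvd, or as get_meminfo_sketch HugePages_Surp 0 for a single node, this produces the same values the trace below returns via "echo 0" / "return 0".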
00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:01.330 nr_hugepages=1024 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:01.330 resv_hugepages=0 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:01.330 surplus_hugepages=0 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:01.330 anon_hugepages=0 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4702492 kB' 'MemAvailable: 9473724 kB' 'Buffers: 36064 kB' 'Cached: 4863804 kB' 'SwapCached: 0 kB' 'Active: 1036012 kB' 'Inactive: 3998724 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 145488 kB' 'Active(file): 1034972 kB' 'Inactive(file): 3853236 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 484 kB' 'Writeback: 0 kB' 'AnonPages: 164100 kB' 'Mapped: 67700 kB' 'Shmem: 2596 kB' 'KReclaimable: 204328 kB' 'Slab: 270196 kB' 
'SReclaimable: 204328 kB' 'SUnreclaim: 65868 kB' 'KernelStack: 4384 kB' 'PageTables: 3508 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 519024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19580 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 139116 kB' 'DirectMap2M: 4055040 kB' 'DirectMap1G: 10485760 kB' 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.330 
00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.330 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.331 00:31:23 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.331 00:31:23 
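Once get_meminfo returns, hugepages.sh folds the values into the accounting echoed above (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and then re-reads HugePages_Total, which is what the loop continuing below is doing. The shape of that consistency check, as a hedged sketch built on the helper above; the exact operands live in test/setup/hugepages.sh:

    # Hypothetical wrapper; mirrors the (( total == nr_hugepages + surp + resv )) style check in the trace.
    check_hugepage_accounting() {
        local expected=$1                     # configured nr_hugepages, 1024 here
        local total surp resv
        total=$(get_meminfo_sketch HugePages_Total)
        surp=$(get_meminfo_sketch HugePages_Surp)
        resv=$(get_meminfo_sketch HugePages_Rsvd)
        echo "nr_hugepages=$expected resv_hugepages=$resv surplus_hugepages=$surp"
        (( total == expected + surp + resv ))  # a non-zero exit here would fail the test
    }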
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.331 00:31:23 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:01.331 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4702492 kB' 'MemUsed: 7540484 kB' 'SwapCached: 0 kB' 'Active: 1036012 kB' 'Inactive: 3998724 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 145488 kB' 'Active(file): 1034972 kB' 'Inactive(file): 3853236 kB' 'Unevictable: 29168 kB' 'Mlocked: 
27632 kB' 'Dirty: 484 kB' 'Writeback: 0 kB' 'FilePages: 4899868 kB' 'Mapped: 67700 kB' 'AnonPages: 164100 kB' 'Shmem: 2596 kB' 'KernelStack: 4452 kB' 'PageTables: 3768 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 204328 kB' 'Slab: 270196 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 65868 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.332 00:31:23 
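The snapshot above was read from /sys/devices/system/node/node0/meminfo rather than /proc/meminfo: hugepages.sh's get_nodes walks each node directory under sysfs and repeats the lookup per node before comparing against the expected count. A hypothetical version of that per-node sweep, reusing the sketch helper:

    # Hypothetical per-node sweep mirroring the get_nodes / get_meminfo ... 0 calls traced here.
    declare -A node_hugepages
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        [[ -d $node_dir ]] || continue        # skip the literal glob if no NUMA nodes are exposed
        node=${node_dir##*node}
        node_hugepages[$node]=$(get_meminfo_sketch HugePages_Total "$node")
    done
    echo "node0=${node_hugepages[0]:-0} expecting 1024"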
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val 
_ 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.332 00:31:23 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.332 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.333 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.333 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.333 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.333 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.333 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.333 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.333 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.333 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.333 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.333 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.333 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.333 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.333 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.333 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.333 00:31:23 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.333 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.333 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.333 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.333 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.333 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:01.333 00:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:01.333 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:01.333 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:01.333 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:01.333 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:01.333 node0=1024 expecting 1024 00:05:01.333 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:01.333 00:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:01.333 00:05:01.333 real 0m1.213s 00:05:01.333 user 0m0.329s 00:05:01.333 sys 0m0.895s 00:05:01.333 00:31:23 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.333 00:31:23 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:01.333 ************************************ 00:05:01.333 END TEST default_setup 00:05:01.333 ************************************ 00:05:01.333 00:31:23 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:01.333 00:31:23 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.333 00:31:23 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.333 00:31:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:01.333 ************************************ 00:05:01.333 START TEST per_node_1G_alloc 00:05:01.333 ************************************ 00:05:01.333 00:31:23 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:05:01.333 00:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:01.333 00:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:01.333 00:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:01.333 00:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:01.333 00:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:01.333 00:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:01.333 00:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:01.333 00:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:01.333 00:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:01.333 00:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # 
get_test_nr_hugepages_per_node 0 00:05:01.333 00:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:01.333 00:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:01.333 00:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:01.333 00:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:01.333 00:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:01.333 00:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:01.333 00:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:01.333 00:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:01.333 00:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:01.333 00:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:01.333 00:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:01.333 00:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:01.333 00:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:01.333 00:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.333 00:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:01.900 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:01.900 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:02.163 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5750444 kB' 'MemAvailable: 10521676 kB' 'Buffers: 36072 kB' 'Cached: 4863796 kB' 'SwapCached: 0 kB' 'Active: 1036036 kB' 'Inactive: 3999164 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 145944 kB' 'Active(file): 1034988 kB' 'Inactive(file): 3853220 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 492 kB' 'Writeback: 0 kB' 'AnonPages: 164912 kB' 'Mapped: 68016 kB' 'Shmem: 2596 kB' 'KReclaimable: 204328 kB' 'Slab: 270460 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 66132 kB' 'KernelStack: 4588 kB' 'PageTables: 3524 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 519024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19596 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 139116 kB' 'DirectMap2M: 4055040 kB' 'DirectMap1G: 10485760 kB' 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.164 00:31:24 
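The real/user/sys timing, the "END TEST default_setup" banner and the "START TEST per_node_1G_alloc" banner further up come from the run_test wrapper in autotest_common.sh. The wrapper itself is not shown in this log; a rough, hypothetical reconstruction of the banner-plus-timing pattern it appears to follow:

    # Hypothetical reconstruction of the run_test pattern, not the autotest_common.sh code.
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                              # the wrapped test function, e.g. per_node_1G_alloc
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }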
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.164 00:31:24 
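per_node_1G_alloc asked get_test_nr_hugepages for 1048576 kB pinned to node 0; against the 2048 kB Hugepagesize visible in the meminfo dumps, that works out to the NRHUGE=512 HUGENODE=0 handed to scripts/setup.sh above. The arithmetic, with illustrative variable names:

    size_kb=1048576                            # 1 GiB requested for the test
    hugepage_kb=2048                           # Hugepagesize reported in /proc/meminfo
    echo "NRHUGE=$(( size_kb / hugepage_kb )) HUGENODE=0"   # prints NRHUGE=512 HUGENODE=0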
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.164 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.165 00:31:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5750872 kB' 'MemAvailable: 10522104 kB' 'Buffers: 36072 kB' 'Cached: 4863804 kB' 'SwapCached: 0 kB' 'Active: 1036040 kB' 'Inactive: 3998904 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 145684 kB' 'Active(file): 1034988 kB' 'Inactive(file): 3853220 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 540 kB' 'Writeback: 0 kB' 'AnonPages: 164324 kB' 'Mapped: 67756 kB' 'Shmem: 2596 kB' 'KReclaimable: 204328 kB' 'Slab: 270236 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 65908 kB' 'KernelStack: 4460 kB' 'PageTables: 3536 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 519024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19596 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 139116 kB' 'DirectMap2M: 4055040 kB' 'DirectMap1G: 10485760 kB' 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
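(The per-key scans in this trace all come from the same get_meminfo helper in setup/common.sh. The sketch below is reconstructed only from the xtrace entries visible here; the surrounding loop construct and the extglob setting are assumptions, not the verbatim SPDK source:)

shopt -s extglob                            # assumed: needed for the +([0-9]) pattern below
get_meminfo() {                             # usage: get_meminfo <field> [<numa node>]
    local get=$1 node=$2                    # common.sh@17-@18 in the trace
    local var val mem_f mem                 # @19-@20
    mem_f=/proc/meminfo                     # @22
    # @23-@25: switch to the per-node file when a node number is given
    [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"               # @28
    mem=("${mem[@]#Node +([0-9]) }")        # @29: strip the "Node N " prefix of per-node files
    local IFS=': ' line                     # @31
    for line in "${mem[@]}"; do
        read -r var val _ <<< "$line"       # the @31 read / @32 compare-and-continue pairs above
        [[ $var == "$get" ]] || continue
        echo "$val"                         # @33: e.g. 0 for AnonHugePages, 512 for HugePages_Total
        return 0
    done
    return 1
}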
00:05:02.165 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31,@32 -- # (per-key scan of the snapshot above: MemTotal through HugePages_Total are each compared against \H\u\g\e\P\a\g\e\s\_\S\u\r\p, no match, continue)
00:05:02.167 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.167 
00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.167 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.167 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.167 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.167 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.167 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.167 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.167 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.167 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.167 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.167 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:02.167 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:02.167 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:02.167 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:02.167 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:02.167 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:02.167 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.167 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.167 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.167 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.167 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.167 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.167 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.167 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.167 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5750900 kB' 'MemAvailable: 10522132 kB' 'Buffers: 36072 kB' 'Cached: 4863804 kB' 'SwapCached: 0 kB' 'Active: 1036032 kB' 'Inactive: 3998664 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 145444 kB' 'Active(file): 1034988 kB' 'Inactive(file): 3853220 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 540 kB' 'Writeback: 0 kB' 'AnonPages: 164040 kB' 'Mapped: 67744 kB' 'Shmem: 2596 kB' 'KReclaimable: 204328 kB' 'Slab: 270260 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 65932 kB' 'KernelStack: 4416 kB' 'PageTables: 3572 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 519024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19596 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 139116 kB' 'DirectMap2M: 4055040 kB' 'DirectMap1G: 10485760 kB'
00:05:02.167 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31,@32 -- # (per-key scan of this snapshot: MemTotal through FileHugePages are each compared against \H\u\g\e\P\a\g\e\s\_\R\s\v\d, no match, continue)
00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.169 00:31:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:02.169 nr_hugepages=512 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:02.169 resv_hugepages=0 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:02.169 surplus_hugepages=0 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:02.169 anon_hugepages=0 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5750900 kB' 'MemAvailable: 10522132 kB' 'Buffers: 36072 kB' 'Cached: 4863804 kB' 'SwapCached: 0 kB' 'Active: 1036032 kB' 'Inactive: 3998876 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 145656 kB' 'Active(file): 1034988 kB' 'Inactive(file): 3853220 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 540 kB' 'Writeback: 0 kB' 'AnonPages: 164252 kB' 'Mapped: 67744 kB' 'Shmem: 2596 kB' 'KReclaimable: 204328 kB' 'Slab: 270260 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 65932 kB' 'KernelStack: 4384 kB' 'PageTables: 3484 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 519024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19596 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 139116 kB' 'DirectMap2M: 4055040 kB' 'DirectMap1G: 10485760 kB' 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
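(Taken together, the hugepages.sh entries in this stretch -- @97, @99, @100, @102-@105, @107, @109, @110 -- are a small accounting check on the allocation: the expected 512 pages are compared against what get_meminfo reports, with no surplus and nothing reserved, which the snapshots above confirm: HugePages_Total: 512, HugePages_Free: 512, Hugepagesize: 2048 kB, hence Hugetlb: 1048576 kB = 512 x 2048 kB. A minimal sketch of that check, assuming the get_meminfo helper sketched earlier; the "total" variable name is illustrative, and how the @110 value is consumed lies beyond this excerpt:)

nr_hugepages=512                            # per the "echo nr_hugepages=512" at hugepages.sh@102
anon=$(get_meminfo AnonHugePages)           # @97  -> 0 (AnonHugePages is 0 kB in the snapshots)
surp=$(get_meminfo HugePages_Surp)          # @99  -> 0
resv=$(get_meminfo HugePages_Rsvd)          # @100 -> 0
echo "nr_hugepages=$nr_hugepages"           # @102
echo "resv_hugepages=$resv"                 # @103
echo "surplus_hugepages=$surp"              # @104
echo "anon_hugepages=$anon"                 # @105
(( 512 == nr_hugepages + surp + resv ))     # @107: expected count vs. allocated + surplus + reserved
(( 512 == nr_hugepages ))                   # @109
total=$(get_meminfo HugePages_Total)        # @110 -> 512 in the snapshots above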
00:05:02.169 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 [repetitive xtrace elided: the IFS=': ' / read -r var val _ / [[ $var == HugePages_Total ]] / continue cycle repeats for every other field of the node 0 meminfo dump above, SwapCached through FilePmdMapped, with no match]
00:05:02.170 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512
00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local
get=HugePages_Surp 00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5750648 kB' 'MemUsed: 6492328 kB' 'SwapCached: 0 kB' 'Active: 1036028 kB' 'Inactive: 3998608 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 145388 kB' 'Active(file): 1034988 kB' 'Inactive(file): 3853220 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 540 kB' 'Writeback: 0 kB' 'FilePages: 4899876 kB' 'Mapped: 67704 kB' 'AnonPages: 164000 kB' 'Shmem: 2596 kB' 'KernelStack: 4340 kB' 'PageTables: 3456 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 204328 kB' 'Slab: 270332 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 66004 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': '
00:05:02.171 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 [repetitive xtrace elided: the read -r var val _ / [[ $var == HugePages_Surp ]] / continue / IFS=': ' cycle repeats for every node 0 meminfo field from Active through HugePages_Free, with no match]
00:05:02.172 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:02.172 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:02.172 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:02.172 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:02.172 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:02.172 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:02.172 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:02.172 node0=512 expecting 512
00:05:02.172 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:02.172 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:02.172
00:05:02.172 real 0m0.803s
00:05:02.172 user 0m0.319s
00:05:02.172 sys 0m0.527s
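The trace of per_node_1G_alloc above is largely the expansion of the get_meminfo helper from setup/common.sh: it resolves either /proc/meminfo or the per-node /sys/devices/system/node/nodeN/meminfo, strips the "Node N" prefix, and scans "field: value" pairs until the requested field is found. The following is a minimal standalone Bash sketch of that lookup; names and control flow are simplified (the real helper streams the array into its read loop rather than using a for loop), so treat it as an illustration, not the script's exact code.

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern used below

# Return the value of a meminfo field, optionally for one NUMA node.
get_meminfo() {
	local get=$1 node=${2:-}
	local var val _ line
	local mem_f=/proc/meminfo
	local mem

	# Per-node statistics live in sysfs when a node number is given.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")   # drop the leading "Node 0 " of per-node files

	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		if [[ $var == "$get" ]]; then
			echo "$val"
			return 0
		fi
	done
	return 1
}

# Example matching the trace above: surplus huge pages on node 0 (prints 0 here).
get_meminfo HugePages_Surp 0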
00:31:24 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.172 00:31:24 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:02.172 ************************************ 00:05:02.172 END TEST per_node_1G_alloc 00:05:02.172 ************************************ 00:05:02.172 00:31:24 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:02.172 00:31:24 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.172 00:31:24 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.431 00:31:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:02.431 ************************************ 00:05:02.431 START TEST even_2G_alloc 00:05:02.431 ************************************ 00:05:02.431 00:31:24 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:05:02.431 00:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:02.431 00:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:02.431 00:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:02.431 00:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:02.431 00:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:02.431 00:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:02.431 00:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:02.431 00:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:02.431 00:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:02.431 00:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:02.431 00:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:02.431 00:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:02.431 00:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:02.431 00:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:02.431 00:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:02.431 00:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:02.431 00:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:02.431 00:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:02.431 00:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:02.431 00:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:02.431 00:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:02.431 00:31:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:02.431 00:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.431 00:31:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:02.689 0000:00:03.0 (1af4 1001): Active devices: 
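In the setup traced above, get_test_nr_hugepages turns the 2097152 kB (2 GiB) request into nr_hugepages=1024, and get_test_nr_hugepages_per_node, with no user-supplied node list and a single node, assigns all of them to node 0 before NRHUGE=1024 and HUGE_EVEN_ALLOC=yes are handed to scripts/setup.sh. The Bash sketch below reproduces that arithmetic with the numbers from the trace; the variable names are illustrative and it assumes the request is expressed in kB, matching the 2048 kB Hugepagesize in the meminfo dumps, so it is not the exact helper from setup/hugepages.sh.

#!/usr/bin/env bash
# 2 GiB requested as kB, divided by the 2048 kB hugepage size, spread evenly
# over the available NUMA nodes (one node on this VM).
size_kb=2097152          # request passed to get_test_nr_hugepages
hugepagesize_kb=2048     # Hugepagesize reported in /proc/meminfo above
nr_hugepages=$(( size_kb / hugepagesize_kb ))   # -> 1024

no_nodes=1
declare -a nodes_test
for (( node = 0; node < no_nodes; node++ )); do
	nodes_test[node]=$(( nr_hugepages / no_nodes ))   # even split -> node0 gets 1024
done
echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}"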
00:05:02.689 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:02.689 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:03.260 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:03.260 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:03.260 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:03.260 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:03.260 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:03.260 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:03.260 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:03.260 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:03.260 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:03.260 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:03.260 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:03.260 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:03.260 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:03.260 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.260 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.260 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.260 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.261 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.261 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.261 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:03.261 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4705032 kB' 'MemAvailable: 9476272 kB' 'Buffers: 36072 kB' 'Cached: 4863804 kB' 'SwapCached: 0 kB' 'Active: 1036036 kB' 'Inactive: 3998868 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 145648 kB' 'Active(file): 1034996 kB' 'Inactive(file): 3853220 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 344 kB' 'Writeback: 0 kB' 'AnonPages: 164268 kB' 'Mapped: 67824 kB' 'Shmem: 2596 kB' 'KReclaimable: 204328 kB' 'Slab: 270300 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 65972 kB' 'KernelStack: 4340 kB' 'PageTables: 3392 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 519024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19580 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 139116 kB' 'DirectMap2M: 4055040 kB' 'DirectMap1G: 10485760 kB'
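The meminfo dump above is what verify_nr_hugepages inspects: it first checks that transparent hugepages are not set to "never" before reading AnonHugePages, then reads the HugePages_* counters and, as the hugepages.sh@110 assertion earlier in this log shows, expects HugePages_Total to equal the requested pool plus surplus and reserved pages. The Bash sketch below illustrates that check under those assumptions; meminfo() here is an illustrative stand-in, not the script's own get_meminfo, and the exact bookkeeping in setup/hugepages.sh is more involved.

#!/usr/bin/env bash
# Confirm that the kernel allocated the requested hugepage pool.
meminfo() { awk -v f="$1:" '$1 == f {print $2; exit}' /proc/meminfo; }

expected=1024   # NRHUGE requested by even_2G_alloc

anon=0
thp=/sys/kernel/mm/transparent_hugepage/enabled
# Same idea as the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above:
# only record AnonHugePages when THP is not disabled outright.
if [[ -r $thp && $(<"$thp") != *'[never]'* ]]; then
	anon=$(meminfo AnonHugePages)
fi

total=$(meminfo HugePages_Total)
surp=$(meminfo HugePages_Surp)
resv=$(meminfo HugePages_Rsvd)

# Mirrors the (( HugePages_Total == nr_hugepages + surp + resv )) assertion
# seen in the per_node_1G_alloc trace earlier.
if (( total == expected + surp + resv )); then
	echo "hugepages OK: total=$total surp=$surp resv=$resv anon=${anon} kB"
else
	echo "hugepages mismatch: total=$total expected=$expected" >&2
fi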
00:05:03.261 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 [repetitive xtrace elided: the read -r var val _ / [[ $var == AnonHugePages ]] / continue / IFS=': ' cycle repeats for every /proc/meminfo field from MemTotal through VmallocChunk, with no match]
00:05:03.261 00:31:25
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.261 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.261 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.261 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.261 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.261 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.261 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.261 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.261 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.261 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.261 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.261 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:03.261 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:03.261 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:03.261 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.261 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:03.261 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:03.261 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.261 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.261 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.261 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.261 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.261 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4705296 kB' 'MemAvailable: 9476536 kB' 'Buffers: 36072 kB' 'Cached: 4863804 kB' 'SwapCached: 0 kB' 'Active: 1036040 kB' 'Inactive: 3998776 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 145556 kB' 'Active(file): 1034996 kB' 'Inactive(file): 3853220 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 344 kB' 'Writeback: 0 kB' 'AnonPages: 164152 kB' 'Mapped: 67824 kB' 'Shmem: 2596 kB' 'KReclaimable: 204328 kB' 'Slab: 270300 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 65972 kB' 'KernelStack: 4308 kB' 'PageTables: 3304 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 519024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19580 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 139116 kB' 'DirectMap2M: 4055040 kB' 'DirectMap1G: 10485760 kB'
00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 [repetitive xtrace elided: the read -r var val _ / [[ $var == HugePages_Surp ]] / continue / IFS=': ' cycle repeats for the /proc/meminfo fields from MemTotal through SUnreclaim, with no match]
00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.262 
00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.262 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
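The records above come from the get_meminfo helper in setup/common.sh: it captures the meminfo contents into an array, prints them, and then re-reads them one "key: value" record at a time until the requested field is found, echoing that field's value. The following is a minimal standalone sketch of that lookup pattern, assuming plain /proc/meminfo input; the function name is illustrative and this is not the SPDK helper itself.

#!/usr/bin/env bash
# Minimal sketch of the per-key lookup traced above: read the meminfo file
# one "key: value" record at a time and stop at the requested key.
get_meminfo_field() {
    local want=$1 file=${2:-/proc/meminfo}
    local key val _
    while IFS=': ' read -r key val _; do
        # Skip every record whose key does not match, just as the long run
        # of "continue" records in the trace shows.
        [[ $key == "$want" ]] || continue
        echo "$val"
        return 0
    done < "$file"
    return 1
}

get_meminfo_field HugePages_Surp   # prints e.g. 0 on this runner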
00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.263 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4705536 kB' 'MemAvailable: 9476776 kB' 'Buffers: 36072 kB' 'Cached: 4863804 kB' 'SwapCached: 0 kB' 'Active: 1036040 kB' 'Inactive: 3998656 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 145436 kB' 'Active(file): 1034996 kB' 'Inactive(file): 3853220 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 344 kB' 'Writeback: 0 kB' 'AnonPages: 164032 kB' 'Mapped: 67824 kB' 'Shmem: 2596 kB' 'KReclaimable: 204328 kB' 'Slab: 270300 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 65972 kB' 'KernelStack: 4260 kB' 'PageTables: 3180 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 519024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19580 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 
kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 139116 kB' 'DirectMap2M: 4055040 kB' 'DirectMap1G: 10485760 kB'
[setup/common.sh@31-32 scans these records in the same key-by-key fashion, issuing "continue" for every field from MemTotal through FilePmdMapped because none of them match HugePages_Rsvd; the last checks of the scan (HugePages_Total, HugePages_Free) and the HugePages_Rsvd match follow after the sketch below.]
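These lookups feed a consistency check in setup/hugepages.sh: with surp already read as 0 and the HugePages_Rsvd lookup in progress here, the script verifies at hugepages.sh@107/@109 that the configured nr_hugepages=1024 matches the counters reported by the kernel. A hedged sketch of that kind of check follows; it assumes the left-hand 1024 in the trace is a free-page count read a step earlier, and all variable names are illustrative rather than the script's own.

#!/usr/bin/env bash
# Sketch of the accounting check: with surp and resv taken from /proc/meminfo,
# the previously read page count (assumed here to be HugePages_Free) must
# equal nr_hugepages + surp + resv.
nr_hugepages=1024   # 1024 x 2048 kB pages, i.e. the even 2 GiB allocation under test
free=$(awk '/^HugePages_Free:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
if (( free == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent (free=$free surp=$surp resv=$resv)"
else
    echo "unexpected hugepage counters: free=$free surp=$surp resv=$resv" >&2
    exit 1
fi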
00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.264 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.264 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.264 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.264 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.264 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.264 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.264 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.264 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.264 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.264 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.264 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:03.264 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:03.264 nr_hugepages=1024 00:05:03.264 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:03.264 resv_hugepages=0 00:05:03.264 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:03.264 surplus_hugepages=0 00:05:03.264 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:03.264 anon_hugepages=0 00:05:03.264 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:03.264 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:03.264 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:03.264 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:03.264 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:03.264 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:03.264 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:03.264 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.264 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.264 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.264 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.264 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.264 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.264 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.264 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.264 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4705752 kB' 'MemAvailable: 9476992 kB' 'Buffers: 36072 
kB' 'Cached: 4863804 kB' 'SwapCached: 0 kB' 'Active: 1036040 kB' 'Inactive: 3998840 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 145620 kB' 'Active(file): 1034996 kB' 'Inactive(file): 3853220 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 344 kB' 'Writeback: 0 kB' 'AnonPages: 164232 kB' 'Mapped: 67784 kB' 'Shmem: 2596 kB' 'KReclaimable: 204328 kB' 'Slab: 270212 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 65884 kB' 'KernelStack: 4292 kB' 'PageTables: 3412 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 519024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19596 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 139116 kB' 'DirectMap2M: 4055040 kB' 'DirectMap1G: 10485760 kB'
[setup/common.sh@31-32 scans these records key by key, issuing "continue" for every field from MemTotal through ShmemPmdMapped because none of them match HugePages_Total; the remaining checks and the HugePages_Total match (common.sh@33 echo 1024 / return 0) follow:]
00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages ==
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4706272 kB' 'MemUsed: 7536704 kB' 'SwapCached: 0 kB' 'Active: 1036040 kB' 'Inactive: 
3998840 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 145620 kB' 'Active(file): 1034996 kB' 'Inactive(file): 3853220 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 344 kB' 'Writeback: 0 kB' 'FilePages: 4899876 kB' 'Mapped: 67784 kB' 'AnonPages: 164492 kB' 'Shmem: 2596 kB' 'KernelStack: 4360 kB' 'PageTables: 3412 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 204328 kB' 'Slab: 270212 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 65884 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.265 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.265 
00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.266 00:31:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:03.266 node0=1024 expecting 1024 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:03.266 00:05:03.266 real 0m0.961s 00:05:03.266 user 0m0.274s 00:05:03.266 sys 0m0.732s 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.266 00:31:25 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:03.266 ************************************ 00:05:03.266 END TEST even_2G_alloc 00:05:03.266 ************************************ 00:05:03.266 00:31:25 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:03.266 00:31:25 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.266 00:31:25 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.266 00:31:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:03.266 ************************************ 00:05:03.266 START TEST odd_alloc 00:05:03.266 ************************************ 00:05:03.266 00:31:25 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:05:03.266 00:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:03.266 00:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:03.266 00:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:03.266 00:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:03.266 00:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:03.266 00:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:03.266 00:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:03.266 00:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:03.266 00:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:03.266 
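The even_2G_alloc trace above is the get_meminfo helper scanning every field of /proc/meminfo (or a per-node sysfs meminfo file) until it hits the requested key, then echoing just the value (1024 for HugePages_Total, 0 for HugePages_Surp on node 0). Below is a minimal sketch of that pattern, assuming a simplified re-implementation for illustration rather than the verbatim setup/common.sh source:

  #!/usr/bin/env bash
  # Minimal sketch of the get_meminfo pattern traced above; a simplified
  # re-implementation for illustration, not the verbatim setup/common.sh.
  shopt -s extglob

  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # Per-node stats live in sysfs and carry a "Node <n> " prefix per line.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")
      local line var val _
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done
      return 1
  }

  get_meminfo HugePages_Total      # prints 1024 while even_2G_alloc is active
  get_meminfo HugePages_Surp 0     # surplus pages on NUMA node 0, here 0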
00:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:03.266 00:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:03.266 00:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:03.266 00:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:03.266 00:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:03.266 00:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:03.266 00:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:03.266 00:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:03.266 00:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:03.266 00:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:03.266 00:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:03.266 00:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:03.266 00:31:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:03.266 00:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.266 00:31:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:03.833 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:03.833 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:04.094 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:04.094 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:04.094 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:04.094 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:04.094 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:04.094 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:04.094 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:04.094 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:04.094 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:04.094 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:04.094 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:04.094 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:04.094 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.094 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.094 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.094 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.094 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.094 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.094 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.095 
00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4706048 kB' 'MemAvailable: 9477288 kB' 'Buffers: 36072 kB' 'Cached: 4863804 kB' 'SwapCached: 0 kB' 'Active: 1036064 kB' 'Inactive: 3995292 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 142084 kB' 'Active(file): 1035008 kB' 'Inactive(file): 3853208 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 344 kB' 'Writeback: 4 kB' 'AnonPages: 160784 kB' 'Mapped: 67404 kB' 'Shmem: 2596 kB' 'KReclaimable: 204328 kB' 'Slab: 270404 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 66076 kB' 'KernelStack: 4372 kB' 'PageTables: 3472 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 507904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19500 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 139116 kB' 'DirectMap2M: 4055040 kB' 'DirectMap1G: 10485760 kB' 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.095 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.096 00:31:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4706404 kB' 'MemAvailable: 9477644 kB' 'Buffers: 36072 kB' 'Cached: 4863804 kB' 'SwapCached: 0 kB' 
'Active: 1036060 kB' 'Inactive: 3995064 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 141856 kB' 'Active(file): 1035008 kB' 'Inactive(file): 3853208 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 344 kB' 'Writeback: 0 kB' 'AnonPages: 160504 kB' 'Mapped: 67380 kB' 'Shmem: 2596 kB' 'KReclaimable: 204328 kB' 'Slab: 270276 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 65948 kB' 'KernelStack: 4252 kB' 'PageTables: 3040 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 507904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19500 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 139116 kB' 'DirectMap2M: 4055040 kB' 'DirectMap1G: 10485760 kB' 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.096 00:31:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.096 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:04.097 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4706404 kB' 'MemAvailable: 9477648 kB' 'Buffers: 36072 kB' 'Cached: 4863808 kB' 'SwapCached: 0 kB' 'Active: 1036060 kB' 'Inactive: 3995352 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 142140 kB' 'Active(file): 1035008 kB' 'Inactive(file): 3853212 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 348 kB' 'Writeback: 0 kB' 'AnonPages: 160804 kB' 'Mapped: 67380 kB' 'Shmem: 2596 kB' 'KReclaimable: 204328 kB' 'Slab: 270276 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 65948 kB' 'KernelStack: 4264 kB' 'PageTables: 3260 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 507904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19500 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 139116 kB' 'DirectMap2M: 4055040 kB' 'DirectMap1G: 10485760 kB' 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.098 00:31:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.098 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.099 
00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.099 00:31:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:04.099 nr_hugepages=1025 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:04.099 resv_hugepages=0 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:04.099 surplus_hugepages=0 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:04.099 anon_hugepages=0 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.099 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4706404 kB' 'MemAvailable: 9477648 kB' 'Buffers: 36072 kB' 'Cached: 4863808 kB' 'SwapCached: 0 kB' 'Active: 1036052 kB' 'Inactive: 3994996 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 141784 kB' 'Active(file): 1035008 kB' 'Inactive(file): 3853212 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 348 kB' 'Writeback: 0 kB' 'AnonPages: 160408 kB' 'Mapped: 67232 kB' 'Shmem: 2596 kB' 'KReclaimable: 204328 kB' 'Slab: 270364 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 66036 kB' 'KernelStack: 4324 kB' 'PageTables: 3484 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 507904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19532 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 
2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 139116 kB' 'DirectMap2M: 4055040 kB' 'DirectMap1G: 10485760 kB' 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.359 00:31:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
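The xtrace block above is setup/common.sh's get_meminfo walking the meminfo fields one at a time: it sets IFS=': ', reads each line into var/val, and issues a 'continue' for every key that is not the one requested (here HugePages_Rsvd / HugePages_Surp), finally echoing the matching value. A stand-alone sketch of that scan, reconstructed from the trace rather than copied from the SPDK source (the helper name is hypothetical):

  # get_meminfo_sketch KEY [NODE] - print the value of KEY from /proc/meminfo,
  # or from the per-node meminfo file when a NUMA node number is given.
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local var val _
      # per-node files prefix every line with "Node N "; strip it, the same way
      # the traced script does with its "${mem[@]#Node +([0-9]) }" expansion
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "${val:-0}"
              return 0
          fi
          continue   # one of these per non-matching field is what fills the trace above
      done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
      echo 0
  }

For example, get_meminfo_sketch HugePages_Surp prints 0 on the run logged here, matching the trace's final 'echo 0' / 'return 0'.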
00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.360 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4706152 kB' 'MemUsed: 7536824 kB' 'SwapCached: 0 kB' 'Active: 1036052 kB' 'Inactive: 3995204 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 141992 kB' 'Active(file): 1035008 kB' 'Inactive(file): 3853212 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 348 kB' 'Writeback: 0 kB' 'FilePages: 4899880 kB' 'Mapped: 67232 kB' 'AnonPages: 160616 kB' 'Shmem: 2596 kB' 'KernelStack: 4376 kB' 'PageTables: 3440 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 204328 kB' 'Slab: 270364 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 66036 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 00:31:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:04.362 node0=1025 expecting 1025 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:04.362 00:05:04.362 real 0m0.935s 00:05:04.362 user 0m0.290s 00:05:04.362 sys 0m0.690s 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.362 ************************************ 00:05:04.362 00:31:26 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:04.362 END TEST odd_alloc 00:05:04.362 ************************************ 00:05:04.362 00:31:26 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:04.362 00:31:26 
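At this point the odd_alloc case has passed: surplus and reserved pages are both 0, the kernel reports HugePages_Total of 1025, and node 0 carries all 1025 pages, so the 'node0=1025 expecting 1025' line and the final [[ 1025 == 1025 ]] check succeed. Reduced to its arithmetic, the verification that just ran looks roughly like this (a simplified sketch reusing the get_meminfo_sketch helper above, not the verbatim verify_nr_hugepages from setup/hugepages.sh):

  nr_hugepages=1025                                   # what the test configured
  surp=$(get_meminfo_sketch HugePages_Surp)           # 0 in this run
  resv=$(get_meminfo_sketch HugePages_Rsvd)           # 0 in this run
  total=$(get_meminfo_sketch HugePages_Total)         # 1025 in this run
  # global check: configured pages plus surplus plus reserved must equal the total
  (( total == nr_hugepages + surp + resv )) || { echo FAIL; exit 1; }
  # per-node check: each NUMA node must hold the share assigned to it
  # (a single node here, so it is expected to hold everything)
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      n=${node_dir##*node}
      node_total=$(get_meminfo_sketch HugePages_Total "$n")
      echo "node$n=$node_total expecting $nr_hugepages"
      [[ $node_total == "$nr_hugepages" ]] || { echo FAIL; exit 1; }
  done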
setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.362 00:31:26 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.362 00:31:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:04.362 ************************************ 00:05:04.362 START TEST custom_alloc 00:05:04.362 ************************************ 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:04.362 00:31:26 
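The custom_alloc case starting here requests 1048576 kB (1 GiB) of hugepage memory; with the 2048 kB default hugepage size reported as Hugepagesize above, that becomes the nr_hugepages=512 and HUGENODE='nodes_hp[0]=512' values visible in the trace. A sketch of that size-to-pages conversion under those assumptions (simplified, not the verbatim get_test_nr_hugepages):

  size_kb=1048576                                    # requested allocation, in kB (1 GiB)
  default_hugepages=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)   # 2048 on this VM
  (( size_kb >= default_hugepages )) || { echo "request smaller than one hugepage"; exit 1; }
  nr_hugepages=$(( size_kb / default_hugepages ))    # 1048576 / 2048 = 512
  # single NUMA node in this VM, so the whole allocation is pinned to node 0
  declare -a nodes_hp=( [0]=$nr_hugepages )
  HUGENODE="nodes_hp[0]=${nodes_hp[0]}"
  echo "nr_hugepages=$nr_hugepages HUGENODE=$HUGENODE"   # -> nr_hugepages=512 HUGENODE=nodes_hp[0]=512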
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.362 00:31:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:04.620 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:04.620 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5759040 kB' 'MemAvailable: 10530288 kB' 'Buffers: 36072 kB' 'Cached: 4863812 kB' 'SwapCached: 0 kB' 'Active: 1036064 kB' 'Inactive: 3995236 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 142032 kB' 'Active(file): 1035020 kB' 'Inactive(file): 3853204 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 160880 kB' 'Mapped: 67320 kB' 'Shmem: 2596 kB' 'KReclaimable: 204328 kB' 'Slab: 270124 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 65796 kB' 'KernelStack: 4380 kB' 'PageTables: 3924 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 507904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19516 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 139116 kB' 'DirectMap2M: 4055040 kB' 'DirectMap1G: 10485760 kB' 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.882 00:31:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.882 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 00:31:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 
00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.884 
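The long key-by-key scan above (repeated next for HugePages_Surp and HugePages_Rsvd) is setup/common.sh's get_meminfo helper: snapshot /proc/meminfo, or a node's own meminfo file when a node argument is given, drop the "Node N " prefix those per-node files carry, split each line on ': ', and echo the value of the requested key. A compressed sketch of that pattern, written as an illustration rather than the exact helper:

# illustrative sketch of the get_meminfo pattern seen in this trace
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # with a node argument, read that node's view instead (the path only exists on NUMA systems)
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")   # per-node lines are prefixed with "Node N "
    return 1
}

Called as get_meminfo AnonHugePages it returns 0 on this runner, which is the anon=0 stored by hugepages.sh@97 above.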
00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5759560 kB' 'MemAvailable: 10530808 kB' 'Buffers: 36072 kB' 'Cached: 4863812 kB' 'SwapCached: 0 kB' 'Active: 1036064 kB' 'Inactive: 3994940 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 141736 kB' 'Active(file): 1035020 kB' 'Inactive(file): 3853204 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 356 kB' 'Writeback: 0 kB' 'AnonPages: 160440 kB' 'Mapped: 67280 kB' 'Shmem: 2596 kB' 'KReclaimable: 204328 kB' 'Slab: 270236 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 65908 kB' 'KernelStack: 4392 kB' 'PageTables: 3928 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 507904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19516 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 139116 kB' 'DirectMap2M: 4055040 kB' 'DirectMap1G: 10485760 kB' 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 00:31:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.885 00:31:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5759796 kB' 'MemAvailable: 10531044 kB' 'Buffers: 36072 kB' 'Cached: 4863812 kB' 'SwapCached: 0 kB' 'Active: 1036064 kB' 'Inactive: 3994928 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 141724 kB' 'Active(file): 1035020 kB' 'Inactive(file): 3853204 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 356 kB' 'Writeback: 0 kB' 'AnonPages: 160428 kB' 'Mapped: 67280 kB' 'Shmem: 2596 kB' 'KReclaimable: 204328 kB' 'Slab: 270236 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 65908 kB' 'KernelStack: 4376 kB' 'PageTables: 3884 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 507904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19516 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 139116 kB' 'DirectMap2M: 4055040 kB' 'DirectMap1G: 10485760 kB' 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.885 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.886 00:31:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.886 00:31:27 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.887 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.148 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.148 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.148 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:05.148 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:05.148 nr_hugepages=512 00:05:05.148 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:05.148 resv_hugepages=0 00:05:05.148 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:05.148 surplus_hugepages=0 00:05:05.148 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:05.148 anon_hugepages=0 00:05:05.148 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:05.148 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:05.148 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:05.148 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:05.148 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # 
local get=HugePages_Total 00:05:05.148 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:05.148 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:05.148 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.148 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.148 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.148 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.148 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.148 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.148 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.148 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5759796 kB' 'MemAvailable: 10531040 kB' 'Buffers: 36072 kB' 'Cached: 4863808 kB' 'SwapCached: 0 kB' 'Active: 1036064 kB' 'Inactive: 3994868 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 141668 kB' 'Active(file): 1035020 kB' 'Inactive(file): 3853200 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 356 kB' 'Writeback: 0 kB' 'AnonPages: 160628 kB' 'Mapped: 67280 kB' 'Shmem: 2596 kB' 'KReclaimable: 204328 kB' 'Slab: 270244 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 65916 kB' 'KernelStack: 4292 kB' 'PageTables: 3728 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 508660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19500 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 139116 kB' 'DirectMap2M: 4055040 kB' 'DirectMap1G: 10485760 kB' 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.149 00:31:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.149 00:31:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.149 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.150 
00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.150 00:31:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 
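
What this long scan is doing: setup/common.sh's get_meminfo loads the whole of /proc/meminfo (or a single node's meminfo when a node id is passed, as in the get_meminfo HugePages_Surp 0 call starting here) into an array, walks it with IFS=': ' read -r var val _, continues past every field until the requested key matches, and echoes that field's value. A minimal sketch of the helper, reconstructed from the trace — the loop body and the mem/mem_f handling follow the trace, but this is an approximation, not the verbatim SPDK source:

shopt -s extglob                       # needed for the +([0-9]) pattern below
get_meminfo() {
    local get=$1 node=${2:-}
    local var val
    local mem_f=/proc/meminfo mem
    # with a node id, read the per-node view instead, e.g. /sys/devices/system/node/node0/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node <id> " prefix; strip it
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every skipped field is one "continue" entry in the trace
        echo "$val"                        # bare value, e.g. 512 or 5759796
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

In this run the helper is invoked as get_meminfo HugePages_Rsvd (returning 0), get_meminfo HugePages_Total (returning 512) and get_meminfo HugePages_Surp 0 (returning 0 for node0), which matches the echo 0, echo 512 and return 0 entries scattered through the trace.
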
00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5759544 kB' 'MemUsed: 6483432 kB' 'SwapCached: 0 kB' 'Active: 1036064 kB' 'Inactive: 3994988 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 141788 kB' 'Active(file): 1035020 kB' 'Inactive(file): 3853200 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 356 kB' 'Writeback: 0 kB' 'FilePages: 4899880 kB' 'Mapped: 67280 kB' 'AnonPages: 160560 kB' 'Shmem: 2596 kB' 'KernelStack: 4376 kB' 'PageTables: 3520 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 204328 kB' 'Slab: 270244 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 65916 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:05.150 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.151 00:31:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:05.152 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:05.152 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:05.152 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:05.152 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:05.152 node0=512 expecting 512 00:05:05.152 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:05.152 00:31:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:05.152 00:05:05.152 real 0m0.742s 00:05:05.152 user 0m0.272s 00:05:05.152 sys 0m0.517s 00:05:05.152 00:31:27 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.152 00:31:27 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:05.152 ************************************ 00:05:05.152 END TEST custom_alloc 00:05:05.152 ************************************ 00:05:05.152 00:31:27 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:05.152 00:31:27 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.152 00:31:27 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.152 00:31:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:05.152 ************************************ 00:05:05.152 START TEST no_shrink_alloc 00:05:05.152 ************************************ 00:05:05.152 00:31:27 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:05:05.152 00:31:27 
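
Stripped of the tracing, the custom_alloc verification that just completed is a small arithmetic check: hugepages.sh re-reads HugePages_Total, HugePages_Rsvd and HugePages_Surp and requires HugePages_Total == nr_hugepages + surp + resv (512 == 512 + 0 + 0 in this run), then checks each NUMA node under /sys/devices/system/node against the count the test expected for it, which is what produces the node0=512 expecting 512 line and the timing summary above. A condensed sketch of that check, reusing the get_meminfo sketch above (simplified; the real hugepages.sh tracks the expected and observed counts in nodes_test/nodes_sys arrays):

nr_hugepages=512                                   # count configured by this test
resv=$(get_meminfo HugePages_Rsvd)                 # 0 in this run
surp=$(get_meminfo HugePages_Surp)                 # 0 in this run
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    expected=$(( nr_hugepages + resv + $(get_meminfo HugePages_Surp "$node") ))
    actual=$(get_meminfo HugePages_Total "$node")
    echo "node$node=$actual expecting $expected"
    [[ $actual == "$expected" ]] || exit 1         # traced above as [[ 512 == \5\1\2 ]]
done
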
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:05.152 00:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:05.152 00:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:05.152 00:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:05.152 00:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:05.152 00:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:05.152 00:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:05.152 00:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:05.152 00:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:05.152 00:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:05.152 00:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:05.152 00:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:05.152 00:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:05.152 00:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:05.152 00:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:05.152 00:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:05.152 00:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:05.152 00:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:05.152 00:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:05.152 00:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:05.152 00:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.152 00:31:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:05.410 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:05.410 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@17 -- # local get=AnonHugePages 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4708816 kB' 'MemAvailable: 9480060 kB' 'Buffers: 36072 kB' 'Cached: 4863808 kB' 'SwapCached: 0 kB' 'Active: 1036068 kB' 'Inactive: 3994892 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 141692 kB' 'Active(file): 1035020 kB' 'Inactive(file): 3853200 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 356 kB' 'Writeback: 0 kB' 'AnonPages: 160608 kB' 'Mapped: 67136 kB' 'Shmem: 2596 kB' 'KReclaimable: 204328 kB' 'Slab: 270396 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 66068 kB' 'KernelStack: 4292 kB' 'PageTables: 3112 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 508032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19516 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 139116 kB' 'DirectMap2M: 4055040 kB' 'DirectMap1G: 10485760 kB' 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.983 00:31:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.983 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.984 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.984 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.984 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.984 00:31:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... remaining /proc/meminfo fields (Shmem through HardwareCorrupted) fail the AnonHugePages match and take the "continue" branch ...]
00:05:05.984 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:05.984 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:05.984 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:05.984 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:05.984 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:05.984 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:05.984 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:05.984 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:05.984 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:05.984 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.984 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:05.984 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:05.984 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.984 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.984 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4708816 kB' 'MemAvailable: 9480060 kB' 'Buffers: 36072 kB' 'Cached: 4863808 kB' 'SwapCached: 0 kB' 'Active: 1036068 kB' 'Inactive: 3995292 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 142092 kB' 'Active(file): 1035020 kB' 'Inactive(file): 3853200 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 356 kB' 'Writeback: 0 kB' 'AnonPages: 160756 kB' 'Mapped: 67136 kB' 'Shmem: 2596 kB' 'KReclaimable: 204328 kB' 'Slab: 270396 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 66068 kB' 'KernelStack: 4340 kB' 'PageTables: 3244 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 508032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19516 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 139116 kB' 'DirectMap2M: 4055040 kB' 'DirectMap1G: 10485760 kB'
[... field-by-field scan of that snapshot; every key from MemTotal onward fails the HugePages_Surp match and takes the "continue" branch until the HugePages_Surp line itself is reached ...]
00:05:05.986 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:05.986 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:05.986 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:05.986 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
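The trace above is bash xtrace output from the get_meminfo helper in setup/common.sh: it reads /proc/meminfo (or a per-node meminfo file when a node is given, stripping the "Node <n> " prefix), then scans the snapshot field by field until the requested key matches and echoes its value. A minimal sketch of that lookup, limited to the system-wide case shown here; the per-node handling and the mapfile snapshot step are simplified away:

    #!/usr/bin/env bash
    # Sketch of the lookup stepped through in the trace above (system-wide case only).
    get_meminfo() {
        local get=$1 var val _
        # Split each "Key:   value [kB]" line on ':' and spaces, as the traced loop does.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"        # e.g. 0 for HugePages_Surp, 1024 for HugePages_Total
                return 0
            fi
        done </proc/meminfo
        return 1
    }

    # Usage mirroring setup/hugepages.sh in the trace:
    #   surp=$(get_meminfo HugePages_Surp)    # 0 on this runner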
00:05:05.986 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:05.986 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:05.986 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:05.986 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.986 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.986 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.986 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4709336 kB' 'MemAvailable: 9480580 kB' 'Buffers: 36072 kB' 'Cached: 4863808 kB' 'SwapCached: 0 kB' 'Active: 1036068 kB' 'Inactive: 3995292 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 142092 kB' 'Active(file): 1035020 kB' 'Inactive(file): 3853200 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 356 kB' 'Writeback: 0 kB' 'AnonPages: 160496 kB' 'Mapped: 67136 kB' 'Shmem: 2596 kB' 'KReclaimable: 204328 kB' 'Slab: 270396 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 66068 kB' 'KernelStack: 4340 kB' 'PageTables: 3244 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 510384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19500 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 139116 kB' 'DirectMap2M: 4055040 kB' 'DirectMap1G: 10485760 kB'
[... field-by-field scan of that snapshot; each key fails the HugePages_Rsvd match and takes the "continue" branch ...]
00:05:05.988 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:05.988 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:05.988 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:05.988 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:05.988 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:05.988 nr_hugepages=1024
00:05:05.988 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:05.988 resv_hugepages=0
00:05:05.988 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:05.988 surplus_hugepages=0
00:05:05.988 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:05.988 anon_hugepages=0
00:05:05.988 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:05.988 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:05.988 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:05.988 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:05.988 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:05.988 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.988 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.988 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.988 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4709336 kB' 'MemAvailable: 9480580 kB' 'Buffers: 36072 kB' 'Cached: 4863808 kB' 'SwapCached: 0 kB' 'Active: 1036068 kB' 'Inactive: 3995292 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 142092 kB' 'Active(file): 1035020 kB' 'Inactive(file): 3853200 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 356 kB' 'Writeback: 0 kB' 'AnonPages: 160756 kB' 'Mapped: 67136 kB' 'Shmem: 2596 kB' 'KReclaimable: 204328 kB' 'Slab: 270396 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 66068 kB' 'KernelStack: 4408 kB' 'PageTables: 3504 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 508032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19532 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 139116 kB' 'DirectMap2M: 4055040 kB' 'DirectMap1G: 10485760 kB'
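With anon, surp and resv all reported as 0, the script echoes the pool configuration (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and, at setup/hugepages.sh@107 and @109 above, asserts that the requested 1024 pages equal nr_hugepages + surp + resv before re-reading HugePages_Total. A self-contained sketch of that style of accounting check; read_field, expected and the messages are illustrative names, not the script's own:

    #!/usr/bin/env bash
    # Hugepage accounting check in the spirit of setup/hugepages.sh@107-110:
    # the expected pool size should match the kernel's pool plus surplus and
    # reserved pages, and (with no shrink/grow) the pool itself should be unchanged.
    expected=1024                               # requested pool size (assumption)

    read_field() { awk -v f="$1" '$1 == f":" { print $2 }' /proc/meminfo; }

    total=$(read_field HugePages_Total)
    surp=$(read_field HugePages_Surp)
    resv=$(read_field HugePages_Rsvd)
    : "${total:=0}" "${surp:=0}" "${resv:=0}"   # default to 0 if a field is missing

    if (( expected == total + surp + resv )) && (( expected == total )); then
        echo "hugepage pool intact: ${total} pages (surp=${surp}, resv=${resv})"
    else
        echo "unexpected hugepage accounting: total=${total} surp=${surp} resv=${resv}" >&2
        exit 1
    fi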
[... field-by-field scan of that snapshot against HugePages_Total; MemTotal through ShmemPmdMapped each fail the match and take the "continue" branch ...]
setup/common.sh@32 -- # continue 00:05:05.990 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.990 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.990 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.990 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.990 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.990 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.990 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.990 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.249 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.249 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.249 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.249 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:06.250 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:06.250 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:06.250 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:06.250 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:06.250 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:06.250 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:06.250 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:06.250 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:06.250 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:06.250 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:06.250 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:06.250 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:06.250 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:06.250 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:06.250 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.250 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.250 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:06.250 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:06.250 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.250 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.250 00:31:28 
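The trace above is setup/common.sh's get_meminfo walking meminfo key/value pairs with IFS=': ' read -r var val _ and returning the value once the requested key matches. A minimal standalone sketch of that idiom follows; the helper name meminfo_get and its exact structure are illustrative assumptions, not the repo's actual setup/common.sh.

    #!/usr/bin/env bash
    # Illustrative sketch only; meminfo_get is a hypothetical stand-in for the
    # get_meminfo helper whose xtrace output appears above.
    shopt -s extglob   # needed for the +([0-9]) pattern below

    meminfo_get() {
        local key=$1 node=${2:-}
        local file=/proc/meminfo line var val _ mem

        # With a node argument, read that node's counters instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            file=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$file"
        # Per-node files prefix every line with "Node <n> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$key" ]]; then
                echo "${val:-0}"    # e.g. 1024 for HugePages_Total
                return 0
            fi
        done
        return 1
    }

    meminfo_get HugePages_Total      # global hugepage count
    meminfo_get HugePages_Surp 0     # surplus hugepages on node0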
00:05:06.250 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:06.250 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:06.250 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4709096 kB' 'MemUsed: 7533880 kB' 'SwapCached: 0 kB' 'Active: 1036068 kB' 'Inactive: 3995292 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 142092 kB' 'Active(file): 1035020 kB' 'Inactive(file): 3853200 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 356 kB' 'Writeback: 0 kB' 'FilePages: 4899880 kB' 'Mapped: 67136 kB' 'AnonPages: 160756 kB' 'Shmem: 2596 kB' 'KernelStack: 4408 kB' 'PageTables: 3504 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 204328 kB' 'Slab: 270396 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 66068 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@31-32 walks each node0 meminfo key (MemTotal ... HugePages_Free) with IFS=': ' / read -r var val _ / continue until HugePages_Surp is reached, 00:05:06.250-00:05:06.251]
00:05:06.251 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.251 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:06.251 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:06.251 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:06.251 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:06.251 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:06.251 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:06.251 node0=1024 expecting 1024
00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:06.251 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:06.251 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:06.251 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:06.251 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:05:06.251 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:06.251 00:31:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:06.512 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:06.512 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:06.512 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:05:06.512 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:06.512 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:06.512 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:06.512 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:06.512 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
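The CLEAR_HUGE=no / NRHUGE=512 pair above drives scripts/setup.sh to request fewer hugepages than the 1024 already reserved, and the INFO line confirms the allocation is left alone rather than shrunk, which is the behaviour this no_shrink_alloc case exercises. A hedged usage sketch of the same invocation outside the test harness (the path assumes this job's vagrant layout), with the allocation checked afterwards from the kernel's own counters:

    # Request 512 2MiB hugepages without clearing the existing reservation;
    # with 1024 already allocated, setup.sh keeps the larger allocation.
    sudo CLEAR_HUGE=no NRHUGE=512 /home/vagrant/spdk_repo/spdk/scripts/setup.sh

    # Verify the allocation was not shrunk:
    cat /proc/sys/vm/nr_hugepages        # still 1024
    grep HugePages_ /proc/meminfo        # Total/Free/Rsvd/Surp counters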
00:05:06.512 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:06.512 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:06.512 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:06.512 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:06.512 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:06.512 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:06.512 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:06.512 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:06.512 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:06.512 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:06.512 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:06.512 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:06.512 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:06.512 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:06.512 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:06.512 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4710468 kB' 'MemAvailable: 9481712 kB' 'Buffers: 36072 kB' 'Cached: 4863808 kB' 'SwapCached: 0 kB' 'Active: 1036068 kB' 'Inactive: 3995060 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 141864 kB' 'Active(file): 1035024 kB' 'Inactive(file): 3853196 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 672 kB' 'Writeback: 0 kB' 'AnonPages: 160620 kB' 'Mapped: 67324 kB' 'Shmem: 2596 kB' 'KReclaimable: 204328 kB' 'Slab: 270196 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 65868 kB' 'KernelStack: 4324 kB' 'PageTables: 3536 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 508032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19516 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 139116 kB' 'DirectMap2M: 4055040 kB' 'DirectMap1G: 10485760 kB'
[xtrace condensed: setup/common.sh@31-32 walks each /proc/meminfo key (MemTotal ... HardwareCorrupted) with IFS=': ' / read -r var val _ / continue until AnonHugePages is reached, 00:05:06.512-00:05:06.514]
00:05:06.514 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.514 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:06.514 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:06.514 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:06.514 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:06.514 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:06.514 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:06.514 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:06.514 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:06.514 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:06.514 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:06.514 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:06.514 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:06.514 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:06.514 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:06.514 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:06.514 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4710720 kB' 'MemAvailable: 9481964 kB' 'Buffers: 36072 kB' 'Cached: 4863808 kB' 'SwapCached: 0 kB' 'Active: 1036072 kB' 'Inactive: 3995140 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 141944 kB' 'Active(file): 1035024 kB' 'Inactive(file): 3853196 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 676 kB' 'Writeback: 0 kB' 'AnonPages: 160488 kB' 'Mapped: 67372 kB' 'Shmem: 2596 kB' 'KReclaimable: 204328 kB' 'Slab: 270204 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 65876 kB' 'KernelStack: 4292 kB' 'PageTables: 3364 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 508032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19516 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 139116 kB' 'DirectMap2M: 4055040 kB' 'DirectMap1G: 10485760 kB'
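verify_nr_hugepages then pulls AnonHugePages and HugePages_Surp out of /proc/meminfo (the two printf dumps above) before re-checking the totals against what was requested. A simplified sketch of that bookkeeping is shown below; it reuses the hypothetical meminfo_get helper sketched earlier and reads the kernel's nr_hugepages sysctl as a stand-in for the test's requested count, so it is an illustration of the idea rather than the actual checks in setup/hugepages.sh.

    expected=$(< /proc/sys/vm/nr_hugepages)   # requested hugepage count (1024 here)
    total=$(meminfo_get HugePages_Total)
    surp=$(meminfo_get HugePages_Surp)
    resv=$(meminfo_get HugePages_Rsvd)

    # Global check, mirroring hugepages.sh's (( total == nr_hugepages + surp + resv )).
    (( total == expected + surp + resv )) || echo "unexpected global hugepage count" >&2

    # Per-node view, matching the "node0=1024 expecting 1024" line in the log.
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        n=${node_dir##*node}
        echo "node$n=$(meminfo_get HugePages_Total "$n") expecting $expected"
    done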
[xtrace condensed: setup/common.sh@31-32 walks each /proc/meminfo key (MemTotal ... FilePmdMapped) with IFS=': ' / read -r var val _ / continue while looking for HugePages_Surp, 00:05:06.514-00:05:06.516]
00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4710960 kB' 'MemAvailable: 9482204 kB' 'Buffers: 36072 kB' 'Cached: 4863808 kB' 'SwapCached: 0 kB' 'Active: 1036072 kB' 'Inactive: 3994640 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 141444 kB' 'Active(file): 1035024 kB' 'Inactive(file): 3853196 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 676 kB' 'Writeback: 0 kB' 'AnonPages: 160196 kB' 'Mapped: 67328 kB' 'Shmem: 2596 kB' 'KReclaimable: 204328 kB' 'Slab: 270204 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 65876 kB' 'KernelStack: 4252 kB' 'PageTables: 3180 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 
5072912 kB' 'Committed_AS: 508032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19516 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 139116 kB' 'DirectMap2M: 4055040 kB' 'DirectMap1G: 10485760 kB' 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.516 00:31:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.516 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.517 00:31:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.517 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.518 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.518 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.518 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.518 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.518 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.518 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:06.518 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.518 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.518 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.518 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.518 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.518 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.518 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.518 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.518 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.779 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.779 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.779 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.779 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.779 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.779 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.779 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:06.779 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:06.779 nr_hugepages=1024 00:05:06.779 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:06.779 resv_hugepages=0 00:05:06.779 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:06.779 surplus_hugepages=0 00:05:06.779 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:06.779 anon_hugepages=0 00:05:06.779 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:06.779 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:06.779 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:06.779 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:06.779 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:06.779 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:06.779 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:06.779 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.779 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.779 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.779 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.779 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 
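The trace above is setup/common.sh's get_meminfo walking every key of /proc/meminfo until it reaches the requested field (HugePages_Surp, then HugePages_Rsvd, both 0 here). A minimal stand-alone sketch of that kind of lookup, assuming nothing about the real helper beyond what the trace shows; the function name, argument handling and variable names below are hypothetical:

    # Hypothetical helper (not the real setup/common.sh get_meminfo): look one key
    # up in /proc/meminfo, or in a per-node meminfo file whose lines carry a
    # "Node <n> " prefix, using the same IFS=': ' read idea as the trace.
    get_meminfo_value() {
        local key=$1 file=${2:-/proc/meminfo}   # key name, optional meminfo path
        local line var val _
        while IFS= read -r line; do
            # per-node files prefix every line with "Node <n> "; drop that prefix
            [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$key" ]]; then
                echo "$val"                     # numeric value only, e.g. 0 or 1024
                return 0
            fi
        done <"$file"
        return 1                                # key not present
    }
    # e.g. get_meminfo_value HugePages_Rsvd                                      -> 0
    #      get_meminfo_value HugePages_Surp /sys/devices/system/node/node0/meminfo -> 0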
00:05:06.779 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:06.779 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-28 -- # [get=HugePages_Total, node unset, mem_f=/proc/meminfo, mapfile -t mem]
00:05:06.779 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4710960 kB' 'MemAvailable: 9482204 kB' 'Buffers: 36072 kB' 'Cached: 4863808 kB' 'SwapCached: 0 kB' 'Active: 1036072 kB' 'Inactive: 3995108 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 141912 kB' 'Active(file): 1035024 kB' 'Inactive(file): 3853196 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 676 kB' 'Writeback: 0 kB' 'AnonPages: 160624 kB' 'Mapped: 67072 kB' 'Shmem: 2596 kB' 'KReclaimable: 204328 kB' 'Slab: 270108 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 65780 kB' 'KernelStack: 4300 kB' 'PageTables: 3396 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 508032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19532 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 139116 kB' 'DirectMap2M: 4055040 kB' 'DirectMap1G: 10485760 kB'
00:05:06.779 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [scan of the /proc/meminfo keys, MemTotal through FilePmdMapped: none match HugePages_Total, continue]
00:05:06.781 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:06.781 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:06.781 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:06.781 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:06.781 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:06.781 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:06.781 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:06.781 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:06.781 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:06.781 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:06.781 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:06.781 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
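The hugepages.sh steps just logged amount to an accounting check: HugePages_Total read back from /proc/meminfo must equal the configured nr_hugepages plus surplus and reserved pages, and the same expectation is then spread over each NUMA node found under /sys/devices/system/node. A sketch of that check under the same assumptions as above, reusing the hypothetical get_meminfo_value helper (this is not the actual setup/hugepages.sh code):

    # Assumed simplification of the accounting the test performs.
    nr_hugepages=1024
    surp=$(get_meminfo_value HugePages_Surp)
    resv=$(get_meminfo_value HugePages_Rsvd)
    total=$(get_meminfo_value HugePages_Total)
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2

    # Enumerate NUMA nodes the same way the trace does (node0, node1, ...).
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        echo "node $node HugePages_Surp: $(get_meminfo_value HugePages_Surp "$node_dir/meminfo")"
    done

With the snapshot above (HugePages_Total: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0) the arithmetic check passes, which is what the (( 1024 == nr_hugepages + surp + resv )) line in the trace is verifying.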
00:05:06.781 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:06.781 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-24 -- # [get=HugePages_Surp, node=0, /sys/devices/system/node/node0/meminfo exists, mem_f=/sys/devices/system/node/node0/meminfo]
00:05:06.781 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28-29 -- # [mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")]
00:05:06.781 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4710708 kB' 'MemUsed: 7532268 kB' 'SwapCached: 0 kB' 'Active: 1036064 kB' 'Inactive: 3994728 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 141532 kB' 'Active(file): 1035024 kB' 'Inactive(file): 3853196 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 676 kB' 'Writeback: 0 kB' 'FilePages: 4899880 kB' 'Mapped: 67060 kB' 'AnonPages: 160204 kB' 'Shmem: 2596 kB' 'KernelStack: 4348 kB' 'PageTables: 3240 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 204328 kB' 'Slab: 270172 kB' 'SReclaimable: 204328 kB' 'SUnreclaim: 65844 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:06.782 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [scan of the node 0 meminfo keys, MemTotal through ShmemPmdMapped: none match HugePages_Surp, continue]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.782 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.782 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.782 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.782 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.782 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.782 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.782 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.782 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.782 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.782 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.782 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.782 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.782 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.782 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.782 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.782 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.782 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.782 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.782 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:06.782 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:06.782 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:06.782 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:06.782 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:06.782 node0=1024 expecting 1024 00:05:06.782 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:06.782 00:31:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:06.782 00:05:06.782 real 0m1.588s 00:05:06.782 user 0m0.611s 00:05:06.782 sys 0m1.079s 00:05:06.782 00:31:29 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.783 00:31:29 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:06.783 ************************************ 00:05:06.783 END TEST no_shrink_alloc 00:05:06.783 ************************************ 00:05:06.783 00:31:29 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:06.783 00:31:29 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:06.783 00:31:29 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 
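The no_shrink_alloc trace above is dominated by setup/common.sh walking /sys/devices/system/node/node0/meminfo field by field until it reaches HugePages_Surp. A minimal sketch of that per-node read, assuming bash with extglob available; the sysfs path and field names come from the trace, while the helper name below is illustrative:

shopt -s extglob

get_node_meminfo() {    # illustrative name; setup/common.sh structures this differently
    local want=$1 line var val _
    local -a mem
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")       # drop the leading "Node 0 " prefix, as in the trace
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$want" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done
    echo 0
}

echo "node0 HugePages_Free:  $(get_node_meminfo HugePages_Free)"
echo "node0 HugePages_Total: $(get_node_meminfo HugePages_Total)"
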
00:05:06.783 00:31:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:06.783 00:31:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:06.783 00:31:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:06.783 00:31:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:06.783 00:31:29 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:06.783 00:31:29 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:06.783 ************************************ 00:05:06.783 END TEST hugepages 00:05:06.783 ************************************ 00:05:06.783 00:05:06.783 real 0m6.734s 00:05:06.783 user 0m2.373s 00:05:06.783 sys 0m4.665s 00:05:06.783 00:31:29 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.783 00:31:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:06.783 00:31:29 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:06.783 00:31:29 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.783 00:31:29 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.783 00:31:29 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:06.783 ************************************ 00:05:06.783 START TEST driver 00:05:06.783 ************************************ 00:05:06.783 00:31:29 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:06.783 * Looking for test storage... 00:05:06.783 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:06.783 00:31:29 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:06.783 00:31:29 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:06.783 00:31:29 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:07.351 00:31:29 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:07.351 00:31:29 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.351 00:31:29 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.351 00:31:29 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:07.351 ************************************ 00:05:07.351 START TEST guess_driver 00:05:07.351 ************************************ 00:05:07.351 00:31:29 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:07.351 00:31:29 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:07.351 00:31:29 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:07.351 00:31:29 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:07.351 00:31:29 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:07.351 00:31:29 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:07.351 00:31:29 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:07.351 00:31:29 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:07.610 00:31:29 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:07.610 00:31:29 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # 
iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:07.610 00:31:29 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:07.610 00:31:29 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ N == Y ]] 00:05:07.610 00:31:29 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:07.610 00:31:29 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:07.610 00:31:29 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:07.610 00:31:29 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:07.610 00:31:30 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:07.610 00:31:30 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:07.610 00:31:30 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio.ko 00:05:07.610 insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:05:07.610 00:31:30 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:07.610 00:31:30 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:07.610 00:31:30 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:07.610 Looking for driver=uio_pci_generic 00:05:07.610 00:31:30 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:07.610 00:31:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:07.610 00:31:30 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:07.610 00:31:30 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.610 00:31:30 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:07.868 00:31:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:07.868 00:31:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:07.868 00:31:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.127 00:31:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.127 00:31:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:08.127 00:31:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:09.063 00:31:31 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:09.063 00:31:31 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:09.063 00:31:31 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:09.063 00:31:31 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:09.631 00:05:09.631 real 0m2.028s 00:05:09.631 user 0m0.431s 00:05:09.631 sys 0m1.610s 00:05:09.631 00:31:32 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.631 ************************************ 00:05:09.631 END TEST guess_driver 00:05:09.631 ************************************ 00:05:09.631 00:31:32 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:09.631 00:05:09.631 real 0m2.751s 00:05:09.631 user 
0m0.725s 00:05:09.631 sys 0m2.049s 00:05:09.631 00:31:32 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.631 00:31:32 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:09.631 ************************************ 00:05:09.631 END TEST driver 00:05:09.631 ************************************ 00:05:09.631 00:31:32 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:09.631 00:31:32 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.631 00:31:32 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.631 00:31:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:09.631 ************************************ 00:05:09.631 START TEST devices 00:05:09.631 ************************************ 00:05:09.631 00:31:32 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:09.631 * Looking for test storage... 00:05:09.631 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:09.631 00:31:32 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:09.631 00:31:32 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:09.631 00:31:32 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:09.631 00:31:32 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:10.201 00:31:32 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:10.201 00:31:32 setup.sh.devices -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:05:10.201 00:31:32 setup.sh.devices -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:05:10.201 00:31:32 setup.sh.devices -- common/autotest_common.sh@1668 -- # local nvme bdf 00:05:10.201 00:31:32 setup.sh.devices -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:05:10.201 00:31:32 setup.sh.devices -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:10.201 00:31:32 setup.sh.devices -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:05:10.201 00:31:32 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:10.201 00:31:32 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:05:10.201 00:31:32 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:10.201 00:31:32 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:10.201 00:31:32 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:10.201 00:31:32 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:10.201 00:31:32 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:10.201 00:31:32 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:10.201 00:31:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:10.201 00:31:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:10.201 00:31:32 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:10.201 00:31:32 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:10.201 00:31:32 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:10.201 00:31:32 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:10.201 00:31:32 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 
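The guess_driver trace above reduces to: prefer vfio when IOMMU groups are present (or unsafe no-IOMMU mode is enabled), otherwise fall back to uio_pci_generic if modprobe can resolve the module to a .ko. A rough sketch of that decision, assuming standard kmod/coreutils; the uio_pci_generic branch and sysfs paths mirror the trace, the vfio-pci name and the helper name are assumptions:

shopt -s nullglob    # so an empty /sys/kernel/iommu_groups expands to zero elements

pick_driver() {      # illustrative helper; driver.sh splits this across vfio()/uio()
    local unsafe=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    local groups=(/sys/kernel/iommu_groups/*)
    if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
        echo vfio-pci                      # assumed name for the vfio path; this run took the uio branch
        return 0
    fi
    # No IOMMU groups: accept uio_pci_generic only if modprobe resolves it to a .ko
    if modprobe --show-depends uio_pci_generic 2> /dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
        return 0
    fi
    echo 'No valid driver found'
    return 1
}

driver=$(pick_driver) && echo "Looking for driver=$driver"
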
00:05:10.201 No valid GPT data, bailing 00:05:10.201 00:31:32 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:10.201 00:31:32 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:10.201 00:31:32 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:10.201 00:31:32 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:10.201 00:31:32 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:10.201 00:31:32 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:10.201 00:31:32 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:10.201 00:31:32 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:10.201 00:31:32 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:10.201 00:31:32 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:10.201 00:31:32 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:10.201 00:31:32 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:10.201 00:31:32 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:10.201 00:31:32 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.201 00:31:32 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.201 00:31:32 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:10.201 ************************************ 00:05:10.201 START TEST nvme_mount 00:05:10.201 ************************************ 00:05:10.201 00:31:32 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:10.201 00:31:32 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:10.201 00:31:32 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:10.201 00:31:32 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:10.201 00:31:32 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:10.201 00:31:32 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:10.201 00:31:32 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:10.201 00:31:32 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:10.201 00:31:32 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:10.201 00:31:32 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:10.201 00:31:32 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:10.201 00:31:32 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:10.201 00:31:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:10.201 00:31:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:10.201 00:31:32 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:10.201 00:31:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:10.201 00:31:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:10.201 00:31:32 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:10.202 00:31:32 setup.sh.devices.nvme_mount -- 
setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:10.202 00:31:32 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:11.582 Creating new GPT entries in memory. 00:05:11.582 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:11.582 other utilities. 00:05:11.582 00:31:33 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:11.582 00:31:33 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:11.582 00:31:33 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:11.582 00:31:33 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:11.582 00:31:33 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:12.519 Creating new GPT entries in memory. 00:05:12.519 The operation has completed successfully. 00:05:12.519 00:31:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:12.519 00:31:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:12.519 00:31:34 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 104736 00:05:12.519 00:31:34 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:12.519 00:31:34 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:12.519 00:31:34 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:12.519 00:31:34 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:12.519 00:31:34 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:12.519 00:31:34 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:12.519 00:31:34 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:10.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:12.519 00:31:34 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:12.519 00:31:34 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:12.519 00:31:34 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:12.519 00:31:34 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:12.519 00:31:34 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:12.519 00:31:34 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:12.519 00:31:34 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:12.519 00:31:34 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:12.519 00:31:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.519 00:31:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:12.519 00:31:34 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:12.519 00:31:34 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.519 00:31:34 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:12.519 00:31:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:12.519 00:31:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:12.519 00:31:35 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:12.519 00:31:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.519 00:31:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:12.519 00:31:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.778 00:31:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:12.778 00:31:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:13.715 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:13.715 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:13.715 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:13.715 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:13.715 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount 
-- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:10.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.715 00:31:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:13.974 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:13.974 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:13.974 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:13.974 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.974 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:13.974 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.232 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:14.232 00:31:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.168 00:31:37 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:15.168 00:31:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:15.168 00:31:37 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.168 00:31:37 setup.sh.devices.nvme_mount -- setup/devices.sh@73 
-- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:15.168 00:31:37 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:15.168 00:31:37 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.168 00:31:37 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:10.0 data@nvme0n1 '' '' 00:05:15.168 00:31:37 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:15.168 00:31:37 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:15.168 00:31:37 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:15.168 00:31:37 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:15.168 00:31:37 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:15.168 00:31:37 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:15.168 00:31:37 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:15.168 00:31:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.168 00:31:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:15.169 00:31:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:15.169 00:31:37 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.169 00:31:37 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:15.427 00:31:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:15.428 00:31:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:15.428 00:31:37 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:15.428 00:31:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.428 00:31:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:15.428 00:31:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.687 00:31:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:15.687 00:31:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.624 00:31:39 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:16.624 00:31:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:16.624 00:31:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:16.624 00:31:39 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:16.624 00:31:39 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.624 00:31:39 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:16.624 00:31:39 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:16.624 00:31:39 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:16.624 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:16.624 
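The nvme_mount cycle traced above, condensed: zap the GPT, create one small test partition, format and mount it, drop a marker file for the verify step, then unmount and wipe. A sketch assuming the same /dev/nvme0n1 disk as the trace and an illustrative mount point; it is destructive if pointed at a real disk, and the real test waits on udev block uevents via sync_dev_uevents.sh rather than calling partprobe:

disk=/dev/nvme0n1
mnt=/tmp/nvme_mount_test                      # illustrative; the test mounts under test/setup/nvme_mount

sgdisk "$disk" --zap-all                      # drop any existing GPT/MBR structures
sgdisk "$disk" --new=1:2048:264191            # one small test partition, range as in the trace
partprobe "$disk"                             # stand-in for the uevent wait in the trace

mkfs.ext4 -qF "${disk}p1"
mkdir -p "$mnt"
mount "${disk}p1" "$mnt"
touch "$mnt/test_nvme"                        # dummy file the verify step checks for
[[ -e $mnt/test_nvme ]] && echo "mount verified"

umount "$mnt"
wipefs --all "${disk}p1"
wipefs --all "$disk"
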
00:05:16.624 real 0m6.231s 00:05:16.624 user 0m0.800s 00:05:16.624 sys 0m3.490s 00:05:16.624 00:31:39 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.624 00:31:39 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:16.624 ************************************ 00:05:16.624 END TEST nvme_mount 00:05:16.624 ************************************ 00:05:16.624 00:31:39 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:16.624 00:31:39 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.624 00:31:39 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.624 00:31:39 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:16.624 ************************************ 00:05:16.624 START TEST dm_mount 00:05:16.624 ************************************ 00:05:16.624 00:31:39 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:16.624 00:31:39 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:16.624 00:31:39 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:16.624 00:31:39 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:16.624 00:31:39 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:16.624 00:31:39 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:16.624 00:31:39 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:16.624 00:31:39 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:16.624 00:31:39 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:16.624 00:31:39 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:16.624 00:31:39 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:16.624 00:31:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:16.624 00:31:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:16.624 00:31:39 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:16.624 00:31:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:16.624 00:31:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:16.624 00:31:39 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:16.624 00:31:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:16.624 00:31:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:16.624 00:31:39 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:16.624 00:31:39 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:16.624 00:31:39 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:17.561 Creating new GPT entries in memory. 00:05:17.561 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:17.561 other utilities. 00:05:17.561 00:31:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:17.561 00:31:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:17.561 00:31:40 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:17.561 00:31:40 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:17.561 00:31:40 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:18.938 Creating new GPT entries in memory. 00:05:18.938 The operation has completed successfully. 00:05:18.938 00:31:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:18.938 00:31:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:18.938 00:31:41 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:18.938 00:31:41 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:18.938 00:31:41 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:19.875 The operation has completed successfully. 00:05:19.875 00:31:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:19.875 00:31:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:19.875 00:31:42 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 105226 00:05:19.875 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:19.875 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:19.875 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:19.875 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:19.875 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:19.876 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:19.876 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:19.876 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:19.876 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:19.876 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:19.876 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:19.876 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:19.876 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:19.876 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:19.876 00:31:42 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:19.876 00:31:42 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:19.876 00:31:42 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:19.876 00:31:42 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:19.876 00:31:42 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:19.876 
00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:10.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:19.876 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:19.876 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:19.876 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:19.876 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:19.876 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:19.876 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:19.876 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:19.876 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:19.876 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.876 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:19.876 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:19.876 00:31:42 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.876 00:31:42 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:20.135 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:20.135 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:20.135 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:20.135 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.135 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:20.135 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.394 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:20.394 00:31:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.331 00:31:43 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:21.331 00:31:43 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:21.331 00:31:43 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:21.331 00:31:43 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:21.331 00:31:43 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:21.331 00:31:43 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:21.331 00:31:43 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:10.0 
holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:21.331 00:31:43 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:21.331 00:31:43 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:21.331 00:31:43 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:21.331 00:31:43 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:21.331 00:31:43 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:21.331 00:31:43 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:21.331 00:31:43 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:21.331 00:31:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.331 00:31:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:21.331 00:31:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:21.331 00:31:43 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.331 00:31:43 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:21.590 00:31:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:21.590 00:31:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:21.590 00:31:44 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:21.590 00:31:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.590 00:31:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:21.590 00:31:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.590 00:31:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:21.590 00:31:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.527 00:31:45 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:22.527 00:31:45 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:22.527 00:31:45 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:22.527 00:31:45 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:22.527 00:31:45 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:22.527 00:31:45 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:22.527 00:31:45 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:22.527 00:31:45 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:22.527 00:31:45 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:22.527 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:22.527 00:31:45 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:22.527 00:31:45 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:22.786 00:05:22.786 real 0m6.055s 
00:05:22.786 user 0m0.492s 00:05:22.786 sys 0m2.422s 00:05:22.786 00:31:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.786 00:31:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:22.786 ************************************ 00:05:22.786 END TEST dm_mount 00:05:22.786 ************************************ 00:05:22.786 00:31:45 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:22.786 00:31:45 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:22.786 00:31:45 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:22.786 00:31:45 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:22.786 00:31:45 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:22.786 00:31:45 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:22.786 00:31:45 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:22.786 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:22.786 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:22.786 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:22.786 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:22.786 00:31:45 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:22.786 00:31:45 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:22.786 00:31:45 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:22.786 00:31:45 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:22.786 00:31:45 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:22.786 00:31:45 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:22.786 00:31:45 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:22.786 00:05:22.786 real 0m13.176s 00:05:22.786 user 0m1.721s 00:05:22.786 sys 0m6.377s 00:05:22.786 00:31:45 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.786 00:31:45 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:22.786 ************************************ 00:05:22.786 END TEST devices 00:05:22.786 ************************************ 00:05:22.786 00:05:22.786 real 0m29.217s 00:05:22.786 user 0m6.691s 00:05:22.786 sys 0m17.934s 00:05:22.786 00:31:45 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.786 00:31:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:22.786 ************************************ 00:05:22.786 END TEST setup.sh 00:05:22.787 ************************************ 00:05:22.787 00:31:45 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:23.355 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:23.355 Hugepages 00:05:23.355 node hugesize free / total 00:05:23.355 node0 1048576kB 0 / 0 00:05:23.355 node0 2048kB 2048 / 2048 00:05:23.355 00:05:23.355 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:23.355 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:23.614 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:23.614 00:31:46 -- spdk/autotest.sh@130 -- # uname -s 00:05:23.614 00:31:46 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 
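The dm_mount sequence that finished above carves two partitions and joins them into a single device-mapper target before formatting and mounting it like a plain disk. A sketch under the assumption that the target is a simple linear concatenation (the trace never prints the table handed to dmsetup create); the partition ranges and the nvme_dm_test name mirror the trace, the mount point is illustrative:

disk=/dev/nvme0n1
sgdisk "$disk" --zap-all
sgdisk "$disk" --new=1:2048:264191
sgdisk "$disk" --new=2:264192:526335
partprobe "$disk"

# Concatenate the two partitions into one linear dm device (sizes in 512-byte sectors).
p1_sectors=$(blockdev --getsz "${disk}p1")
p2_sectors=$(blockdev --getsz "${disk}p2")
dmsetup create nvme_dm_test <<EOF
0 $p1_sectors linear ${disk}p1 0
$p1_sectors $p2_sectors linear ${disk}p2 0
EOF

readlink -f /dev/mapper/nvme_dm_test          # resolves to /dev/dm-0 in the trace
mkfs.ext4 -qF /dev/mapper/nvme_dm_test
mkdir -p /tmp/dm_mount_test && mount /dev/mapper/nvme_dm_test /tmp/dm_mount_test

# Tear-down, as in the cleanup_dm trace: unmount, remove the mapping, wipe signatures.
umount /tmp/dm_mount_test
dmsetup remove --force nvme_dm_test
wipefs --all "${disk}p1" "${disk}p2"
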
00:05:23.614 00:31:46 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:23.614 00:31:46 -- common/autotest_common.sh@1529 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:24.181 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:24.181 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:25.163 00:31:47 -- common/autotest_common.sh@1530 -- # sleep 1 00:05:26.100 00:31:48 -- common/autotest_common.sh@1531 -- # bdfs=() 00:05:26.100 00:31:48 -- common/autotest_common.sh@1531 -- # local bdfs 00:05:26.100 00:31:48 -- common/autotest_common.sh@1532 -- # bdfs=($(get_nvme_bdfs)) 00:05:26.100 00:31:48 -- common/autotest_common.sh@1532 -- # get_nvme_bdfs 00:05:26.100 00:31:48 -- common/autotest_common.sh@1511 -- # bdfs=() 00:05:26.100 00:31:48 -- common/autotest_common.sh@1511 -- # local bdfs 00:05:26.100 00:31:48 -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:26.100 00:31:48 -- common/autotest_common.sh@1512 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:26.100 00:31:48 -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:05:26.100 00:31:48 -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:05:26.100 00:31:48 -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:00:10.0 00:05:26.100 00:31:48 -- common/autotest_common.sh@1534 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:26.666 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:26.666 Waiting for block devices as requested 00:05:26.666 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:26.666 00:31:49 -- common/autotest_common.sh@1536 -- # for bdf in "${bdfs[@]}" 00:05:26.666 00:31:49 -- common/autotest_common.sh@1537 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:26.666 00:31:49 -- common/autotest_common.sh@1500 -- # readlink -f /sys/class/nvme/nvme0 00:05:26.666 00:31:49 -- common/autotest_common.sh@1500 -- # grep 0000:00:10.0/nvme/nvme 00:05:26.666 00:31:49 -- common/autotest_common.sh@1500 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:05:26.666 00:31:49 -- common/autotest_common.sh@1501 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 ]] 00:05:26.666 00:31:49 -- common/autotest_common.sh@1505 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:05:26.666 00:31:49 -- common/autotest_common.sh@1505 -- # printf '%s\n' nvme0 00:05:26.666 00:31:49 -- common/autotest_common.sh@1537 -- # nvme_ctrlr=/dev/nvme0 00:05:26.666 00:31:49 -- common/autotest_common.sh@1538 -- # [[ -z /dev/nvme0 ]] 00:05:26.666 00:31:49 -- common/autotest_common.sh@1543 -- # nvme id-ctrl /dev/nvme0 00:05:26.666 00:31:49 -- common/autotest_common.sh@1543 -- # grep oacs 00:05:26.666 00:31:49 -- common/autotest_common.sh@1543 -- # cut -d: -f2 00:05:26.666 00:31:49 -- common/autotest_common.sh@1543 -- # oacs=' 0x12a' 00:05:26.666 00:31:49 -- common/autotest_common.sh@1544 -- # oacs_ns_manage=8 00:05:26.666 00:31:49 -- common/autotest_common.sh@1546 -- # [[ 8 -ne 0 ]] 00:05:26.666 00:31:49 -- common/autotest_common.sh@1552 -- # nvme id-ctrl /dev/nvme0 00:05:26.666 00:31:49 -- common/autotest_common.sh@1552 -- # grep unvmcap 00:05:26.666 00:31:49 -- common/autotest_common.sh@1552 -- # cut -d: -f2 00:05:26.666 00:31:49 -- common/autotest_common.sh@1552 -- # unvmcap=' 0' 00:05:26.666 00:31:49 -- common/autotest_common.sh@1553 -- # [[ 0 -eq 0 ]] 00:05:26.666 00:31:49 -- 
common/autotest_common.sh@1555 -- # continue 00:05:26.666 00:31:49 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:26.666 00:31:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:26.666 00:31:49 -- common/autotest_common.sh@10 -- # set +x 00:05:26.666 00:31:49 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:26.666 00:31:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:26.666 00:31:49 -- common/autotest_common.sh@10 -- # set +x 00:05:26.666 00:31:49 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:27.231 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:27.489 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:28.427 00:31:50 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:28.427 00:31:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:28.427 00:31:50 -- common/autotest_common.sh@10 -- # set +x 00:05:28.427 00:31:50 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:28.427 00:31:50 -- common/autotest_common.sh@1589 -- # mapfile -t bdfs 00:05:28.427 00:31:50 -- common/autotest_common.sh@1589 -- # get_nvme_bdfs_by_id 0x0a54 00:05:28.427 00:31:50 -- common/autotest_common.sh@1575 -- # bdfs=() 00:05:28.427 00:31:50 -- common/autotest_common.sh@1575 -- # local bdfs 00:05:28.427 00:31:50 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs 00:05:28.427 00:31:50 -- common/autotest_common.sh@1511 -- # bdfs=() 00:05:28.427 00:31:50 -- common/autotest_common.sh@1511 -- # local bdfs 00:05:28.427 00:31:50 -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:28.427 00:31:50 -- common/autotest_common.sh@1512 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:28.427 00:31:50 -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:05:28.427 00:31:50 -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:05:28.427 00:31:50 -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:00:10.0 00:05:28.427 00:31:50 -- common/autotest_common.sh@1577 -- # for bdf in $(get_nvme_bdfs) 00:05:28.427 00:31:50 -- common/autotest_common.sh@1578 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:28.427 00:31:50 -- common/autotest_common.sh@1578 -- # device=0x0010 00:05:28.427 00:31:50 -- common/autotest_common.sh@1579 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:28.427 00:31:50 -- common/autotest_common.sh@1584 -- # printf '%s\n' 00:05:28.427 00:31:50 -- common/autotest_common.sh@1590 -- # [[ -z '' ]] 00:05:28.427 00:31:50 -- common/autotest_common.sh@1591 -- # return 0 00:05:28.427 00:31:50 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']' 00:05:28.427 00:31:50 -- spdk/autotest.sh@151 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:28.427 00:31:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.427 00:31:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.427 00:31:50 -- common/autotest_common.sh@10 -- # set +x 00:05:28.427 ************************************ 00:05:28.427 START TEST unittest 00:05:28.427 ************************************ 00:05:28.427 00:31:50 unittest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:28.427 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:28.427 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:05:28.427 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:05:28.427 +++ dirname 
/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:28.427 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 00:05:28.427 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:28.427 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:05:28.427 ++ rpc_py=rpc_cmd 00:05:28.427 ++ set -e 00:05:28.427 ++ shopt -s nullglob 00:05:28.427 ++ shopt -s extglob 00:05:28.427 ++ shopt -s inherit_errexit 00:05:28.427 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:05:28.427 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:28.427 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:28.427 +++ CONFIG_WPDK_DIR= 00:05:28.427 +++ CONFIG_ASAN=y 00:05:28.427 +++ CONFIG_VBDEV_COMPRESS=n 00:05:28.427 +++ CONFIG_HAVE_EXECINFO_H=y 00:05:28.427 +++ CONFIG_USDT=n 00:05:28.427 +++ CONFIG_CUSTOMOCF=n 00:05:28.427 +++ CONFIG_PREFIX=/usr/local 00:05:28.427 +++ CONFIG_RBD=n 00:05:28.427 +++ CONFIG_LIBDIR= 00:05:28.427 +++ CONFIG_IDXD=y 00:05:28.427 +++ CONFIG_NVME_CUSE=y 00:05:28.427 +++ CONFIG_SMA=n 00:05:28.427 +++ CONFIG_VTUNE=n 00:05:28.427 +++ CONFIG_TSAN=n 00:05:28.427 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:28.427 +++ CONFIG_VFIO_USER_DIR= 00:05:28.427 +++ CONFIG_PGO_CAPTURE=n 00:05:28.427 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:28.427 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:28.427 +++ CONFIG_LTO=n 00:05:28.427 +++ CONFIG_ISCSI_INITIATOR=y 00:05:28.427 +++ CONFIG_CET=n 00:05:28.427 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:28.427 +++ CONFIG_OCF_PATH= 00:05:28.427 +++ CONFIG_RDMA_SET_TOS=y 00:05:28.427 +++ CONFIG_HAVE_ARC4RANDOM=n 00:05:28.427 +++ CONFIG_HAVE_LIBARCHIVE=n 00:05:28.427 +++ CONFIG_UBLK=n 00:05:28.427 +++ CONFIG_ISAL_CRYPTO=y 00:05:28.427 +++ CONFIG_OPENSSL_PATH= 00:05:28.427 +++ CONFIG_OCF=n 00:05:28.427 +++ CONFIG_FUSE=n 00:05:28.427 +++ CONFIG_VTUNE_DIR= 00:05:28.427 +++ CONFIG_FUZZER_LIB= 00:05:28.427 +++ CONFIG_FUZZER=n 00:05:28.427 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:28.427 +++ CONFIG_CRYPTO=n 00:05:28.427 +++ CONFIG_PGO_USE=n 00:05:28.427 +++ CONFIG_VHOST=y 00:05:28.427 +++ CONFIG_DAOS=n 00:05:28.427 +++ CONFIG_DPDK_INC_DIR= 00:05:28.427 +++ CONFIG_DAOS_DIR= 00:05:28.427 +++ CONFIG_UNIT_TESTS=y 00:05:28.427 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:28.427 +++ CONFIG_VIRTIO=y 00:05:28.427 +++ CONFIG_DPDK_UADK=n 00:05:28.427 +++ CONFIG_COVERAGE=y 00:05:28.427 +++ CONFIG_RDMA=y 00:05:28.427 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:28.427 +++ CONFIG_URING_PATH= 00:05:28.427 +++ CONFIG_XNVME=n 00:05:28.427 +++ CONFIG_VFIO_USER=n 00:05:28.427 +++ CONFIG_ARCH=native 00:05:28.427 +++ CONFIG_HAVE_EVP_MAC=y 00:05:28.427 +++ CONFIG_URING_ZNS=n 00:05:28.427 +++ CONFIG_WERROR=y 00:05:28.427 +++ CONFIG_HAVE_LIBBSD=n 00:05:28.427 +++ CONFIG_UBSAN=y 00:05:28.427 +++ CONFIG_IPSEC_MB_DIR= 00:05:28.427 +++ CONFIG_GOLANG=n 00:05:28.427 +++ CONFIG_ISAL=y 00:05:28.427 +++ CONFIG_IDXD_KERNEL=n 00:05:28.427 +++ CONFIG_DPDK_LIB_DIR= 00:05:28.427 +++ CONFIG_RDMA_PROV=verbs 00:05:28.427 +++ CONFIG_APPS=y 00:05:28.427 +++ CONFIG_SHARED=n 00:05:28.427 +++ CONFIG_HAVE_KEYUTILS=y 00:05:28.427 +++ CONFIG_FC_PATH= 00:05:28.427 +++ CONFIG_DPDK_PKG_CONFIG=n 00:05:28.427 +++ CONFIG_FC=n 00:05:28.427 +++ CONFIG_AVAHI=n 00:05:28.427 +++ CONFIG_FIO_PLUGIN=y 00:05:28.427 +++ CONFIG_RAID5F=y 00:05:28.427 +++ CONFIG_EXAMPLES=y 00:05:28.427 +++ CONFIG_TESTS=y 00:05:28.427 +++ CONFIG_CRYPTO_MLX5=n 00:05:28.427 +++ CONFIG_MAX_LCORES=128 00:05:28.427 +++ CONFIG_IPSEC_MB=n 00:05:28.427 +++ CONFIG_PGO_DIR= 
00:05:28.427 +++ CONFIG_DEBUG=y 00:05:28.427 +++ CONFIG_DPDK_COMPRESSDEV=n 00:05:28.427 +++ CONFIG_CROSS_PREFIX= 00:05:28.427 +++ CONFIG_URING=n 00:05:28.427 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:28.427 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:28.427 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:05:28.427 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:05:28.427 +++ _root=/home/vagrant/spdk_repo/spdk 00:05:28.427 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:05:28.427 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:05:28.427 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:05:28.427 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:28.427 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:28.427 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:28.427 +++ VHOST_APP=("$_app_dir/vhost") 00:05:28.427 +++ DD_APP=("$_app_dir/spdk_dd") 00:05:28.427 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:05:28.427 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:05:28.427 +++ [[ #ifndef SPDK_CONFIG_H 00:05:28.427 #define SPDK_CONFIG_H 00:05:28.427 #define SPDK_CONFIG_APPS 1 00:05:28.427 #define SPDK_CONFIG_ARCH native 00:05:28.427 #define SPDK_CONFIG_ASAN 1 00:05:28.427 #undef SPDK_CONFIG_AVAHI 00:05:28.427 #undef SPDK_CONFIG_CET 00:05:28.427 #define SPDK_CONFIG_COVERAGE 1 00:05:28.427 #define SPDK_CONFIG_CROSS_PREFIX 00:05:28.427 #undef SPDK_CONFIG_CRYPTO 00:05:28.427 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:28.427 #undef SPDK_CONFIG_CUSTOMOCF 00:05:28.427 #undef SPDK_CONFIG_DAOS 00:05:28.427 #define SPDK_CONFIG_DAOS_DIR 00:05:28.427 #define SPDK_CONFIG_DEBUG 1 00:05:28.427 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:28.427 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:28.427 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:28.427 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:28.427 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:28.427 #undef SPDK_CONFIG_DPDK_UADK 00:05:28.427 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:28.427 #define SPDK_CONFIG_EXAMPLES 1 00:05:28.427 #undef SPDK_CONFIG_FC 00:05:28.427 #define SPDK_CONFIG_FC_PATH 00:05:28.427 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:28.427 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:28.427 #undef SPDK_CONFIG_FUSE 00:05:28.427 #undef SPDK_CONFIG_FUZZER 00:05:28.427 #define SPDK_CONFIG_FUZZER_LIB 00:05:28.427 #undef SPDK_CONFIG_GOLANG 00:05:28.428 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:05:28.428 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:05:28.428 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:28.428 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:05:28.428 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:28.428 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:28.428 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:28.428 #define SPDK_CONFIG_IDXD 1 00:05:28.428 #undef SPDK_CONFIG_IDXD_KERNEL 00:05:28.428 #undef SPDK_CONFIG_IPSEC_MB 00:05:28.428 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:28.428 #define SPDK_CONFIG_ISAL 1 00:05:28.428 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:28.428 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:28.428 #define SPDK_CONFIG_LIBDIR 00:05:28.428 #undef SPDK_CONFIG_LTO 00:05:28.428 #define SPDK_CONFIG_MAX_LCORES 128 00:05:28.428 #define SPDK_CONFIG_NVME_CUSE 1 00:05:28.428 #undef SPDK_CONFIG_OCF 00:05:28.428 #define SPDK_CONFIG_OCF_PATH 00:05:28.428 #define SPDK_CONFIG_OPENSSL_PATH 00:05:28.428 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:28.428 #define SPDK_CONFIG_PGO_DIR 00:05:28.428 #undef 
SPDK_CONFIG_PGO_USE 00:05:28.428 #define SPDK_CONFIG_PREFIX /usr/local 00:05:28.428 #define SPDK_CONFIG_RAID5F 1 00:05:28.428 #undef SPDK_CONFIG_RBD 00:05:28.428 #define SPDK_CONFIG_RDMA 1 00:05:28.428 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:28.428 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:28.428 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:28.428 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:28.428 #undef SPDK_CONFIG_SHARED 00:05:28.428 #undef SPDK_CONFIG_SMA 00:05:28.428 #define SPDK_CONFIG_TESTS 1 00:05:28.428 #undef SPDK_CONFIG_TSAN 00:05:28.428 #undef SPDK_CONFIG_UBLK 00:05:28.428 #define SPDK_CONFIG_UBSAN 1 00:05:28.428 #define SPDK_CONFIG_UNIT_TESTS 1 00:05:28.428 #undef SPDK_CONFIG_URING 00:05:28.428 #define SPDK_CONFIG_URING_PATH 00:05:28.428 #undef SPDK_CONFIG_URING_ZNS 00:05:28.428 #undef SPDK_CONFIG_USDT 00:05:28.428 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:28.428 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:28.428 #undef SPDK_CONFIG_VFIO_USER 00:05:28.428 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:28.428 #define SPDK_CONFIG_VHOST 1 00:05:28.428 #define SPDK_CONFIG_VIRTIO 1 00:05:28.428 #undef SPDK_CONFIG_VTUNE 00:05:28.428 #define SPDK_CONFIG_VTUNE_DIR 00:05:28.428 #define SPDK_CONFIG_WERROR 1 00:05:28.428 #define SPDK_CONFIG_WPDK_DIR 00:05:28.428 #undef SPDK_CONFIG_XNVME 00:05:28.428 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:28.428 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:28.428 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:28.428 +++ [[ -e /bin/wpdk_common.sh ]] 00:05:28.428 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:28.428 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:28.428 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:28.428 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:28.428 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:28.428 ++++ export PATH 00:05:28.428 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:28.428 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:28.428 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:28.428 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:28.428 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:28.428 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:05:28.428 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:05:28.428 +++ TEST_TAG=N/A 00:05:28.428 +++ 
TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:05:28.428 +++ PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:05:28.428 ++++ uname -s 00:05:28.428 +++ PM_OS=Linux 00:05:28.428 +++ MONITOR_RESOURCES_SUDO=() 00:05:28.428 +++ declare -A MONITOR_RESOURCES_SUDO 00:05:28.428 +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:05:28.428 +++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:05:28.428 +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:05:28.428 +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:05:28.428 +++ SUDO[0]= 00:05:28.428 +++ SUDO[1]='sudo -E' 00:05:28.428 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:05:28.428 +++ [[ Linux == FreeBSD ]] 00:05:28.428 +++ [[ Linux == Linux ]] 00:05:28.428 +++ [[ QEMU != QEMU ]] 00:05:28.428 +++ [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:05:28.428 ++ : 0 00:05:28.428 ++ export RUN_NIGHTLY 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_RUN_VALGRIND 00:05:28.428 ++ : 1 00:05:28.428 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:05:28.428 ++ : 1 00:05:28.428 ++ export SPDK_TEST_UNITTEST 00:05:28.428 ++ : 00:05:28.428 ++ export SPDK_TEST_AUTOBUILD 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_RELEASE_BUILD 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_ISAL 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_ISCSI 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_ISCSI_INITIATOR 00:05:28.428 ++ : 1 00:05:28.428 ++ export SPDK_TEST_NVME 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_NVME_PMR 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_NVME_BP 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_NVME_CLI 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_NVME_CUSE 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_NVME_FDP 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_NVMF 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_VFIOUSER 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_VFIOUSER_QEMU 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_FUZZER 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_FUZZER_SHORT 00:05:28.428 ++ : rdma 00:05:28.428 ++ export SPDK_TEST_NVMF_TRANSPORT 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_RBD 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_VHOST 00:05:28.428 ++ : 1 00:05:28.428 ++ export SPDK_TEST_BLOCKDEV 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_IOAT 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_BLOBFS 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_VHOST_INIT 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_LVOL 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_VBDEV_COMPRESS 00:05:28.428 ++ : 1 00:05:28.428 ++ export SPDK_RUN_ASAN 00:05:28.428 ++ : 1 00:05:28.428 ++ export SPDK_RUN_UBSAN 00:05:28.428 ++ : 00:05:28.428 ++ export SPDK_RUN_EXTERNAL_DPDK 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_RUN_NON_ROOT 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_CRYPTO 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_FTL 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_OCF 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_VMD 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_OPAL 00:05:28.428 ++ : 00:05:28.428 ++ export SPDK_TEST_NATIVE_DPDK 00:05:28.428 ++ : true 00:05:28.428 ++ export SPDK_AUTOTEST_X 00:05:28.428 ++ : 1 00:05:28.428 ++ export SPDK_TEST_RAID5 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_URING 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_USDT 00:05:28.428 
++ : 0 00:05:28.428 ++ export SPDK_TEST_USE_IGB_UIO 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_SCHEDULER 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_SCANBUILD 00:05:28.428 ++ : 00:05:28.428 ++ export SPDK_TEST_NVMF_NICS 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_SMA 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_DAOS 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_XNVME 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_ACCEL_DSA 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_ACCEL_IAA 00:05:28.428 ++ : 00:05:28.428 ++ export SPDK_TEST_FUZZER_TARGET 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_TEST_NVMF_MDNS 00:05:28.428 ++ : 0 00:05:28.428 ++ export SPDK_JSONRPC_GO_CLIENT 00:05:28.428 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:28.428 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:28.428 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:28.428 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:28.428 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:28.428 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:28.429 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:28.429 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:28.429 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:28.429 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:05:28.429 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:28.429 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:28.429 ++ export PYTHONDONTWRITEBYTECODE=1 00:05:28.429 ++ PYTHONDONTWRITEBYTECODE=1 00:05:28.429 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:28.429 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:28.429 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:28.429 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:28.429 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:05:28.429 ++ rm -rf /var/tmp/asan_suppression_file 00:05:28.429 ++ cat 00:05:28.429 ++ echo leak:libfuse3.so 00:05:28.429 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:28.429 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:28.429 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:28.429 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:28.429 ++ '[' -z /var/spdk/dependencies ']' 00:05:28.429 ++ export DEPENDENCY_DIR 00:05:28.429 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:28.429 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:28.429 ++ export 
SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:28.429 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:28.429 ++ export QEMU_BIN= 00:05:28.429 ++ QEMU_BIN= 00:05:28.429 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:28.429 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:28.429 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:28.429 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:28.429 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:28.429 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:28.429 ++ '[' 0 -eq 0 ']' 00:05:28.429 ++ export valgrind= 00:05:28.429 ++ valgrind= 00:05:28.429 +++ uname -s 00:05:28.429 ++ '[' Linux = Linux ']' 00:05:28.429 ++ HUGEMEM=4096 00:05:28.429 ++ export CLEAR_HUGE=yes 00:05:28.429 ++ CLEAR_HUGE=yes 00:05:28.429 ++ [[ 0 -eq 1 ]] 00:05:28.429 ++ [[ 0 -eq 1 ]] 00:05:28.429 ++ MAKE=make 00:05:28.429 +++ nproc 00:05:28.429 ++ MAKEFLAGS=-j10 00:05:28.429 ++ export HUGEMEM=4096 00:05:28.429 ++ HUGEMEM=4096 00:05:28.429 ++ NO_HUGE=() 00:05:28.429 ++ TEST_MODE= 00:05:28.429 ++ [[ -z '' ]] 00:05:28.429 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:28.429 ++ exec 00:05:28.429 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:28.429 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:05:28.429 ++ set_test_storage 2147483648 00:05:28.429 ++ [[ -v testdir ]] 00:05:28.429 ++ local requested_size=2147483648 00:05:28.429 ++ local mount target_dir 00:05:28.429 ++ local -A mounts fss sizes avails uses 00:05:28.429 ++ local source fs size avail mount use 00:05:28.429 ++ local storage_fallback storage_candidates 00:05:28.429 +++ mktemp -udt spdk.XXXXXX 00:05:28.429 ++ storage_fallback=/tmp/spdk.Sm8QQT 00:05:28.429 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:28.429 ++ [[ -n '' ]] 00:05:28.429 ++ [[ -n '' ]] 00:05:28.429 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.Sm8QQT/tests/unit /tmp/spdk.Sm8QQT 00:05:28.429 ++ requested_size=2214592512 00:05:28.429 ++ read -r source fs size use avail _ mount 00:05:28.429 +++ df -T 00:05:28.429 +++ grep -v Filesystem 00:05:28.429 ++ mounts["$mount"]=tmpfs 00:05:28.429 ++ fss["$mount"]=tmpfs 00:05:28.429 ++ avails["$mount"]=1252601856 00:05:28.429 ++ sizes["$mount"]=1253683200 00:05:28.429 ++ uses["$mount"]=1081344 00:05:28.429 ++ read -r source fs size use avail _ mount 00:05:28.429 ++ mounts["$mount"]=/dev/vda1 00:05:28.429 ++ fss["$mount"]=ext4 00:05:28.429 ++ avails["$mount"]=10128166912 00:05:28.429 ++ sizes["$mount"]=20616794112 00:05:28.429 ++ uses["$mount"]=10471849984 00:05:28.429 ++ read -r source fs size use avail _ mount 00:05:28.429 ++ mounts["$mount"]=tmpfs 00:05:28.429 ++ fss["$mount"]=tmpfs 00:05:28.429 ++ avails["$mount"]=6268403712 00:05:28.429 ++ sizes["$mount"]=6268403712 00:05:28.429 ++ uses["$mount"]=0 00:05:28.429 ++ read -r source fs size use avail _ mount 00:05:28.429 ++ mounts["$mount"]=tmpfs 00:05:28.429 ++ fss["$mount"]=tmpfs 00:05:28.429 ++ avails["$mount"]=5242880 00:05:28.429 ++ sizes["$mount"]=5242880 00:05:28.429 ++ uses["$mount"]=0 00:05:28.429 ++ read -r source fs size use avail _ mount 00:05:28.429 ++ mounts["$mount"]=/dev/vda15 00:05:28.429 ++ fss["$mount"]=vfat 00:05:28.429 ++ avails["$mount"]=103061504 00:05:28.429 ++ 
sizes["$mount"]=109395968 00:05:28.429 ++ uses["$mount"]=6334464 00:05:28.429 ++ read -r source fs size use avail _ mount 00:05:28.429 ++ mounts["$mount"]=tmpfs 00:05:28.429 ++ fss["$mount"]=tmpfs 00:05:28.429 ++ avails["$mount"]=1253675008 00:05:28.429 ++ sizes["$mount"]=1253679104 00:05:28.429 ++ uses["$mount"]=4096 00:05:28.429 ++ read -r source fs size use avail _ mount 00:05:28.429 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_3/ubuntu2204-libvirt/output 00:05:28.429 ++ fss["$mount"]=fuse.sshfs 00:05:28.429 ++ avails["$mount"]=92663468032 00:05:28.429 ++ sizes["$mount"]=105088212992 00:05:28.429 ++ uses["$mount"]=7039311872 00:05:28.429 ++ read -r source fs size use avail _ mount 00:05:28.429 ++ printf '* Looking for test storage...\n' 00:05:28.429 * Looking for test storage... 00:05:28.429 ++ local target_space new_size 00:05:28.429 ++ for target_dir in "${storage_candidates[@]}" 00:05:28.429 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:05:28.429 +++ awk '$1 !~ /Filesystem/{print $6}' 00:05:28.429 ++ mount=/ 00:05:28.429 ++ target_space=10128166912 00:05:28.429 ++ (( target_space == 0 || target_space < requested_size )) 00:05:28.429 ++ (( target_space >= requested_size )) 00:05:28.429 ++ [[ ext4 == tmpfs ]] 00:05:28.429 ++ [[ ext4 == ramfs ]] 00:05:28.429 ++ [[ / == / ]] 00:05:28.429 ++ new_size=12686442496 00:05:28.429 ++ (( new_size * 100 / sizes[/] > 95 )) 00:05:28.429 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:28.429 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:28.429 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:05:28.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:05:28.429 ++ return 0 00:05:28.429 ++ set -o errtrace 00:05:28.429 ++ shopt -s extdebug 00:05:28.429 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:05:28.429 ++ PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:28.429 00:31:51 unittest -- common/autotest_common.sh@1685 -- # true 00:05:28.429 00:31:51 unittest -- common/autotest_common.sh@1687 -- # xtrace_fd 00:05:28.429 00:31:51 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:05:28.429 00:31:51 unittest -- common/autotest_common.sh@29 -- # exec 00:05:28.429 00:31:51 unittest -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:28.429 00:31:51 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:05:28.429 00:31:51 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:28.429 00:31:51 unittest -- common/autotest_common.sh@18 -- # set -x 00:05:28.429 00:31:51 unittest -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:05:28.429 00:31:51 unittest -- unit/unittest.sh@153 -- # '[' 0 -eq 1 ']' 00:05:28.430 00:31:51 unittest -- unit/unittest.sh@160 -- # '[' -z x ']' 00:05:28.430 00:31:51 unittest -- unit/unittest.sh@167 -- # '[' 0 -eq 1 ']' 00:05:28.430 00:31:51 unittest -- unit/unittest.sh@180 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:05:28.689 00:31:51 unittest -- unit/unittest.sh@180 -- # CC_TYPE=CC_TYPE=gcc 00:05:28.689 00:31:51 unittest -- unit/unittest.sh@181 -- # hash lcov 00:05:28.689 00:31:51 unittest -- unit/unittest.sh@181 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:28.689 00:31:51 unittest -- unit/unittest.sh@181 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:28.689 00:31:51 unittest -- unit/unittest.sh@182 -- # cov_avail=yes 00:05:28.689 00:31:51 unittest -- unit/unittest.sh@186 -- # '[' yes = yes ']' 00:05:28.689 00:31:51 unittest -- unit/unittest.sh@188 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:05:28.689 00:31:51 unittest -- unit/unittest.sh@191 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:28.689 00:31:51 unittest -- unit/unittest.sh@193 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:28.689 00:31:51 unittest -- unit/unittest.sh@201 -- # export 'LCOV_OPTS= 00:05:28.689 --rc lcov_branch_coverage=1 00:05:28.689 --rc lcov_function_coverage=1 00:05:28.689 --rc genhtml_branch_coverage=1 00:05:28.689 --rc genhtml_function_coverage=1 00:05:28.689 --rc genhtml_legend=1 00:05:28.689 --rc geninfo_all_blocks=1 00:05:28.689 ' 00:05:28.689 00:31:51 unittest -- unit/unittest.sh@201 -- # LCOV_OPTS=' 00:05:28.689 --rc lcov_branch_coverage=1 00:05:28.689 --rc lcov_function_coverage=1 00:05:28.689 --rc genhtml_branch_coverage=1 00:05:28.689 --rc genhtml_function_coverage=1 00:05:28.689 --rc genhtml_legend=1 00:05:28.689 --rc geninfo_all_blocks=1 00:05:28.689 ' 00:05:28.689 00:31:51 unittest -- unit/unittest.sh@202 -- # export 'LCOV=lcov 00:05:28.689 --rc lcov_branch_coverage=1 00:05:28.689 --rc lcov_function_coverage=1 00:05:28.689 --rc genhtml_branch_coverage=1 00:05:28.689 --rc genhtml_function_coverage=1 00:05:28.689 --rc genhtml_legend=1 00:05:28.689 --rc geninfo_all_blocks=1 00:05:28.689 --no-external' 00:05:28.689 00:31:51 unittest -- unit/unittest.sh@202 -- # LCOV='lcov 00:05:28.689 --rc lcov_branch_coverage=1 00:05:28.689 --rc lcov_function_coverage=1 00:05:28.689 --rc genhtml_branch_coverage=1 00:05:28.689 --rc genhtml_function_coverage=1 00:05:28.689 --rc genhtml_legend=1 00:05:28.689 --rc geninfo_all_blocks=1 00:05:28.689 --no-external' 00:05:28.689 00:31:51 unittest -- unit/unittest.sh@204 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . 
-t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:05:35.284 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:35.284 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:06:21.963 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:06:21.963 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:06:21.963 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:06:21.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:06:21.964 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:06:21.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:06:21.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:06:21.964 00:32:42 unittest -- unit/unittest.sh@208 -- # uname -m 00:06:21.964 00:32:42 unittest -- unit/unittest.sh@208 -- # '[' x86_64 = aarch64 ']' 00:06:21.964 00:32:42 unittest -- unit/unittest.sh@212 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:21.964 00:32:42 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.964 00:32:42 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.964 00:32:42 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:21.964 ************************************ 00:06:21.964 START TEST unittest_pci_event 00:06:21.964 ************************************ 00:06:21.964 00:32:42 unittest.unittest_pci_event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:21.964 00:06:21.964 00:06:21.964 CUnit - A unit testing framework for C - Version 2.1-3 00:06:21.964 http://cunit.sourceforge.net/ 00:06:21.964 00:06:21.964 00:06:21.964 Suite: pci_event 00:06:21.964 Test: test_pci_parse_event ...[2024-07-25 00:32:42.178223] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:06:21.964 [2024-07-25 00:32:42.179396] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:06:21.964 passed 00:06:21.964 00:06:21.964 Run Summary: Type Total Ran Passed Failed Inactive 00:06:21.964 suites 1 1 n/a 0 0 00:06:21.964 tests 1 1 1 0 0 00:06:21.964 asserts 15 15 15 0 n/a 00:06:21.964 00:06:21.964 Elapsed time = 0.001 seconds 00:06:21.964 00:06:21.964 real 0m0.053s 00:06:21.964 user 0m0.025s 00:06:21.964 sys 0m0.022s 00:06:21.964 00:32:42 unittest.unittest_pci_event -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:06:21.964 00:32:42 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x 00:06:21.964 ************************************ 00:06:21.964 END TEST unittest_pci_event 00:06:21.964 ************************************ 00:06:21.964 00:32:42 unittest -- unit/unittest.sh@213 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:21.964 00:32:42 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.964 00:32:42 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.964 00:32:42 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:21.964 ************************************ 00:06:21.964 START TEST unittest_include 00:06:21.964 ************************************ 00:06:21.964 00:32:42 unittest.unittest_include -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:21.964 00:06:21.964 00:06:21.964 CUnit - A unit testing framework for C - Version 2.1-3 00:06:21.964 http://cunit.sourceforge.net/ 00:06:21.964 00:06:21.964 00:06:21.964 Suite: histogram 00:06:21.964 Test: histogram_test ...passed 00:06:21.964 Test: histogram_merge ...passed 00:06:21.964 00:06:21.964 Run Summary: Type Total Ran Passed Failed Inactive 00:06:21.964 suites 1 1 n/a 0 0 00:06:21.964 tests 2 2 2 0 0 00:06:21.964 asserts 50 50 50 0 n/a 00:06:21.964 00:06:21.964 Elapsed time = 0.004 seconds 00:06:21.964 00:06:21.964 real 0m0.040s 00:06:21.964 user 0m0.024s 00:06:21.964 sys 0m0.016s 00:06:21.964 00:32:42 unittest.unittest_include -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.964 00:32:42 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x 00:06:21.964 ************************************ 00:06:21.964 END TEST unittest_include 00:06:21.964 ************************************ 00:06:21.964 00:32:42 unittest -- unit/unittest.sh@214 -- # run_test unittest_bdev unittest_bdev 00:06:21.964 00:32:42 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.964 00:32:42 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.964 00:32:42 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:21.964 ************************************ 00:06:21.964 START TEST unittest_bdev 00:06:21.964 ************************************ 00:06:21.964 00:32:42 unittest.unittest_bdev -- common/autotest_common.sh@1123 -- # unittest_bdev 00:06:21.965 00:32:42 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:06:21.965 00:06:21.965 00:06:21.965 CUnit - A unit testing framework for C - Version 2.1-3 00:06:21.965 http://cunit.sourceforge.net/ 00:06:21.965 00:06:21.965 00:06:21.965 Suite: bdev 00:06:21.965 Test: bytes_to_blocks_test ...passed 00:06:21.965 Test: num_blocks_test ...passed 00:06:21.965 Test: io_valid_test ...passed 00:06:21.965 Test: open_write_test ...[2024-07-25 00:32:42.508946] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:06:21.965 [2024-07-25 00:32:42.509856] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:06:21.965 [2024-07-25 00:32:42.510180] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:06:21.965 passed 00:06:21.965 Test: 
claim_test ...passed 00:06:21.965 Test: alias_add_del_test ...[2024-07-25 00:32:42.641122] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:06:21.965 [2024-07-25 00:32:42.641498] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4663:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:06:21.965 [2024-07-25 00:32:42.641694] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:06:21.965 passed 00:06:21.965 Test: get_device_stat_test ...passed 00:06:21.965 Test: bdev_io_types_test ...passed 00:06:21.965 Test: bdev_io_wait_test ...passed 00:06:21.965 Test: bdev_io_spans_split_test ...passed 00:06:21.965 Test: bdev_io_boundary_split_test ...passed 00:06:21.965 Test: bdev_io_max_size_and_segment_split_test ...[2024-07-25 00:32:42.857757] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3214:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:06:21.965 passed 00:06:21.965 Test: bdev_io_mix_split_test ...passed 00:06:21.965 Test: bdev_io_split_with_io_wait ...passed 00:06:21.965 Test: bdev_io_write_unit_split_test ...[2024-07-25 00:32:42.998304] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2765:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:21.965 [2024-07-25 00:32:42.998561] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2765:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:21.965 [2024-07-25 00:32:42.998661] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2765:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:06:21.965 [2024-07-25 00:32:42.998788] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2765:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:06:21.965 passed 00:06:21.965 Test: bdev_io_alignment_with_boundary ...passed 00:06:21.965 Test: bdev_io_alignment ...passed 00:06:21.965 Test: bdev_histograms ...passed 00:06:21.965 Test: bdev_write_zeroes ...passed 00:06:21.965 Test: bdev_compare_and_write ...passed 00:06:21.965 Test: bdev_compare ...passed 00:06:21.965 Test: bdev_compare_emulated ...passed 00:06:21.965 Test: bdev_zcopy_write ...passed 00:06:21.965 Test: bdev_zcopy_read ...passed 00:06:21.965 Test: bdev_open_while_hotremove ...passed 00:06:21.965 Test: bdev_close_while_hotremove ...passed 00:06:21.965 Test: bdev_open_ext_test ...[2024-07-25 00:32:43.539353] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8217:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:21.965 passed 00:06:21.965 Test: bdev_open_ext_unregister ...[2024-07-25 00:32:43.539940] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8217:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:21.965 passed 00:06:21.965 Test: bdev_set_io_timeout ...passed 00:06:21.965 Test: bdev_set_qd_sampling ...passed 00:06:21.965 Test: lba_range_overlap ...passed 00:06:21.965 Test: lock_lba_range_check_ranges ...passed 00:06:21.965 Test: lock_lba_range_with_io_outstanding ...passed 00:06:21.965 Test: lock_lba_range_overlapped ...passed 00:06:21.965 Test: bdev_quiesce ...[2024-07-25 00:32:43.792577] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10186:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 
00:06:21.965 passed 00:06:21.965 Test: bdev_io_abort ...passed 00:06:21.965 Test: bdev_unmap ...passed 00:06:21.965 Test: bdev_write_zeroes_split_test ...passed 00:06:21.965 Test: bdev_set_options_test ...[2024-07-25 00:32:43.949728] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 502:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:06:21.965 passed 00:06:21.965 Test: bdev_get_memory_domains ...passed 00:06:21.965 Test: bdev_io_ext ...passed 00:06:21.965 Test: bdev_io_ext_no_opts ...passed 00:06:21.965 Test: bdev_io_ext_invalid_opts ...passed 00:06:21.965 Test: bdev_io_ext_split ...passed 00:06:21.965 Test: bdev_io_ext_bounce_buffer ...passed 00:06:21.965 Test: bdev_register_uuid_alias ...[2024-07-25 00:32:44.195117] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 149ae711-b726-45cd-b647-900bf28bab75 already exists 00:06:21.965 [2024-07-25 00:32:44.195408] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:149ae711-b726-45cd-b647-900bf28bab75 alias for bdev bdev0 00:06:21.965 passed 00:06:21.965 Test: bdev_unregister_by_name ...[2024-07-25 00:32:44.218532] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8007:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:06:21.965 passed 00:06:21.965 Test: for_each_bdev_test ...[2024-07-25 00:32:44.218765] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8015:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:06:21.965 passed 00:06:21.965 Test: bdev_seek_test ...passed 00:06:21.965 Test: bdev_copy ...passed 00:06:21.965 Test: bdev_copy_split_test ...passed 00:06:21.965 Test: examine_locks ...passed 00:06:21.965 Test: claim_v2_rwo ...[2024-07-25 00:32:44.352733] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:21.965 [2024-07-25 00:32:44.352982] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8741:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:21.965 [2024-07-25 00:32:44.353087] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:21.965 [2024-07-25 00:32:44.353230] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:21.965 [2024-07-25 00:32:44.353321] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:21.965 passed 00:06:21.965 Test: claim_v2_rom ...[2024-07-25 00:32:44.353453] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8736:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:06:21.965 [2024-07-25 00:32:44.353733] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:21.965 [2024-07-25 00:32:44.353889] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:21.965 [2024-07-25 00:32:44.354016] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module 
bdev_ut 00:06:21.965 [2024-07-25 00:32:44.354105] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:21.965 [2024-07-25 00:32:44.354263] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8779:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:06:21.965 [2024-07-25 00:32:44.354413] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8774:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:21.965 passed 00:06:21.965 Test: claim_v2_rwm ...[2024-07-25 00:32:44.354633] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8809:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:21.965 [2024-07-25 00:32:44.354782] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:21.965 [2024-07-25 00:32:44.354900] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:21.965 [2024-07-25 00:32:44.355014] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:21.965 [2024-07-25 00:32:44.355106] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:21.965 [2024-07-25 00:32:44.355211] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8829:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:06:21.965 [2024-07-25 00:32:44.355329] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8809:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:21.965 passed 00:06:21.965 Test: claim_v2_existing_writer ...[2024-07-25 00:32:44.355579] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8774:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:21.965 [2024-07-25 00:32:44.355687] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8774:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:21.965 passed 00:06:21.965 Test: claim_v2_existing_v1 ...[2024-07-25 00:32:44.355878] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:21.965 [2024-07-25 00:32:44.355994] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:21.965 [2024-07-25 00:32:44.356078] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:21.965 passed 00:06:21.965 Test: claim_v1_existing_v2 ...[2024-07-25 00:32:44.356268] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:21.965 [2024-07-25 00:32:44.356385] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:21.965 [2024-07-25 
00:32:44.356491] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:21.965 passed 00:06:21.965 Test: examine_claimed ...[2024-07-25 00:32:44.356842] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:06:21.965 passed 00:06:21.965 00:06:21.965 Run Summary: Type Total Ran Passed Failed Inactive 00:06:21.965 suites 1 1 n/a 0 0 00:06:21.965 tests 59 59 59 0 0 00:06:21.965 asserts 4599 4599 4599 0 n/a 00:06:21.965 00:06:21.965 Elapsed time = 1.957 seconds 00:06:21.966 00:32:44 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:06:21.966 00:06:21.966 00:06:21.966 CUnit - A unit testing framework for C - Version 2.1-3 00:06:21.966 http://cunit.sourceforge.net/ 00:06:21.966 00:06:21.966 00:06:21.966 Suite: nvme 00:06:21.966 Test: test_create_ctrlr ...passed 00:06:21.966 Test: test_reset_ctrlr ...[2024-07-25 00:32:44.416959] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:21.966 passed 00:06:21.966 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:06:21.966 Test: test_failover_ctrlr ...passed 00:06:21.966 Test: test_race_between_failover_and_add_secondary_trid ...[2024-07-25 00:32:44.419936] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:21.966 [2024-07-25 00:32:44.420222] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:21.966 [2024-07-25 00:32:44.420471] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:21.966 passed 00:06:21.966 Test: test_pending_reset ...[2024-07-25 00:32:44.422492] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:21.966 [2024-07-25 00:32:44.422789] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:21.966 passed 00:06:21.966 Test: test_attach_ctrlr ...[2024-07-25 00:32:44.424095] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:06:21.966 passed 00:06:21.966 Test: test_aer_cb ...passed 00:06:21.966 Test: test_submit_nvme_cmd ...passed 00:06:21.966 Test: test_add_remove_trid ...passed 00:06:21.966 Test: test_abort ...[2024-07-25 00:32:44.428140] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7480:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:06:21.966 passed 00:06:21.966 Test: test_get_io_qpair ...passed 00:06:21.966 Test: test_bdev_unregister ...passed 00:06:21.966 Test: test_compare_ns ...passed 00:06:21.966 Test: test_init_ana_log_page ...passed 00:06:21.966 Test: test_get_memory_domains ...passed 00:06:21.966 Test: test_reconnect_qpair ...[2024-07-25 00:32:44.431186] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:21.966 passed 00:06:21.966 Test: test_create_bdev_ctrlr ...[2024-07-25 00:32:44.431753] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5407:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:06:21.966 passed 00:06:21.966 Test: test_add_multi_ns_to_bdev ...[2024-07-25 00:32:44.433165] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4574:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:06:21.966 passed 00:06:21.966 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:06:21.966 Test: test_admin_path ...passed 00:06:21.966 Test: test_reset_bdev_ctrlr ...passed 00:06:21.966 Test: test_find_io_path ...passed 00:06:21.966 Test: test_retry_io_if_ana_state_is_updating ...passed 00:06:21.966 Test: test_retry_io_for_io_path_error ...passed 00:06:21.966 Test: test_retry_io_count ...passed 00:06:21.966 Test: test_concurrent_read_ana_log_page ...passed 00:06:21.966 Test: test_retry_io_for_ana_error ...passed 00:06:21.966 Test: test_check_io_error_resiliency_params ...[2024-07-25 00:32:44.440703] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6104:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:06:21.966 [2024-07-25 00:32:44.440794] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6108:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:21.966 [2024-07-25 00:32:44.440820] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6117:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:21.966 [2024-07-25 00:32:44.440859] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6120:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:06:21.966 [2024-07-25 00:32:44.440883] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6132:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:21.966 [2024-07-25 00:32:44.440923] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6132:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:21.966 [2024-07-25 00:32:44.440956] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6112:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:06:21.966 passed 00:06:21.966 Test: test_retry_io_if_ctrlr_is_resetting ...[2024-07-25 00:32:44.441010] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6127:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:06:21.966 [2024-07-25 00:32:44.441050] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6124:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:06:21.966 passed 00:06:21.966 Test: test_reconnect_ctrlr ...[2024-07-25 00:32:44.442009] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:21.966 [2024-07-25 00:32:44.442151] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
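The test_check_io_error_resiliency_params messages above enumerate the constraints the nvme bdev module places on its reconnect and failover knobs. Restated as plain C purely from what those error strings say (the struct and function names below are invented for this sketch and are not SPDK's API):

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical container for the knobs named in the messages above. */
struct io_error_resiliency_params {
	int32_t  ctrlr_loss_timeout_sec;	/* -1 means "retry forever" */
	uint32_t reconnect_delay_sec;
	uint32_t fast_io_fail_timeout_sec;
};

/* Returns true when the combination satisfies the rules the log lines describe. */
static bool
check_io_error_resiliency_params(const struct io_error_resiliency_params *p)
{
	if (p->ctrlr_loss_timeout_sec < -1) {
		return false;	/* "can't be less than -1" */
	}
	if (p->ctrlr_loss_timeout_sec == 0) {
		/* "Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0
		 * if ctrlr_loss_timeout_sec is 0." */
		return p->reconnect_delay_sec == 0 && p->fast_io_fail_timeout_sec == 0;
	}
	if (p->reconnect_delay_sec == 0) {
		return false;	/* "can't be 0 if ctrlr_loss_timeout_sec is not 0" */
	}
	if (p->ctrlr_loss_timeout_sec > 0 &&
	    p->reconnect_delay_sec > (uint32_t)p->ctrlr_loss_timeout_sec) {
		return false;	/* delay may not exceed the loss timeout */
	}
	if (p->fast_io_fail_timeout_sec != 0) {
		if (p->ctrlr_loss_timeout_sec > 0 &&
		    p->fast_io_fail_timeout_sec > (uint32_t)p->ctrlr_loss_timeout_sec) {
			return false;	/* fast_io_fail may not exceed the loss timeout */
		}
		if (p->reconnect_delay_sec > p->fast_io_fail_timeout_sec) {
			return false;	/* delay may not exceed fast_io_fail */
		}
	}
	return true;
}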
00:06:21.966 [2024-07-25 00:32:44.442446] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:21.966 [2024-07-25 00:32:44.442580] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:21.966 [2024-07-25 00:32:44.442728] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:21.966 passed 00:06:21.966 Test: test_retry_failover_ctrlr ...[2024-07-25 00:32:44.443117] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:21.966 passed 00:06:21.966 Test: test_fail_path ...[2024-07-25 00:32:44.443686] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:21.966 [2024-07-25 00:32:44.443852] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:21.966 [2024-07-25 00:32:44.443988] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:21.966 [2024-07-25 00:32:44.444112] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:21.966 [2024-07-25 00:32:44.444278] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:21.966 passed 00:06:21.966 Test: test_nvme_ns_cmp ...passed 00:06:21.966 Test: test_ana_transition ...passed 00:06:21.966 Test: test_set_preferred_path ...passed 00:06:21.966 Test: test_find_next_io_path ...passed 00:06:21.966 Test: test_find_io_path_min_qd ...passed 00:06:21.966 Test: test_disable_auto_failback ...[2024-07-25 00:32:44.446046] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:21.966 passed 00:06:21.966 Test: test_set_multipath_policy ...passed 00:06:21.966 Test: test_uuid_generation ...passed 00:06:21.966 Test: test_retry_io_to_same_path ...passed 00:06:21.966 Test: test_race_between_reset_and_disconnected ...passed 00:06:21.966 Test: test_ctrlr_op_rpc ...passed 00:06:21.966 Test: test_bdev_ctrlr_op_rpc ...passed 00:06:21.966 Test: test_disable_enable_ctrlr ...[2024-07-25 00:32:44.449809] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:21.966 [2024-07-25 00:32:44.449987] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:21.966 passed 00:06:21.966 Test: test_delete_ctrlr_done ...passed 00:06:21.966 Test: test_ns_remove_during_reset ...passed 00:06:21.966 Test: test_io_path_is_current ...passed 00:06:21.966 00:06:21.966 Run Summary: Type Total Ran Passed Failed Inactive 00:06:21.966 suites 1 1 n/a 0 0 00:06:21.966 tests 49 49 49 0 0 00:06:21.966 asserts 3578 3578 3578 0 n/a 00:06:21.966 00:06:21.966 Elapsed time = 0.036 seconds 00:06:21.966 00:32:44 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:06:21.966 00:06:21.966 00:06:21.966 CUnit - A unit testing framework for C - Version 2.1-3 00:06:21.966 http://cunit.sourceforge.net/ 00:06:21.966 00:06:21.966 Test Options 00:06:21.966 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:06:21.966 00:06:21.966 Suite: raid 00:06:21.966 Test: test_create_raid ...passed 00:06:21.966 Test: test_create_raid_superblock ...passed 00:06:21.966 Test: test_delete_raid ...passed 00:06:21.966 Test: test_create_raid_invalid_args ...[2024-07-25 00:32:44.505576] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1507:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:06:21.966 [2024-07-25 00:32:44.506164] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1501:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:06:21.966 [2024-07-25 00:32:44.506915] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1491:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:06:21.966 [2024-07-25 00:32:44.507207] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3283:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:21.966 [2024-07-25 00:32:44.507322] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3461:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:06:21.966 [2024-07-25 00:32:44.508451] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3283:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:21.966 [2024-07-25 00:32:44.508503] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3461:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:06:21.966 passed 00:06:21.966 Test: test_delete_raid_invalid_args ...passed 00:06:21.966 Test: test_io_channel ...passed 00:06:21.966 Test: test_reset_io ...passed 00:06:21.966 Test: test_multi_raid ...passed 00:06:21.966 Test: test_io_type_supported ...passed 00:06:21.966 Test: test_raid_json_dump_info ...passed 00:06:21.966 Test: test_context_size ...passed 00:06:21.967 Test: test_raid_level_conversions ...passed 00:06:21.967 Test: test_raid_io_split ...passed 00:06:21.967 Test: test_raid_process ...passed 00:06:21.967 Test: test_raid_process_with_qos ...passed 00:06:21.967 00:06:21.967 Run Summary: Type Total Ran Passed Failed Inactive 00:06:21.967 suites 1 1 n/a 0 0 00:06:21.967 tests 15 15 15 0 0 00:06:21.967 asserts 6602 6602 6602 0 n/a 00:06:21.967 00:06:21.967 Elapsed time = 0.034 seconds 00:06:21.967 00:32:44 unittest.unittest_bdev -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:06:21.967 00:06:21.967 00:06:21.967 CUnit - A unit testing framework for C - Version 2.1-3 00:06:21.967 http://cunit.sourceforge.net/ 00:06:21.967 00:06:21.967 00:06:21.967 Suite: raid_sb 00:06:21.967 Test: test_raid_bdev_write_superblock ...passed 
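Each unit-test binary in this run ends with a "Run Summary" block (Type Total Ran Passed Failed Inactive) like the ones above. One possible way to post-process a saved copy of this console log and surface any non-zero "Failed" column, assuming one record per line with the timestamp prefix shown here and a hypothetical file name build.log:

# With the timestamp prefix, the "Failed" column of the summary rows is field 6.
awk '($2 == "tests" || $2 == "asserts") && $6 + 0 > 0 { print "FAILURES -> " $0 }' build.log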
00:06:21.967 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:21.967 Test: test_raid_bdev_parse_superblock ...[2024-07-25 00:32:44.590009] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:21.967 passed 00:06:21.967 Suite: raid_sb_md 00:06:21.967 Test: test_raid_bdev_write_superblock ...passed 00:06:21.967 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:21.967 Test: test_raid_bdev_parse_superblock ...passed 00:06:21.967 Suite: raid_sb_md_interleaved 00:06:21.967 Test: test_raid_bdev_write_superblock ...[2024-07-25 00:32:44.590534] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:21.967 passed 00:06:21.967 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:21.967 Test: test_raid_bdev_parse_superblock ...passed 00:06:21.967 00:06:21.967 [2024-07-25 00:32:44.590844] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:21.967 Run Summary: Type Total Ran Passed Failed Inactive 00:06:21.967 suites 3 3 n/a 0 0 00:06:21.967 tests 9 9 9 0 0 00:06:21.967 asserts 139 139 139 0 n/a 00:06:21.967 00:06:21.967 Elapsed time = 0.002 seconds 00:06:21.967 00:32:44 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:06:22.226 00:06:22.226 00:06:22.226 CUnit - A unit testing framework for C - Version 2.1-3 00:06:22.226 http://cunit.sourceforge.net/ 00:06:22.226 00:06:22.226 00:06:22.226 Suite: concat 00:06:22.226 Test: test_concat_start ...passed 00:06:22.226 Test: test_concat_rw ...passed 00:06:22.226 Test: test_concat_null_payload ...passed 00:06:22.226 00:06:22.226 Run Summary: Type Total Ran Passed Failed Inactive 00:06:22.226 suites 1 1 n/a 0 0 00:06:22.226 tests 3 3 3 0 0 00:06:22.226 asserts 8460 8460 8460 0 n/a 00:06:22.226 00:06:22.226 Elapsed time = 0.008 seconds 00:06:22.226 00:32:44 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid0.c/raid0_ut 00:06:22.226 00:06:22.226 00:06:22.226 CUnit - A unit testing framework for C - Version 2.1-3 00:06:22.226 http://cunit.sourceforge.net/ 00:06:22.226 00:06:22.226 00:06:22.226 Suite: raid0 00:06:22.226 Test: test_write_io ...passed 00:06:22.226 Test: test_read_io ...passed 00:06:22.226 Test: test_unmap_io ...passed 00:06:22.226 Test: test_io_failure ...passed 00:06:22.226 Suite: raid0_dif 00:06:22.226 Test: test_write_io ...passed 00:06:22.226 Test: test_read_io ...passed 00:06:22.226 Test: test_unmap_io ...passed 00:06:22.226 Test: test_io_failure ...passed 00:06:22.226 00:06:22.226 Run Summary: Type Total Ran Passed Failed Inactive 00:06:22.226 suites 2 2 n/a 0 0 00:06:22.226 tests 8 8 8 0 0 00:06:22.226 asserts 368291 368291 368291 0 n/a 00:06:22.226 00:06:22.226 Elapsed time = 0.161 seconds 00:06:22.485 00:32:44 unittest.unittest_bdev -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:06:22.485 00:06:22.485 00:06:22.485 CUnit - A unit testing framework for C - Version 2.1-3 00:06:22.485 http://cunit.sourceforge.net/ 00:06:22.485 00:06:22.485 00:06:22.485 Suite: raid1 00:06:22.485 Test: test_raid1_start ...passed 00:06:22.485 Test: test_raid1_read_balancing ...passed 00:06:22.485 
Test: test_raid1_write_error ...passed 00:06:22.485 Test: test_raid1_read_error ...passed 00:06:22.485 00:06:22.485 Run Summary: Type Total Ran Passed Failed Inactive 00:06:22.485 suites 1 1 n/a 0 0 00:06:22.485 tests 4 4 4 0 0 00:06:22.485 asserts 4374 4374 4374 0 n/a 00:06:22.485 00:06:22.485 Elapsed time = 0.006 seconds 00:06:22.485 00:32:44 unittest.unittest_bdev -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:06:22.485 00:06:22.485 00:06:22.485 CUnit - A unit testing framework for C - Version 2.1-3 00:06:22.485 http://cunit.sourceforge.net/ 00:06:22.485 00:06:22.485 00:06:22.485 Suite: zone 00:06:22.485 Test: test_zone_get_operation ...passed 00:06:22.485 Test: test_bdev_zone_get_info ...passed 00:06:22.485 Test: test_bdev_zone_management ...passed 00:06:22.485 Test: test_bdev_zone_append ...passed 00:06:22.485 Test: test_bdev_zone_append_with_md ...passed 00:06:22.485 Test: test_bdev_zone_appendv ...passed 00:06:22.485 Test: test_bdev_zone_appendv_with_md ...passed 00:06:22.485 Test: test_bdev_io_get_append_location ...passed 00:06:22.485 00:06:22.486 Run Summary: Type Total Ran Passed Failed Inactive 00:06:22.486 suites 1 1 n/a 0 0 00:06:22.486 tests 8 8 8 0 0 00:06:22.486 asserts 94 94 94 0 n/a 00:06:22.486 00:06:22.486 Elapsed time = 0.001 seconds 00:06:22.486 00:32:44 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:06:22.486 00:06:22.486 00:06:22.486 CUnit - A unit testing framework for C - Version 2.1-3 00:06:22.486 http://cunit.sourceforge.net/ 00:06:22.486 00:06:22.486 00:06:22.486 Suite: gpt_parse 00:06:22.486 Test: test_parse_mbr_and_primary ...[2024-07-25 00:32:45.021617] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:22.486 [2024-07-25 00:32:45.023601] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:22.486 [2024-07-25 00:32:45.023703] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:22.486 [2024-07-25 00:32:45.024098] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:22.486 [2024-07-25 00:32:45.024175] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:22.486 [2024-07-25 00:32:45.024500] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:22.486 passed 00:06:22.486 Test: test_parse_secondary ...[2024-07-25 00:32:45.025455] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:22.486 [2024-07-25 00:32:45.025547] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:22.486 [2024-07-25 00:32:45.025870] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:22.486 [2024-07-25 00:32:45.025950] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:22.486 passed 00:06:22.486 Test: test_check_mbr ...[2024-07-25 00:32:45.026982] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: 
Gpt and the related buffer should not be NULL 00:06:22.486 [2024-07-25 00:32:45.027062] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:22.486 passed 00:06:22.486 Test: test_read_header ...[2024-07-25 00:32:45.027382] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:06:22.486 [2024-07-25 00:32:45.027724] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:06:22.486 [2024-07-25 00:32:45.027834] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:06:22.486 [2024-07-25 00:32:45.028065] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:06:22.486 [2024-07-25 00:32:45.028269] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:06:22.486 [2024-07-25 00:32:45.028404] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:06:22.486 passed 00:06:22.486 Test: test_read_partitions ...[2024-07-25 00:32:45.028777] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:06:22.486 [2024-07-25 00:32:45.029126] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:06:22.486 [2024-07-25 00:32:45.029186] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:06:22.486 [2024-07-25 00:32:45.029228] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:06:22.486 [2024-07-25 00:32:45.029828] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:06:22.486 passed 00:06:22.486 00:06:22.486 Run Summary: Type Total Ran Passed Failed Inactive 00:06:22.486 suites 1 1 n/a 0 0 00:06:22.486 tests 5 5 5 0 0 00:06:22.486 asserts 33 33 33 0 n/a 00:06:22.486 00:06:22.486 Elapsed time = 0.009 seconds 00:06:22.486 00:32:45 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:06:22.486 00:06:22.486 00:06:22.486 CUnit - A unit testing framework for C - Version 2.1-3 00:06:22.486 http://cunit.sourceforge.net/ 00:06:22.486 00:06:22.486 00:06:22.486 Suite: bdev_part 00:06:22.486 Test: part_test ...[2024-07-25 00:32:45.076580] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name ecdceb3c-bb7d-5f8d-aa3a-11f02b8f6eda already exists 00:06:22.486 [2024-07-25 00:32:45.076947] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:ecdceb3c-bb7d-5f8d-aa3a-11f02b8f6eda alias for bdev test1 00:06:22.486 passed 00:06:22.486 Test: part_free_test ...passed 00:06:22.746 Test: part_get_io_channel_test ...passed 00:06:22.746 Test: part_construct_ext ...passed 00:06:22.746 00:06:22.746 Run Summary: Type Total Ran Passed Failed Inactive 00:06:22.746 suites 1 1 n/a 0 0 00:06:22.746 tests 4 4 4 0 0 00:06:22.746 asserts 48 48 48 0 n/a 00:06:22.746 00:06:22.746 Elapsed time = 0.069 seconds 00:06:22.746 00:32:45 unittest.unittest_bdev -- unit/unittest.sh@30 
-- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:06:22.746 00:06:22.746 00:06:22.746 CUnit - A unit testing framework for C - Version 2.1-3 00:06:22.746 http://cunit.sourceforge.net/ 00:06:22.746 00:06:22.746 00:06:22.746 Suite: scsi_nvme_suite 00:06:22.746 Test: scsi_nvme_translate_test ...passed 00:06:22.746 00:06:22.746 Run Summary: Type Total Ran Passed Failed Inactive 00:06:22.746 suites 1 1 n/a 0 0 00:06:22.746 tests 1 1 1 0 0 00:06:22.746 asserts 104 104 104 0 n/a 00:06:22.746 00:06:22.746 Elapsed time = 0.000 seconds 00:06:22.746 00:32:45 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:06:22.746 00:06:22.746 00:06:22.746 CUnit - A unit testing framework for C - Version 2.1-3 00:06:22.746 http://cunit.sourceforge.net/ 00:06:22.746 00:06:22.746 00:06:22.746 Suite: lvol 00:06:22.746 Test: ut_lvs_init ...[2024-07-25 00:32:45.228839] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:06:22.746 [2024-07-25 00:32:45.229301] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:06:22.746 passed 00:06:22.746 Test: ut_lvol_init ...passed 00:06:22.746 Test: ut_lvol_snapshot ...passed 00:06:22.746 Test: ut_lvol_clone ...passed 00:06:22.746 Test: ut_lvs_destroy ...passed 00:06:22.746 Test: ut_lvs_unload ...passed 00:06:22.746 Test: ut_lvol_resize ...[2024-07-25 00:32:45.231409] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1394:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:06:22.746 passed 00:06:22.746 Test: ut_lvol_set_read_only ...passed 00:06:22.746 Test: ut_lvol_hotremove ...passed 00:06:22.746 Test: ut_vbdev_lvol_get_io_channel ...passed 00:06:22.746 Test: ut_vbdev_lvol_io_type_supported ...passed 00:06:22.746 Test: ut_lvol_read_write ...passed 00:06:22.746 Test: ut_vbdev_lvol_submit_request ...passed 00:06:22.746 Test: ut_lvol_examine_config ...passed 00:06:22.746 Test: ut_lvol_examine_disk ...[2024-07-25 00:32:45.232204] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1536:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:06:22.746 passed 00:06:22.746 Test: ut_lvol_rename ...[2024-07-25 00:32:45.233499] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:06:22.746 [2024-07-25 00:32:45.233636] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1344:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:06:22.746 passed 00:06:22.746 Test: ut_bdev_finish ...passed 00:06:22.746 Test: ut_lvs_rename ...passed 00:06:22.746 Test: ut_lvol_seek ...passed 00:06:22.746 Test: ut_esnap_dev_create ...[2024-07-25 00:32:45.234526] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:06:22.746 [2024-07-25 00:32:45.234608] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1885:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:06:22.746 passed 00:06:22.746 Test: ut_lvol_esnap_clone_bad_args ...[2024-07-25 00:32:45.234643] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1890:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:06:22.746 [2024-07-25 00:32:45.234808] 
/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1280:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:06:22.746 [2024-07-25 00:32:45.234853] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1287:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:06:22.746 passed 00:06:22.746 Test: ut_lvol_shallow_copy ...[2024-07-25 00:32:45.235305] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1977:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:06:22.746 [2024-07-25 00:32:45.235358] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1982:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL 00:06:22.746 passed 00:06:22.746 Test: ut_lvol_set_external_parent ...[2024-07-25 00:32:45.235546] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2037:vbdev_lvol_set_external_parent: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:06:22.746 passed 00:06:22.746 00:06:22.746 Run Summary: Type Total Ran Passed Failed Inactive 00:06:22.746 suites 1 1 n/a 0 0 00:06:22.746 tests 23 23 23 0 0 00:06:22.746 asserts 770 770 770 0 n/a 00:06:22.746 00:06:22.746 Elapsed time = 0.007 seconds 00:06:22.746 00:32:45 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:06:22.746 00:06:22.746 00:06:22.746 CUnit - A unit testing framework for C - Version 2.1-3 00:06:22.746 http://cunit.sourceforge.net/ 00:06:22.746 00:06:22.746 00:06:22.746 Suite: zone_block 00:06:22.746 Test: test_zone_block_create ...passed 00:06:22.746 Test: test_zone_block_create_invalid ...[2024-07-25 00:32:45.306040] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:06:22.746 [2024-07-25 00:32:45.306358] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-25 00:32:45.306549] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:06:22.746 [2024-07-25 00:32:45.306637] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-25 00:32:45.306829] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 861:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:06:22.747 [2024-07-25 00:32:45.306881] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-25 00:32:45.306981] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 866:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:06:22.747 [2024-07-25 00:32:45.307054] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:06:22.747 Test: test_get_zone_info ...[2024-07-25 00:32:45.307577] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:22.747 [2024-07-25 00:32:45.307659] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:22.747 passed 00:06:22.747 Test: test_supported_io_types ...[2024-07-25 00:32:45.307704] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:22.747 passed 00:06:22.747 Test: test_reset_zone ...[2024-07-25 00:32:45.308574] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:22.747 [2024-07-25 00:32:45.308634] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:22.747 passed 00:06:22.747 Test: test_open_zone ...[2024-07-25 00:32:45.309132] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:22.747 [2024-07-25 00:32:45.309783] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:22.747 [2024-07-25 00:32:45.309857] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:22.747 passed 00:06:22.747 Test: test_zone_write ...[2024-07-25 00:32:45.310472] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:22.747 [2024-07-25 00:32:45.310539] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:22.747 [2024-07-25 00:32:45.310601] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:22.747 [2024-07-25 00:32:45.310655] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:22.747 [2024-07-25 00:32:45.315542] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:06:22.747 [2024-07-25 00:32:45.315608] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:22.747 [2024-07-25 00:32:45.315669] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:06:22.747 [2024-07-25 00:32:45.315691] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:22.747 [2024-07-25 00:32:45.320552] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:22.747 [2024-07-25 00:32:45.320615] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:22.747 passed 00:06:22.747 Test: test_zone_read ...[2024-07-25 00:32:45.321065] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:06:22.747 [2024-07-25 00:32:45.321117] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:22.747 [2024-07-25 00:32:45.321178] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:06:22.747 [2024-07-25 00:32:45.321206] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:22.747 [2024-07-25 00:32:45.321672] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:06:22.747 [2024-07-25 00:32:45.321709] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:22.747 passed 00:06:22.747 Test: test_close_zone ...[2024-07-25 00:32:45.322113] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:22.747 [2024-07-25 00:32:45.322207] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:22.747 [2024-07-25 00:32:45.322493] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:22.747 [2024-07-25 00:32:45.322571] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:22.747 passed 00:06:22.747 Test: test_finish_zone ...[2024-07-25 00:32:45.323249] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:22.747 [2024-07-25 00:32:45.323306] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:22.747 passed 00:06:22.747 Test: test_append_zone ...[2024-07-25 00:32:45.323708] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:22.747 [2024-07-25 00:32:45.323762] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:22.747 [2024-07-25 00:32:45.323826] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:22.747 [2024-07-25 00:32:45.323865] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:22.747 [2024-07-25 00:32:45.333407] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:22.747 [2024-07-25 00:32:45.333451] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
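The vbdev_zone_block errors above all come from the module rejecting I/O that breaks zoned-device invariants: a write must target a zone in a writable state, start exactly at that zone's write pointer, and stay within the zone's capacity. A generic restatement of those checks (the types and names below are invented for this sketch, not the SPDK module's):

#include <stdbool.h>
#include <stdint.h>

enum zone_state { ZONE_EMPTY, ZONE_OPEN, ZONE_FULL };

/* Hypothetical per-zone bookkeeping, mirroring the fields named in the errors. */
struct zone {
	uint64_t start_lba;	/* first LBA of the zone */
	uint64_t capacity;	/* writable blocks in the zone */
	uint64_t write_ptr;	/* next LBA a write must start at */
	enum zone_state state;
};

/* True when a write of 'len' blocks at 'lba' is acceptable for this zone. */
static bool
zone_write_is_valid(const struct zone *z, uint64_t lba, uint64_t len)
{
	if (z->state == ZONE_FULL) {
		return false;	/* "Trying to write to zone in invalid state" */
	}
	if (lba != z->write_ptr) {
		return false;	/* "invalid address (lba ..., wp ...)" */
	}
	if (lba + len > z->start_lba + z->capacity) {
		return false;	/* "Write exceeds zone capacity" */
	}
	return true;
}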
00:06:22.747 passed 00:06:22.747 00:06:22.747 Run Summary: Type Total Ran Passed Failed Inactive 00:06:22.747 suites 1 1 n/a 0 0 00:06:22.747 tests 11 11 11 0 0 00:06:22.747 asserts 3437 3437 3437 0 n/a 00:06:22.747 00:06:22.747 Elapsed time = 0.029 seconds 00:06:23.005 00:32:45 unittest.unittest_bdev -- unit/unittest.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:06:23.005 00:06:23.005 00:06:23.005 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.005 http://cunit.sourceforge.net/ 00:06:23.005 00:06:23.005 00:06:23.005 Suite: bdev 00:06:23.005 Test: basic ...[2024-07-25 00:32:45.468870] thread.c:2373:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55fe34658b41): Operation not permitted (rc=-1) 00:06:23.005 [2024-07-25 00:32:45.469280] thread.c:2373:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x55fe34658b00): Operation not permitted (rc=-1) 00:06:23.005 [2024-07-25 00:32:45.469353] thread.c:2373:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55fe34658b41): Operation not permitted (rc=-1) 00:06:23.005 passed 00:06:23.005 Test: unregister_and_close ...passed 00:06:23.005 Test: unregister_and_close_different_threads ...passed 00:06:23.263 Test: basic_qos ...passed 00:06:23.263 Test: put_channel_during_reset ...passed 00:06:23.263 Test: aborted_reset ...passed 00:06:23.521 Test: aborted_reset_no_outstanding_io ...passed 00:06:23.521 Test: io_during_reset ...passed 00:06:23.521 Test: reset_completions ...passed 00:06:23.521 Test: io_during_qos_queue ...passed 00:06:23.779 Test: io_during_qos_reset ...passed 00:06:23.779 Test: enomem ...passed 00:06:23.779 Test: enomem_multi_bdev ...passed 00:06:23.779 Test: enomem_multi_bdev_unregister ...passed 00:06:23.779 Test: enomem_multi_io_target ...passed 00:06:24.037 Test: qos_dynamic_enable ...passed 00:06:24.037 Test: bdev_histograms_mt ...passed 00:06:24.037 Test: bdev_set_io_timeout_mt ...[2024-07-25 00:32:46.568994] thread.c: 471:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:06:24.037 passed 00:06:24.037 Test: lock_lba_range_then_submit_io ...[2024-07-25 00:32:46.596934] thread.c:2177:spdk_io_device_register: *ERROR*: io_device 0x55fe34658ac0 already registered (old:0x6130000003c0 new:0x613000000c80) 00:06:24.037 passed 00:06:24.037 Test: unregister_during_reset ...passed 00:06:24.296 Test: event_notify_and_close ...passed 00:06:24.296 Test: unregister_and_qos_poller ...passed 00:06:24.296 Suite: bdev_wrong_thread 00:06:24.296 Test: spdk_bdev_register_wt ...[2024-07-25 00:32:46.802547] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8535:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x619000158b80 (0x619000158b80) 00:06:24.296 passed 00:06:24.296 Test: spdk_bdev_examine_wt ...[2024-07-25 00:32:46.803049] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 810:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x619000158b80 (0x619000158b80) 00:06:24.296 passed 00:06:24.296 00:06:24.296 Run Summary: Type Total Ran Passed Failed Inactive 00:06:24.296 suites 2 2 n/a 0 0 00:06:24.296 tests 24 24 24 0 0 00:06:24.296 asserts 621 621 621 0 n/a 00:06:24.296 00:06:24.296 Elapsed time = 1.367 seconds 00:06:24.296 00:06:24.296 real 0m4.474s 00:06:24.296 user 0m2.014s 00:06:24.296 sys 0m2.445s 00:06:24.296 00:32:46 unittest.unittest_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.296 00:32:46 
unittest.unittest_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:24.296 ************************************ 00:06:24.296 END TEST unittest_bdev 00:06:24.296 ************************************ 00:06:24.296 00:32:46 unittest -- unit/unittest.sh@215 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:24.296 00:32:46 unittest -- unit/unittest.sh@220 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:24.296 00:32:46 unittest -- unit/unittest.sh@225 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:24.296 00:32:46 unittest -- unit/unittest.sh@229 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:24.296 00:32:46 unittest -- unit/unittest.sh@230 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:24.296 00:32:46 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.296 00:32:46 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.296 00:32:46 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:24.296 ************************************ 00:06:24.296 START TEST unittest_bdev_raid5f 00:06:24.296 ************************************ 00:06:24.296 00:32:46 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:24.296 00:06:24.296 00:06:24.296 CUnit - A unit testing framework for C - Version 2.1-3 00:06:24.296 http://cunit.sourceforge.net/ 00:06:24.296 00:06:24.296 00:06:24.296 Suite: raid5f 00:06:24.296 Test: test_raid5f_start ...passed 00:06:25.237 Test: test_raid5f_submit_read_request ...passed 00:06:25.547 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:06:30.829 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:06:57.491 Test: test_raid5f_chunk_write_error ...passed 00:07:09.696 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:07:13.883 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:07:52.592 Test: test_raid5f_submit_read_request_degraded ...passed 00:07:52.592 00:07:52.592 Run Summary: Type Total Ran Passed Failed Inactive 00:07:52.592 suites 1 1 n/a 0 0 00:07:52.592 tests 8 8 8 0 0 00:07:52.592 asserts 518158 518158 518158 0 n/a 00:07:52.592 00:07:52.592 Elapsed time = 88.166 seconds 00:07:52.592 00:07:52.592 real 1m28.294s 00:07:52.592 user 1m23.369s 00:07:52.592 sys 0m4.896s 00:07:52.592 00:34:15 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.592 00:34:15 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:07:52.592 ************************************ 00:07:52.592 END TEST unittest_bdev_raid5f 00:07:52.592 ************************************ 00:07:52.852 00:34:15 unittest -- unit/unittest.sh@233 -- # run_test unittest_blob_blobfs unittest_blob 00:07:52.852 00:34:15 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:52.852 00:34:15 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.852 00:34:15 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:52.852 ************************************ 00:07:52.852 START TEST unittest_blob_blobfs 00:07:52.852 ************************************ 00:07:52.852 00:34:15 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1123 -- # unittest_blob 00:07:52.852 
00:34:15 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:07:52.852 00:34:15 unittest.unittest_blob_blobfs -- unit/unittest.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:07:52.852 00:07:52.852 00:07:52.852 CUnit - A unit testing framework for C - Version 2.1-3 00:07:52.852 http://cunit.sourceforge.net/ 00:07:52.852 00:07:52.852 00:07:52.852 Suite: blob_nocopy_noextent 00:07:52.852 Test: blob_init ...[2024-07-25 00:34:15.319587] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:52.852 passed 00:07:52.852 Test: blob_thin_provision ...passed 00:07:52.852 Test: blob_read_only ...passed 00:07:52.852 Test: bs_load ...[2024-07-25 00:34:15.449186] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:52.852 passed 00:07:52.852 Test: bs_load_custom_cluster_size ...passed 00:07:52.852 Test: bs_load_after_failed_grow ...passed 00:07:52.852 Test: bs_cluster_sz ...[2024-07-25 00:34:15.494832] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:52.852 [2024-07-25 00:34:15.495261] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:07:52.852 [2024-07-25 00:34:15.495454] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:53.111 passed 00:07:53.111 Test: bs_resize_md ...passed 00:07:53.111 Test: bs_destroy ...passed 00:07:53.111 Test: bs_type ...passed 00:07:53.111 Test: bs_super_block ...passed 00:07:53.111 Test: bs_test_recover_cluster_count ...passed 00:07:53.111 Test: bs_grow_live ...passed 00:07:53.111 Test: bs_grow_live_no_space ...passed 00:07:53.111 Test: bs_test_grow ...passed 00:07:53.111 Test: blob_serialize_test ...passed 00:07:53.111 Test: super_block_crc ...passed 00:07:53.111 Test: blob_thin_prov_write_count_io ...passed 00:07:53.370 Test: blob_thin_prov_unmap_cluster ...passed 00:07:53.370 Test: bs_load_iter_test ...passed 00:07:53.370 Test: blob_relations ...[2024-07-25 00:34:15.809769] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:53.370 [2024-07-25 00:34:15.809876] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.370 [2024-07-25 00:34:15.810792] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:53.370 [2024-07-25 00:34:15.810857] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.370 passed 00:07:53.370 Test: blob_relations2 ...[2024-07-25 00:34:15.832550] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:53.370 [2024-07-25 00:34:15.832627] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.370 [2024-07-25 00:34:15.832665] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with 
more than one clone 00:07:53.370 [2024-07-25 00:34:15.832693] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.370 [2024-07-25 00:34:15.834033] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:53.370 [2024-07-25 00:34:15.834085] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.370 [2024-07-25 00:34:15.834464] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:53.370 [2024-07-25 00:34:15.834512] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.370 passed 00:07:53.370 Test: blob_relations3 ...passed 00:07:53.630 Test: blobstore_clean_power_failure ...passed 00:07:53.630 Test: blob_delete_snapshot_power_failure ...[2024-07-25 00:34:16.099877] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:53.630 [2024-07-25 00:34:16.119828] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:53.630 [2024-07-25 00:34:16.119926] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:53.630 [2024-07-25 00:34:16.119970] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.630 [2024-07-25 00:34:16.139974] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:53.630 [2024-07-25 00:34:16.140069] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:53.630 [2024-07-25 00:34:16.140101] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:53.630 [2024-07-25 00:34:16.140162] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.630 [2024-07-25 00:34:16.160457] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:53.630 [2024-07-25 00:34:16.160588] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.630 [2024-07-25 00:34:16.180717] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:53.630 [2024-07-25 00:34:16.180853] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.630 [2024-07-25 00:34:16.200987] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:53.630 [2024-07-25 00:34:16.201095] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.630 passed 00:07:53.630 Test: blob_create_snapshot_power_failure ...[2024-07-25 00:34:16.261248] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:53.889 [2024-07-25 00:34:16.300604] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:53.889 [2024-07-25 00:34:16.320692] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:53.889 passed 00:07:53.889 Test: blob_io_unit ...passed 00:07:53.889 Test: blob_io_unit_compatibility ...passed 00:07:53.889 Test: blob_ext_md_pages ...passed 00:07:53.889 Test: blob_esnap_io_4096_4096 ...passed 00:07:53.889 Test: blob_esnap_io_512_512 ...passed 00:07:54.148 Test: blob_esnap_io_4096_512 ...passed 00:07:54.148 Test: blob_esnap_io_512_4096 ...passed 00:07:54.148 Test: blob_esnap_clone_resize ...passed 00:07:54.148 Suite: blob_bs_nocopy_noextent 00:07:54.148 Test: blob_open ...passed 00:07:54.148 Test: blob_create ...[2024-07-25 00:34:16.755919] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:54.148 passed 00:07:54.407 Test: blob_create_loop ...passed 00:07:54.407 Test: blob_create_fail ...[2024-07-25 00:34:16.896573] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:54.407 passed 00:07:54.407 Test: blob_create_internal ...passed 00:07:54.407 Test: blob_create_zero_extent ...passed 00:07:54.666 Test: blob_snapshot ...passed 00:07:54.666 Test: blob_clone ...passed 00:07:54.666 Test: blob_inflate ...[2024-07-25 00:34:17.207655] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:54.666 passed 00:07:54.666 Test: blob_delete ...passed 00:07:54.666 Test: blob_resize_test ...[2024-07-25 00:34:17.318536] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:54.925 passed 00:07:54.925 Test: blob_resize_thin_test ...passed 00:07:54.925 Test: channel_ops ...passed 00:07:54.925 Test: blob_super ...passed 00:07:54.925 Test: blob_rw_verify_iov ...passed 00:07:55.184 Test: blob_unmap ...passed 00:07:55.184 Test: blob_iter ...passed 00:07:55.184 Test: blob_parse_md ...passed 00:07:55.184 Test: bs_load_pending_removal ...passed 00:07:55.184 Test: bs_unload ...[2024-07-25 00:34:17.834341] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:55.442 passed 00:07:55.442 Test: bs_usable_clusters ...passed 00:07:55.442 Test: blob_crc ...[2024-07-25 00:34:17.951407] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:55.442 [2024-07-25 00:34:17.951613] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:55.442 passed 00:07:55.442 Test: blob_flags ...passed 00:07:55.442 Test: bs_version ...passed 00:07:55.700 Test: blob_set_xattrs_test ...[2024-07-25 00:34:18.125885] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:55.701 [2024-07-25 00:34:18.126009] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:55.701 passed 00:07:55.701 Test: blob_thin_prov_alloc ...passed 
00:07:55.959 Test: blob_insert_cluster_msg_test ...passed 00:07:55.959 Test: blob_thin_prov_rw ...passed 00:07:55.959 Test: blob_thin_prov_rle ...passed 00:07:55.959 Test: blob_thin_prov_rw_iov ...passed 00:07:56.218 Test: blob_snapshot_rw ...passed 00:07:56.218 Test: blob_snapshot_rw_iov ...passed 00:07:56.526 Test: blob_inflate_rw ...passed 00:07:56.526 Test: blob_snapshot_freeze_io ...passed 00:07:56.526 Test: blob_operation_split_rw ...passed 00:07:56.786 Test: blob_operation_split_rw_iov ...passed 00:07:56.786 Test: blob_simultaneous_operations ...[2024-07-25 00:34:19.346961] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:56.786 [2024-07-25 00:34:19.347084] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:56.786 [2024-07-25 00:34:19.348521] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:56.786 [2024-07-25 00:34:19.348603] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:56.786 [2024-07-25 00:34:19.362731] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:56.786 [2024-07-25 00:34:19.362790] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:56.786 [2024-07-25 00:34:19.362904] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:56.786 [2024-07-25 00:34:19.362939] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:56.786 passed 00:07:57.045 Test: blob_persist_test ...passed 00:07:57.045 Test: blob_decouple_snapshot ...passed 00:07:57.045 Test: blob_seek_io_unit ...passed 00:07:57.045 Test: blob_nested_freezes ...passed 00:07:57.304 Test: blob_clone_resize ...passed 00:07:57.304 Test: blob_shallow_copy ...[2024-07-25 00:34:19.794572] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:07:57.304 [2024-07-25 00:34:19.794947] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:07:57.304 [2024-07-25 00:34:19.795227] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:07:57.304 passed 00:07:57.304 Suite: blob_blob_nocopy_noextent 00:07:57.304 Test: blob_write ...passed 00:07:57.304 Test: blob_read ...passed 00:07:57.563 Test: blob_rw_verify ...passed 00:07:57.563 Test: blob_rw_verify_iov_nomem ...passed 00:07:57.563 Test: blob_rw_iov_read_only ...passed 00:07:57.563 Test: blob_xattr ...passed 00:07:57.821 Test: blob_dirty_shutdown ...passed 00:07:57.821 Test: blob_is_degraded ...passed 00:07:57.821 Suite: blob_esnap_bs_nocopy_noextent 00:07:57.821 Test: blob_esnap_create ...passed 00:07:57.821 Test: blob_esnap_thread_add_remove ...passed 00:07:57.821 Test: blob_esnap_clone_snapshot ...passed 00:07:57.821 Test: blob_esnap_clone_inflate ...passed 00:07:57.821 Test: blob_esnap_clone_decouple ...passed 00:07:57.821 Test: blob_esnap_clone_reload 
...passed 00:07:58.080 Test: blob_esnap_hotplug ...passed 00:07:58.080 Test: blob_set_parent ...[2024-07-25 00:34:20.496307] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:07:58.080 [2024-07-25 00:34:20.496406] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:07:58.080 [2024-07-25 00:34:20.496538] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:07:58.080 [2024-07-25 00:34:20.496583] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:07:58.080 [2024-07-25 00:34:20.497055] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:58.080 passed 00:07:58.080 Test: blob_set_external_parent ...[2024-07-25 00:34:20.529998] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:07:58.080 [2024-07-25 00:34:20.530096] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:07:58.080 [2024-07-25 00:34:20.530128] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:07:58.080 [2024-07-25 00:34:20.530555] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:58.080 passed 00:07:58.080 Suite: blob_nocopy_extent 00:07:58.080 Test: blob_init ...[2024-07-25 00:34:20.541728] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:58.080 passed 00:07:58.080 Test: blob_thin_provision ...passed 00:07:58.080 Test: blob_read_only ...passed 00:07:58.080 Test: bs_load ...[2024-07-25 00:34:20.587168] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:58.080 passed 00:07:58.080 Test: bs_load_custom_cluster_size ...passed 00:07:58.080 Test: bs_load_after_failed_grow ...passed 00:07:58.080 Test: bs_cluster_sz ...[2024-07-25 00:34:20.611922] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:58.080 [2024-07-25 00:34:20.612185] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:07:58.080 [2024-07-25 00:34:20.612231] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:58.080 passed 00:07:58.080 Test: bs_resize_md ...passed 00:07:58.080 Test: bs_destroy ...passed 00:07:58.080 Test: bs_type ...passed 00:07:58.080 Test: bs_super_block ...passed 00:07:58.080 Test: bs_test_recover_cluster_count ...passed 00:07:58.080 Test: bs_grow_live ...passed 00:07:58.080 Test: bs_grow_live_no_space ...passed 00:07:58.080 Test: bs_test_grow ...passed 00:07:58.081 Test: blob_serialize_test ...passed 00:07:58.081 Test: super_block_crc ...passed 00:07:58.339 Test: blob_thin_prov_write_count_io ...passed 00:07:58.339 Test: blob_thin_prov_unmap_cluster ...passed 00:07:58.339 Test: bs_load_iter_test ...passed 00:07:58.339 Test: blob_relations ...[2024-07-25 00:34:20.794979] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:58.339 [2024-07-25 00:34:20.795099] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:58.339 [2024-07-25 00:34:20.795949] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:58.339 [2024-07-25 00:34:20.796001] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:58.339 passed 00:07:58.339 Test: blob_relations2 ...[2024-07-25 00:34:20.809570] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:58.339 [2024-07-25 00:34:20.809660] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:58.339 [2024-07-25 00:34:20.809688] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:58.339 [2024-07-25 00:34:20.809713] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:58.339 [2024-07-25 00:34:20.810927] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:58.339 [2024-07-25 00:34:20.810997] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:58.339 [2024-07-25 00:34:20.811346] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:58.339 [2024-07-25 00:34:20.811406] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:58.339 passed 00:07:58.339 Test: blob_relations3 ...passed 00:07:58.339 Test: blobstore_clean_power_failure ...passed 00:07:58.339 Test: blob_delete_snapshot_power_failure ...[2024-07-25 00:34:20.968513] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:58.339 [2024-07-25 00:34:20.981038] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:58.599 [2024-07-25 00:34:20.993475] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:58.599 [2024-07-25 00:34:20.993576] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:58.599 [2024-07-25 00:34:20.993614] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:58.599 [2024-07-25 00:34:21.005861] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:58.599 [2024-07-25 00:34:21.005965] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:58.599 [2024-07-25 00:34:21.005990] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:58.599 [2024-07-25 00:34:21.006026] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:58.599 [2024-07-25 00:34:21.017924] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:58.599 [2024-07-25 00:34:21.018015] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:58.599 [2024-07-25 00:34:21.018040] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:58.599 [2024-07-25 00:34:21.018075] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:58.599 [2024-07-25 00:34:21.029877] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:58.599 [2024-07-25 00:34:21.029970] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:58.599 [2024-07-25 00:34:21.041999] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:58.599 [2024-07-25 00:34:21.042105] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:58.599 [2024-07-25 00:34:21.054230] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:58.599 [2024-07-25 00:34:21.054365] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:58.599 passed 00:07:58.599 Test: blob_create_snapshot_power_failure ...[2024-07-25 00:34:21.090478] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:58.599 [2024-07-25 00:34:21.102248] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:58.599 [2024-07-25 00:34:21.125837] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:58.599 [2024-07-25 00:34:21.137735] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:58.599 passed 00:07:58.599 Test: blob_io_unit ...passed 00:07:58.599 Test: blob_io_unit_compatibility ...passed 00:07:58.599 Test: blob_ext_md_pages ...passed 00:07:58.599 Test: blob_esnap_io_4096_4096 ...passed 00:07:58.858 Test: blob_esnap_io_512_512 ...passed 00:07:58.858 Test: blob_esnap_io_4096_512 ...passed 00:07:58.858 Test: 
blob_esnap_io_512_4096 ...passed 00:07:58.858 Test: blob_esnap_clone_resize ...passed 00:07:58.858 Suite: blob_bs_nocopy_extent 00:07:58.858 Test: blob_open ...passed 00:07:58.858 Test: blob_create ...[2024-07-25 00:34:21.394265] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:58.858 passed 00:07:58.858 Test: blob_create_loop ...passed 00:07:58.858 Test: blob_create_fail ...[2024-07-25 00:34:21.489448] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:58.858 passed 00:07:59.117 Test: blob_create_internal ...passed 00:07:59.117 Test: blob_create_zero_extent ...passed 00:07:59.117 Test: blob_snapshot ...passed 00:07:59.117 Test: blob_clone ...passed 00:07:59.117 Test: blob_inflate ...[2024-07-25 00:34:21.662546] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:59.117 passed 00:07:59.117 Test: blob_delete ...passed 00:07:59.117 Test: blob_resize_test ...[2024-07-25 00:34:21.724893] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:59.117 passed 00:07:59.375 Test: blob_resize_thin_test ...passed 00:07:59.375 Test: channel_ops ...passed 00:07:59.375 Test: blob_super ...passed 00:07:59.375 Test: blob_rw_verify_iov ...passed 00:07:59.375 Test: blob_unmap ...passed 00:07:59.375 Test: blob_iter ...passed 00:07:59.375 Test: blob_parse_md ...passed 00:07:59.375 Test: bs_load_pending_removal ...passed 00:07:59.375 Test: bs_unload ...[2024-07-25 00:34:22.018584] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:59.634 passed 00:07:59.634 Test: bs_usable_clusters ...passed 00:07:59.634 Test: blob_crc ...[2024-07-25 00:34:22.083823] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:59.634 [2024-07-25 00:34:22.083931] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:59.634 passed 00:07:59.634 Test: blob_flags ...passed 00:07:59.634 Test: bs_version ...passed 00:07:59.634 Test: blob_set_xattrs_test ...[2024-07-25 00:34:22.182042] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:59.634 [2024-07-25 00:34:22.182154] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:59.634 passed 00:07:59.893 Test: blob_thin_prov_alloc ...passed 00:07:59.893 Test: blob_insert_cluster_msg_test ...passed 00:07:59.893 Test: blob_thin_prov_rw ...passed 00:07:59.893 Test: blob_thin_prov_rle ...passed 00:07:59.893 Test: blob_thin_prov_rw_iov ...passed 00:07:59.893 Test: blob_snapshot_rw ...passed 00:07:59.893 Test: blob_snapshot_rw_iov ...passed 00:08:00.151 Test: blob_inflate_rw ...passed 00:08:00.151 Test: blob_snapshot_freeze_io ...passed 00:08:00.410 Test: blob_operation_split_rw ...passed 00:08:00.776 Test: blob_operation_split_rw_iov ...passed 00:08:00.776 Test: blob_simultaneous_operations ...[2024-07-25 00:34:23.092865] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:00.776 [2024-07-25 00:34:23.092953] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:00.776 [2024-07-25 00:34:23.094231] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:00.776 [2024-07-25 00:34:23.094307] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:00.776 [2024-07-25 00:34:23.106278] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:00.776 [2024-07-25 00:34:23.106370] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:00.776 [2024-07-25 00:34:23.106471] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:00.776 [2024-07-25 00:34:23.106488] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:00.776 passed 00:08:00.776 Test: blob_persist_test ...passed 00:08:00.776 Test: blob_decouple_snapshot ...passed 00:08:00.776 Test: blob_seek_io_unit ...passed 00:08:00.776 Test: blob_nested_freezes ...passed 00:08:00.776 Test: blob_clone_resize ...passed 00:08:01.046 Test: blob_shallow_copy ...[2024-07-25 00:34:23.368778] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:08:01.046 [2024-07-25 00:34:23.369095] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:08:01.046 [2024-07-25 00:34:23.369305] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:08:01.046 passed 00:08:01.046 Suite: blob_blob_nocopy_extent 00:08:01.046 Test: blob_write ...passed 00:08:01.046 Test: blob_read ...passed 00:08:01.046 Test: blob_rw_verify ...passed 00:08:01.046 Test: blob_rw_verify_iov_nomem ...passed 00:08:01.046 Test: blob_rw_iov_read_only ...passed 00:08:01.046 Test: blob_xattr ...passed 00:08:01.046 Test: blob_dirty_shutdown ...passed 00:08:01.046 Test: blob_is_degraded ...passed 00:08:01.046 Suite: blob_esnap_bs_nocopy_extent 00:08:01.046 Test: blob_esnap_create ...passed 00:08:01.304 Test: blob_esnap_thread_add_remove ...passed 00:08:01.304 Test: blob_esnap_clone_snapshot ...passed 00:08:01.304 Test: blob_esnap_clone_inflate ...passed 00:08:01.304 Test: blob_esnap_clone_decouple ...passed 00:08:01.304 Test: blob_esnap_clone_reload ...passed 00:08:01.304 Test: blob_esnap_hotplug ...passed 00:08:01.304 Test: blob_set_parent ...[2024-07-25 00:34:23.889198] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:08:01.304 [2024-07-25 00:34:23.889291] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:08:01.304 [2024-07-25 00:34:23.889387] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:08:01.304 
[2024-07-25 00:34:23.889417] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:08:01.304 [2024-07-25 00:34:23.889783] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:08:01.304 passed 00:08:01.304 Test: blob_set_external_parent ...[2024-07-25 00:34:23.922381] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:08:01.304 [2024-07-25 00:34:23.922456] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:08:01.304 [2024-07-25 00:34:23.922476] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:08:01.304 [2024-07-25 00:34:23.922812] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:08:01.304 passed 00:08:01.304 Suite: blob_copy_noextent 00:08:01.304 Test: blob_init ...[2024-07-25 00:34:23.933887] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:08:01.304 passed 00:08:01.304 Test: blob_thin_provision ...passed 00:08:01.562 Test: blob_read_only ...passed 00:08:01.562 Test: bs_load ...[2024-07-25 00:34:23.978667] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:08:01.562 passed 00:08:01.562 Test: bs_load_custom_cluster_size ...passed 00:08:01.562 Test: bs_load_after_failed_grow ...passed 00:08:01.562 Test: bs_cluster_sz ...[2024-07-25 00:34:24.002605] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:08:01.562 [2024-07-25 00:34:24.002789] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:08:01.562 [2024-07-25 00:34:24.002824] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:08:01.562 passed 00:08:01.562 Test: bs_resize_md ...passed 00:08:01.562 Test: bs_destroy ...passed 00:08:01.562 Test: bs_type ...passed 00:08:01.562 Test: bs_super_block ...passed 00:08:01.562 Test: bs_test_recover_cluster_count ...passed 00:08:01.562 Test: bs_grow_live ...passed 00:08:01.562 Test: bs_grow_live_no_space ...passed 00:08:01.562 Test: bs_test_grow ...passed 00:08:01.562 Test: blob_serialize_test ...passed 00:08:01.562 Test: super_block_crc ...passed 00:08:01.562 Test: blob_thin_prov_write_count_io ...passed 00:08:01.562 Test: blob_thin_prov_unmap_cluster ...passed 00:08:01.562 Test: bs_load_iter_test ...passed 00:08:01.562 Test: blob_relations ...[2024-07-25 00:34:24.188243] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:01.562 [2024-07-25 00:34:24.188330] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:01.562 [2024-07-25 00:34:24.188843] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:01.562 [2024-07-25 00:34:24.188876] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:01.562 passed 00:08:01.562 Test: blob_relations2 ...[2024-07-25 00:34:24.201624] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:01.562 [2024-07-25 00:34:24.201694] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:01.562 [2024-07-25 00:34:24.201721] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:01.562 [2024-07-25 00:34:24.201735] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:01.562 [2024-07-25 00:34:24.202560] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:01.562 [2024-07-25 00:34:24.202609] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:01.562 [2024-07-25 00:34:24.202862] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:01.562 [2024-07-25 00:34:24.202897] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:01.562 passed 00:08:01.820 Test: blob_relations3 ...passed 00:08:01.820 Test: blobstore_clean_power_failure ...passed 00:08:01.821 Test: blob_delete_snapshot_power_failure ...[2024-07-25 00:34:24.357079] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:01.821 [2024-07-25 00:34:24.368751] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:01.821 [2024-07-25 00:34:24.368847] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:01.821 [2024-07-25 00:34:24.368872] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:01.821 [2024-07-25 00:34:24.380526] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:01.821 [2024-07-25 00:34:24.380603] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:01.821 [2024-07-25 00:34:24.380624] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:01.821 [2024-07-25 00:34:24.380655] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:01.821 [2024-07-25 00:34:24.392340] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:08:01.821 [2024-07-25 00:34:24.392434] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:01.821 [2024-07-25 00:34:24.404190] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:08:01.821 [2024-07-25 00:34:24.404300] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:01.821 [2024-07-25 00:34:24.416044] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:08:01.821 [2024-07-25 00:34:24.416127] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:01.821 passed 00:08:01.821 Test: blob_create_snapshot_power_failure ...[2024-07-25 00:34:24.450546] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:01.821 [2024-07-25 00:34:24.473258] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:02.079 [2024-07-25 00:34:24.484833] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:08:02.079 passed 00:08:02.079 Test: blob_io_unit ...passed 00:08:02.079 Test: blob_io_unit_compatibility ...passed 00:08:02.079 Test: blob_ext_md_pages ...passed 00:08:02.079 Test: blob_esnap_io_4096_4096 ...passed 00:08:02.079 Test: blob_esnap_io_512_512 ...passed 00:08:02.079 Test: blob_esnap_io_4096_512 ...passed 00:08:02.079 Test: blob_esnap_io_512_4096 ...passed 00:08:02.079 Test: blob_esnap_clone_resize ...passed 00:08:02.079 Suite: blob_bs_copy_noextent 00:08:02.079 Test: blob_open ...passed 00:08:02.337 Test: blob_create ...[2024-07-25 00:34:24.739257] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:08:02.337 passed 00:08:02.337 Test: blob_create_loop ...passed 00:08:02.337 Test: blob_create_fail ...[2024-07-25 00:34:24.826244] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:02.337 passed 00:08:02.337 Test: blob_create_internal ...passed 00:08:02.337 Test: blob_create_zero_extent ...passed 00:08:02.337 Test: blob_snapshot ...passed 00:08:02.337 Test: blob_clone ...passed 00:08:02.595 Test: blob_inflate 
...[2024-07-25 00:34:24.991835] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:08:02.595 passed 00:08:02.595 Test: blob_delete ...passed 00:08:02.595 Test: blob_resize_test ...[2024-07-25 00:34:25.055499] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:08:02.595 passed 00:08:02.596 Test: blob_resize_thin_test ...passed 00:08:02.596 Test: channel_ops ...passed 00:08:02.596 Test: blob_super ...passed 00:08:02.596 Test: blob_rw_verify_iov ...passed 00:08:02.596 Test: blob_unmap ...passed 00:08:02.861 Test: blob_iter ...passed 00:08:02.861 Test: blob_parse_md ...passed 00:08:02.861 Test: bs_load_pending_removal ...passed 00:08:02.861 Test: bs_unload ...[2024-07-25 00:34:25.348533] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:08:02.861 passed 00:08:02.861 Test: bs_usable_clusters ...passed 00:08:02.861 Test: blob_crc ...[2024-07-25 00:34:25.412217] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:02.861 [2024-07-25 00:34:25.412349] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:02.862 passed 00:08:02.862 Test: blob_flags ...passed 00:08:02.862 Test: bs_version ...passed 00:08:02.862 Test: blob_set_xattrs_test ...[2024-07-25 00:34:25.510599] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:02.862 [2024-07-25 00:34:25.510726] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:03.127 passed 00:08:03.127 Test: blob_thin_prov_alloc ...passed 00:08:03.127 Test: blob_insert_cluster_msg_test ...passed 00:08:03.127 Test: blob_thin_prov_rw ...passed 00:08:03.127 Test: blob_thin_prov_rle ...passed 00:08:03.384 Test: blob_thin_prov_rw_iov ...passed 00:08:03.384 Test: blob_snapshot_rw ...passed 00:08:03.384 Test: blob_snapshot_rw_iov ...passed 00:08:03.642 Test: blob_inflate_rw ...passed 00:08:03.642 Test: blob_snapshot_freeze_io ...passed 00:08:03.642 Test: blob_operation_split_rw ...passed 00:08:03.899 Test: blob_operation_split_rw_iov ...passed 00:08:03.899 Test: blob_simultaneous_operations ...[2024-07-25 00:34:26.428813] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:03.899 [2024-07-25 00:34:26.428918] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:03.899 [2024-07-25 00:34:26.429293] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:03.899 [2024-07-25 00:34:26.429333] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:03.899 [2024-07-25 00:34:26.431895] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:03.899 [2024-07-25 00:34:26.431940] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:03.899 [2024-07-25 00:34:26.432020] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:03.899 [2024-07-25 00:34:26.432035] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:03.899 passed 00:08:03.899 Test: blob_persist_test ...passed 00:08:03.899 Test: blob_decouple_snapshot ...passed 00:08:03.899 Test: blob_seek_io_unit ...passed 00:08:04.155 Test: blob_nested_freezes ...passed 00:08:04.156 Test: blob_clone_resize ...passed 00:08:04.156 Test: blob_shallow_copy ...[2024-07-25 00:34:26.655181] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:08:04.156 [2024-07-25 00:34:26.655481] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:08:04.156 [2024-07-25 00:34:26.655693] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:08:04.156 passed 00:08:04.156 Suite: blob_blob_copy_noextent 00:08:04.156 Test: blob_write ...passed 00:08:04.156 Test: blob_read ...passed 00:08:04.156 Test: blob_rw_verify ...passed 00:08:04.156 Test: blob_rw_verify_iov_nomem ...passed 00:08:04.412 Test: blob_rw_iov_read_only ...passed 00:08:04.412 Test: blob_xattr ...passed 00:08:04.412 Test: blob_dirty_shutdown ...passed 00:08:04.412 Test: blob_is_degraded ...passed 00:08:04.412 Suite: blob_esnap_bs_copy_noextent 00:08:04.412 Test: blob_esnap_create ...passed 00:08:04.412 Test: blob_esnap_thread_add_remove ...passed 00:08:04.412 Test: blob_esnap_clone_snapshot ...passed 00:08:04.412 Test: blob_esnap_clone_inflate ...passed 00:08:04.669 Test: blob_esnap_clone_decouple ...passed 00:08:04.669 Test: blob_esnap_clone_reload ...passed 00:08:04.669 Test: blob_esnap_hotplug ...passed 00:08:04.669 Test: blob_set_parent ...[2024-07-25 00:34:27.163462] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:08:04.669 [2024-07-25 00:34:27.163576] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:08:04.669 [2024-07-25 00:34:27.163672] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:08:04.669 [2024-07-25 00:34:27.163707] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:08:04.669 [2024-07-25 00:34:27.164020] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:08:04.669 passed 00:08:04.669 Test: blob_set_external_parent ...[2024-07-25 00:34:27.195205] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:08:04.669 [2024-07-25 00:34:27.195296] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:08:04.669 [2024-07-25 00:34:27.195316] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: 
external snapshot is already the parent of blob 00:08:04.669 [2024-07-25 00:34:27.195571] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:08:04.669 passed 00:08:04.669 Suite: blob_copy_extent 00:08:04.669 Test: blob_init ...[2024-07-25 00:34:27.206115] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:08:04.669 passed 00:08:04.669 Test: blob_thin_provision ...passed 00:08:04.669 Test: blob_read_only ...passed 00:08:04.669 Test: bs_load ...[2024-07-25 00:34:27.248375] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:08:04.669 passed 00:08:04.669 Test: bs_load_custom_cluster_size ...passed 00:08:04.669 Test: bs_load_after_failed_grow ...passed 00:08:04.669 Test: bs_cluster_sz ...[2024-07-25 00:34:27.270823] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:08:04.669 [2024-07-25 00:34:27.271008] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:08:04.669 [2024-07-25 00:34:27.271042] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:08:04.669 passed 00:08:04.669 Test: bs_resize_md ...passed 00:08:04.669 Test: bs_destroy ...passed 00:08:04.927 Test: bs_type ...passed 00:08:04.927 Test: bs_super_block ...passed 00:08:04.927 Test: bs_test_recover_cluster_count ...passed 00:08:04.927 Test: bs_grow_live ...passed 00:08:04.927 Test: bs_grow_live_no_space ...passed 00:08:04.927 Test: bs_test_grow ...passed 00:08:04.927 Test: blob_serialize_test ...passed 00:08:04.927 Test: super_block_crc ...passed 00:08:04.927 Test: blob_thin_prov_write_count_io ...passed 00:08:04.927 Test: blob_thin_prov_unmap_cluster ...passed 00:08:04.927 Test: bs_load_iter_test ...passed 00:08:04.927 Test: blob_relations ...[2024-07-25 00:34:27.433990] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:04.927 [2024-07-25 00:34:27.434087] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:04.927 [2024-07-25 00:34:27.434743] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:04.927 [2024-07-25 00:34:27.434788] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:04.927 passed 00:08:04.927 Test: blob_relations2 ...[2024-07-25 00:34:27.447768] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:04.927 [2024-07-25 00:34:27.447833] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:04.927 [2024-07-25 00:34:27.447863] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:04.927 [2024-07-25 00:34:27.447879] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:04.927 [2024-07-25 
00:34:27.448654] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:04.927 [2024-07-25 00:34:27.448694] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:04.927 [2024-07-25 00:34:27.448965] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:04.927 [2024-07-25 00:34:27.448998] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:04.927 passed 00:08:04.927 Test: blob_relations3 ...passed 00:08:05.197 Test: blobstore_clean_power_failure ...passed 00:08:05.197 Test: blob_delete_snapshot_power_failure ...[2024-07-25 00:34:27.596376] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:05.197 [2024-07-25 00:34:27.607599] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:05.197 [2024-07-25 00:34:27.618803] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:05.197 [2024-07-25 00:34:27.618887] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:05.197 [2024-07-25 00:34:27.618911] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.197 [2024-07-25 00:34:27.629936] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:05.197 [2024-07-25 00:34:27.630016] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:05.197 [2024-07-25 00:34:27.630036] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:05.197 [2024-07-25 00:34:27.630059] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.197 [2024-07-25 00:34:27.641272] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:05.197 [2024-07-25 00:34:27.643840] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:05.197 [2024-07-25 00:34:27.643881] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:05.197 [2024-07-25 00:34:27.643908] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.197 [2024-07-25 00:34:27.655171] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:08:05.197 [2024-07-25 00:34:27.655254] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.197 [2024-07-25 00:34:27.666570] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:08:05.197 [2024-07-25 00:34:27.666677] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.197 [2024-07-25 00:34:27.678132] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:08:05.197 [2024-07-25 00:34:27.678242] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.197 passed 00:08:05.197 Test: blob_create_snapshot_power_failure ...[2024-07-25 00:34:27.712774] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:05.197 [2024-07-25 00:34:27.724045] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:05.197 [2024-07-25 00:34:27.746125] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:05.197 [2024-07-25 00:34:27.757500] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:08:05.197 passed 00:08:05.197 Test: blob_io_unit ...passed 00:08:05.197 Test: blob_io_unit_compatibility ...passed 00:08:05.197 Test: blob_ext_md_pages ...passed 00:08:05.467 Test: blob_esnap_io_4096_4096 ...passed 00:08:05.467 Test: blob_esnap_io_512_512 ...passed 00:08:05.467 Test: blob_esnap_io_4096_512 ...passed 00:08:05.467 Test: blob_esnap_io_512_4096 ...passed 00:08:05.467 Test: blob_esnap_clone_resize ...passed 00:08:05.467 Suite: blob_bs_copy_extent 00:08:05.467 Test: blob_open ...passed 00:08:05.467 Test: blob_create ...[2024-07-25 00:34:28.009074] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:08:05.467 passed 00:08:05.467 Test: blob_create_loop ...passed 00:08:05.467 Test: blob_create_fail ...[2024-07-25 00:34:28.097212] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:05.467 passed 00:08:05.723 Test: blob_create_internal ...passed 00:08:05.724 Test: blob_create_zero_extent ...passed 00:08:05.724 Test: blob_snapshot ...passed 00:08:05.724 Test: blob_clone ...passed 00:08:05.724 Test: blob_inflate ...[2024-07-25 00:34:28.260693] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 
00:08:05.724 passed 00:08:05.724 Test: blob_delete ...passed 00:08:05.724 Test: blob_resize_test ...[2024-07-25 00:34:28.321573] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:08:05.724 passed 00:08:05.724 Test: blob_resize_thin_test ...passed 00:08:05.980 Test: channel_ops ...passed 00:08:05.980 Test: blob_super ...passed 00:08:05.980 Test: blob_rw_verify_iov ...passed 00:08:05.980 Test: blob_unmap ...passed 00:08:05.980 Test: blob_iter ...passed 00:08:05.980 Test: blob_parse_md ...passed 00:08:05.980 Test: bs_load_pending_removal ...passed 00:08:05.980 Test: bs_unload ...[2024-07-25 00:34:28.614607] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:08:05.980 passed 00:08:06.238 Test: bs_usable_clusters ...passed 00:08:06.238 Test: blob_crc ...[2024-07-25 00:34:28.688108] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:06.238 [2024-07-25 00:34:28.688290] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:06.238 passed 00:08:06.238 Test: blob_flags ...passed 00:08:06.238 Test: bs_version ...passed 00:08:06.238 Test: blob_set_xattrs_test ...[2024-07-25 00:34:28.867436] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:06.238 [2024-07-25 00:34:28.867591] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:06.238 passed 00:08:06.495 Test: blob_thin_prov_alloc ...passed 00:08:06.495 Test: blob_insert_cluster_msg_test ...passed 00:08:06.753 Test: blob_thin_prov_rw ...passed 00:08:06.753 Test: blob_thin_prov_rle ...passed 00:08:06.753 Test: blob_thin_prov_rw_iov ...passed 00:08:06.753 Test: blob_snapshot_rw ...passed 00:08:07.011 Test: blob_snapshot_rw_iov ...passed 00:08:07.011 Test: blob_inflate_rw ...passed 00:08:07.268 Test: blob_snapshot_freeze_io ...passed 00:08:07.268 Test: blob_operation_split_rw ...passed 00:08:07.526 Test: blob_operation_split_rw_iov ...passed 00:08:07.526 Test: blob_simultaneous_operations ...[2024-07-25 00:34:30.063066] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:07.526 [2024-07-25 00:34:30.063191] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:07.526 [2024-07-25 00:34:30.063742] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:07.526 [2024-07-25 00:34:30.063797] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:07.526 [2024-07-25 00:34:30.066969] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:07.526 [2024-07-25 00:34:30.067024] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:07.526 [2024-07-25 00:34:30.067121] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:07.526 [2024-07-25 00:34:30.067141] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:07.526 passed 00:08:07.526 Test: blob_persist_test ...passed 00:08:07.783 Test: blob_decouple_snapshot ...passed 00:08:07.783 Test: blob_seek_io_unit ...passed 00:08:07.784 Test: blob_nested_freezes ...passed 00:08:07.784 Test: blob_clone_resize ...passed 00:08:08.041 Test: blob_shallow_copy ...[2024-07-25 00:34:30.456301] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:08:08.041 [2024-07-25 00:34:30.456686] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:08:08.041 [2024-07-25 00:34:30.456960] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:08:08.041 passed 00:08:08.041 Suite: blob_blob_copy_extent 00:08:08.041 Test: blob_write ...passed 00:08:08.041 Test: blob_read ...passed 00:08:08.041 Test: blob_rw_verify ...passed 00:08:08.298 Test: blob_rw_verify_iov_nomem ...passed 00:08:08.298 Test: blob_rw_iov_read_only ...passed 00:08:08.298 Test: blob_xattr ...passed 00:08:08.298 Test: blob_dirty_shutdown ...passed 00:08:08.298 Test: blob_is_degraded ...passed 00:08:08.298 Suite: blob_esnap_bs_copy_extent 00:08:08.556 Test: blob_esnap_create ...passed 00:08:08.556 Test: blob_esnap_thread_add_remove ...passed 00:08:08.556 Test: blob_esnap_clone_snapshot ...passed 00:08:08.556 Test: blob_esnap_clone_inflate ...passed 00:08:08.814 Test: blob_esnap_clone_decouple ...passed 00:08:08.814 Test: blob_esnap_clone_reload ...passed 00:08:08.814 Test: blob_esnap_hotplug ...passed 00:08:08.814 Test: blob_set_parent ...[2024-07-25 00:34:31.385183] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:08:08.814 [2024-07-25 00:34:31.385299] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:08:08.814 [2024-07-25 00:34:31.385418] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:08:08.814 [2024-07-25 00:34:31.385460] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:08:08.814 [2024-07-25 00:34:31.385915] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:08:08.814 passed 00:08:08.814 Test: blob_set_external_parent ...[2024-07-25 00:34:31.442894] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:08:08.814 [2024-07-25 00:34:31.443033] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:08:08.814 [2024-07-25 00:34:31.443066] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:08:08.814 [2024-07-25 00:34:31.443483] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:08:08.814 passed 00:08:08.814 00:08:08.814 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.814 suites 16 16 n/a 0 0 00:08:08.814 tests 376 376 376 0 0 00:08:08.814 asserts 143973 143973 143973 0 n/a 00:08:08.814 00:08:08.814 Elapsed time = 16.141 seconds 00:08:09.073 00:34:31 unittest.unittest_blob_blobfs -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:08:09.073 00:08:09.073 00:08:09.073 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.073 http://cunit.sourceforge.net/ 00:08:09.073 00:08:09.073 00:08:09.073 Suite: blob_bdev 00:08:09.073 Test: create_bs_dev ...passed 00:08:09.073 Test: create_bs_dev_ro ...[2024-07-25 00:34:31.581053] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 529:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:08:09.073 passed 00:08:09.073 Test: create_bs_dev_rw ...passed 00:08:09.073 Test: claim_bs_dev ...passed 00:08:09.073 Test: claim_bs_dev_ro ...[2024-07-25 00:34:31.581490] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:08:09.073 passed 00:08:09.073 Test: deferred_destroy_refs ...passed 00:08:09.073 Test: deferred_destroy_channels ...passed 00:08:09.073 Test: deferred_destroy_threads ...passed 00:08:09.073 00:08:09.073 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.073 suites 1 1 n/a 0 0 00:08:09.073 tests 8 8 8 0 0 00:08:09.073 asserts 119 119 119 0 n/a 00:08:09.073 00:08:09.073 Elapsed time = 0.001 seconds 00:08:09.073 00:34:31 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:08:09.073 00:08:09.073 00:08:09.073 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.073 http://cunit.sourceforge.net/ 00:08:09.073 00:08:09.073 00:08:09.073 Suite: tree 00:08:09.073 Test: blobfs_tree_op_test ...passed 00:08:09.073 00:08:09.073 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.073 suites 1 1 n/a 0 0 00:08:09.073 tests 1 1 1 0 0 00:08:09.073 asserts 27 27 27 0 n/a 00:08:09.073 00:08:09.073 Elapsed time = 0.000 seconds 00:08:09.073 00:34:31 unittest.unittest_blob_blobfs -- unit/unittest.sh@44 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:08:09.073 00:08:09.073 00:08:09.073 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.073 http://cunit.sourceforge.net/ 00:08:09.073 00:08:09.073 00:08:09.073 Suite: blobfs_async_ut 00:08:09.331 Test: fs_init ...passed 00:08:09.331 Test: fs_open ...passed 00:08:09.331 Test: fs_create ...passed 00:08:09.331 Test: fs_truncate ...passed 00:08:09.331 Test: fs_rename ...[2024-07-25 00:34:31.838801] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:08:09.331 passed 00:08:09.331 Test: fs_rw_async ...passed 00:08:09.331 Test: fs_writev_readv_async ...passed 00:08:09.331 Test: tree_find_buffer_ut ...passed 00:08:09.331 Test: channel_ops ...passed 00:08:09.331 Test: channel_ops_sync ...passed 00:08:09.331 00:08:09.331 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.331 suites 1 1 n/a 0 0 00:08:09.331 tests 10 10 10 0 0 00:08:09.331 asserts 292 292 292 0 n/a 00:08:09.331 00:08:09.331 Elapsed time = 0.243 seconds 00:08:09.331 00:34:31 unittest.unittest_blob_blobfs -- 
unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:08:09.589 00:08:09.589 00:08:09.589 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.589 http://cunit.sourceforge.net/ 00:08:09.589 00:08:09.589 00:08:09.589 Suite: blobfs_sync_ut 00:08:09.589 Test: cache_read_after_write ...[2024-07-25 00:34:32.080165] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:08:09.589 passed 00:08:09.589 Test: file_length ...passed 00:08:09.589 Test: append_write_to_extend_blob ...passed 00:08:09.589 Test: partial_buffer ...passed 00:08:09.589 Test: cache_write_null_buffer ...passed 00:08:09.589 Test: fs_create_sync ...passed 00:08:09.589 Test: fs_rename_sync ...passed 00:08:09.589 Test: cache_append_no_cache ...passed 00:08:09.589 Test: fs_delete_file_without_close ...passed 00:08:09.589 00:08:09.589 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.589 suites 1 1 n/a 0 0 00:08:09.589 tests 9 9 9 0 0 00:08:09.589 asserts 345 345 345 0 n/a 00:08:09.589 00:08:09.589 Elapsed time = 0.404 seconds 00:08:09.847 00:34:32 unittest.unittest_blob_blobfs -- unit/unittest.sh@47 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:08:09.847 00:08:09.847 00:08:09.847 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.847 http://cunit.sourceforge.net/ 00:08:09.847 00:08:09.847 00:08:09.847 Suite: blobfs_bdev_ut 00:08:09.847 Test: spdk_blobfs_bdev_detect_test ...[2024-07-25 00:34:32.280665] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:08:09.847 passed 00:08:09.847 Test: spdk_blobfs_bdev_create_test ...[2024-07-25 00:34:32.281148] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:08:09.847 passed 00:08:09.847 Test: spdk_blobfs_bdev_mount_test ...passed 00:08:09.847 00:08:09.847 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.847 suites 1 1 n/a 0 0 00:08:09.847 tests 3 3 3 0 0 00:08:09.847 asserts 9 9 9 0 n/a 00:08:09.847 00:08:09.847 Elapsed time = 0.001 seconds 00:08:09.847 00:08:09.847 real 0m17.017s 00:08:09.847 user 0m16.285s 00:08:09.847 sys 0m0.952s 00:08:09.847 00:34:32 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.847 00:34:32 unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x 00:08:09.847 ************************************ 00:08:09.847 END TEST unittest_blob_blobfs 00:08:09.847 ************************************ 00:08:09.847 00:34:32 unittest -- unit/unittest.sh@234 -- # run_test unittest_event unittest_event 00:08:09.847 00:34:32 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:09.847 00:34:32 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.847 00:34:32 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:09.847 ************************************ 00:08:09.847 START TEST unittest_event 00:08:09.847 ************************************ 00:08:09.847 00:34:32 unittest.unittest_event -- common/autotest_common.sh@1123 -- # unittest_event 00:08:09.847 00:34:32 unittest.unittest_event -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:08:09.847 00:08:09.847 00:08:09.847 CUnit - A unit testing framework for C - Version 2.1-3 
00:08:09.847 http://cunit.sourceforge.net/ 00:08:09.847 00:08:09.847 00:08:09.847 Suite: app_suite 00:08:09.847 Test: test_spdk_app_parse_args ...app_ut [options] 00:08:09.847 00:08:09.847 CPU options: 00:08:09.847 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:09.847 (like [0,1,10]) 00:08:09.847 --lcores lcore to CPU mapping list. The list is in the format: 00:08:09.847 [<,lcores[@CPUs]>...] 00:08:09.847 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:09.847 Within the group, '-' is used for range separator, 00:08:09.847 ',' is used for single number separator. 00:08:09.847 '( )' can be omitted for single element group, 00:08:09.847 '@' can be omitted if cpus and lcores have the same value 00:08:09.847 --disable-cpumask-locks Disable CPU core lock files. 00:08:09.847 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:09.847 pollers in the app support interrupt mode) 00:08:09.847 -p, --main-core main (primary) core for DPDK 00:08:09.847 00:08:09.847 Configuration options: 00:08:09.847 -c, --config, --json JSON config file 00:08:09.847 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:09.847 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:08:09.847 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:09.847 --rpcs-allowed comma-separated list of permitted RPCS 00:08:09.848 --json-ignore-init-errors don't exit on invalid config entry 00:08:09.848 00:08:09.848 Memory options: 00:08:09.848 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:09.848 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:09.848 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:09.848 -R, --huge-unlink unlink huge files after initialization 00:08:09.848 -n, --mem-channels number of memory channels used for DPDK 00:08:09.848 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:09.848 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:09.848 --no-huge run without using hugepages 00:08:09.848 -i, --shm-id shared memory ID (optional) 00:08:09.848 -g, --single-file-segments force creating just one hugetlbfs file 00:08:09.848 00:08:09.848 PCI options: 00:08:09.848 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:09.848 app_ut: invalid option -- 'z' 00:08:09.848 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:09.848 -u, --no-pci disable PCI access 00:08:09.848 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:09.848 00:08:09.848 Log options: 00:08:09.848 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:08:09.848 --silence-noticelog disable notice level logging to stderr 00:08:09.848 00:08:09.848 Trace options: 00:08:09.848 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:09.848 setting 0 to disable trace (default 32768) 00:08:09.848 Tracepoints vary in size and can use more than one trace entry. 00:08:09.848 -e, --tpoint-group [:] 00:08:09.848 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:08:09.848 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:09.848 a tracepoint group. First tpoint inside a group can be enabled by 00:08:09.848 setting tpoint_mask to 1 (e.g. bdev:0x1). 
Groups and masks can be 00:08:09.848 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:08:09.848 in /include/spdk_internal/trace_defs.h 00:08:09.848 00:08:09.848 Other options: 00:08:09.848 -h, --help show this usage 00:08:09.848 -v, --version print SPDK version 00:08:09.848 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:09.848 --env-context Opaque context for use of the env implementation 00:08:09.848 app_ut [options] 00:08:09.848 00:08:09.848 CPU options: 00:08:09.848 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:09.848 (like [0,1,10]) 00:08:09.848 --lcores lcore to CPU mapping list. The list is in the format: 00:08:09.848 [<,lcores[@CPUs]>...] 00:08:09.848 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:09.848 Within the group, '-' is used for range separator, 00:08:09.848 ',' is used for single number separator. 00:08:09.848 '( )' can be omitted for single element group, 00:08:09.848 '@' can be omitted if cpus and lcores have the same value 00:08:09.848 --disable-cpumask-locks Disable CPU core lock files. 00:08:09.848 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:09.848 pollers in the app support interrupt mode) 00:08:09.848 -p, --main-core main (primary) core for DPDK 00:08:09.848 00:08:09.848 Configuration options: 00:08:09.848 -c, --config, --json JSON config file 00:08:09.848 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:09.848 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:08:09.848 --wait-for-rpc wait for RPCs to initialize subsystemsapp_ut: unrecognized option '--test-long-opt' 00:08:09.848 00:08:09.848 --rpcs-allowed comma-separated list of permitted RPCS 00:08:09.848 --json-ignore-init-errors don't exit on invalid config entry 00:08:09.848 00:08:09.848 Memory options: 00:08:09.848 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:09.848 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:09.848 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:09.848 -R, --huge-unlink unlink huge files after initialization 00:08:09.848 -n, --mem-channels number of memory channels used for DPDK 00:08:09.848 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:09.848 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:09.848 --no-huge run without using hugepages 00:08:09.848 -i, --shm-id shared memory ID (optional) 00:08:09.848 -g, --single-file-segments force creating just one hugetlbfs file 00:08:09.848 00:08:09.848 PCI options: 00:08:09.848 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:09.848 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:09.848 -u, --no-pci disable PCI access 00:08:09.848 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:09.848 00:08:09.848 Log options: 00:08:09.848 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:08:09.848 --silence-noticelog disable notice level logging to stderr 00:08:09.848 00:08:09.848 Trace options: 00:08:09.848 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:09.848 setting 0 to disable trace (default 32768) 00:08:09.848 Tracepoints vary in size and can use more than one trace entry. 
00:08:09.848 -e, --tpoint-group [:] 00:08:09.848 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:08:09.848 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:09.848 a tracepoint group. First tpoint inside a group can be enabled by 00:08:09.848 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:09.848 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:08:09.848 in /include/spdk_internal/trace_defs.h 00:08:09.848 00:08:09.848 Other options: 00:08:09.848 -h, --help show this usage 00:08:09.848 -v, --version print SPDK version 00:08:09.848 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:09.848 --env-context Opaque context for use of the env implementation 00:08:09.848 [2024-07-25 00:34:32.390780] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1192:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 00:08:09.848 [2024-07-25 00:34:32.391283] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1373:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:08:09.848 app_ut [options] 00:08:09.848 00:08:09.848 CPU options: 00:08:09.848 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:09.848 (like [0,1,10]) 00:08:09.848 --lcores lcore to CPU mapping list. The list is in the format: 00:08:09.848 [<,lcores[@CPUs]>...] 00:08:09.848 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:09.848 Within the group, '-' is used for range separator, 00:08:09.848 ',' is used for single number separator. 00:08:09.848 '( )' can be omitted for single element group, 00:08:09.848 '@' can be omitted if cpus and lcores have the same value 00:08:09.848 --disable-cpumask-locks Disable CPU core lock files. 00:08:09.848 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:09.848 pollers in the app support interrupt mode) 00:08:09.848 -p, --main-core main (primary) core for DPDK 00:08:09.848 00:08:09.848 Configuration options: 00:08:09.848 -c, --config, --json JSON config file 00:08:09.848 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:09.848 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:09.848 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:09.848 --rpcs-allowed comma-separated list of permitted RPCS 00:08:09.848 --json-ignore-init-errors don't exit on invalid config entry 00:08:09.848 00:08:09.848 Memory options: 00:08:09.848 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:09.848 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:09.848 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:09.848 -R, --huge-unlink unlink huge files after initialization 00:08:09.848 -n, --mem-channels number of memory channels used for DPDK 00:08:09.848 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:09.848 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:09.848 --no-huge run without using hugepages 00:08:09.848 -i, --shm-id shared memory ID (optional) 00:08:09.848 -g, --single-file-segments force creating just one hugetlbfs file 00:08:09.848 00:08:09.848 PCI options: 00:08:09.848 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:09.848 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:09.848 -u, --no-pci disable PCI access 00:08:09.848 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:09.848 00:08:09.848 Log options: 00:08:09.848 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:08:09.848 --silence-noticelog disable notice level logging to stderr 00:08:09.848 00:08:09.848 Trace options: 00:08:09.848 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:09.848 setting 0 to disable trace (default 32768) 00:08:09.849 Tracepoints vary in size and can use more than one trace entry. 00:08:09.849 -e, --tpoint-group [:] 00:08:09.849 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:08:09.849 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:09.849 a tracepoint group. First tpoint inside a group can be enabled by 00:08:09.849 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:09.849 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:08:09.849 in /include/spdk_internal/trace_defs.h 00:08:09.849 00:08:09.849 Other options: 00:08:09.849 -h, --help show this usage 00:08:09.849 -v, --version print SPDK version 00:08:09.849 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:09.849 --env-context Opaque context for use of the env implementation 00:08:09.849 [2024-07-25 00:34:32.391709] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1278:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:08:09.849 passed 00:08:09.849 00:08:09.849 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.849 suites 1 1 n/a 0 0 00:08:09.849 tests 1 1 1 0 0 00:08:09.849 asserts 8 8 8 0 n/a 00:08:09.849 00:08:09.849 Elapsed time = 0.002 seconds 00:08:09.849 00:34:32 unittest.unittest_event -- unit/unittest.sh@52 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:08:09.849 00:08:09.849 00:08:09.849 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.849 http://cunit.sourceforge.net/ 00:08:09.849 00:08:09.849 00:08:09.849 Suite: app_suite 00:08:09.849 Test: test_create_reactor ...passed 00:08:09.849 Test: test_init_reactors ...passed 00:08:09.849 Test: test_event_call ...passed 00:08:09.849 Test: test_schedule_thread ...passed 00:08:09.849 Test: test_reschedule_thread ...passed 00:08:09.849 Test: test_bind_thread ...passed 00:08:09.849 Test: test_for_each_reactor ...passed 00:08:09.849 Test: test_reactor_stats ...passed 00:08:09.849 Test: test_scheduler ...passed 00:08:09.849 Test: test_governor ...passed 00:08:09.849 00:08:09.849 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.849 suites 1 1 n/a 0 0 00:08:09.849 tests 10 10 10 0 0 00:08:09.849 asserts 344 344 344 0 n/a 00:08:09.849 00:08:09.849 Elapsed time = 0.015 seconds 00:08:09.849 00:08:09.849 real 0m0.104s 00:08:09.849 user 0m0.066s 00:08:09.849 sys 0m0.039s 00:08:09.849 00:34:32 unittest.unittest_event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.849 00:34:32 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x 00:08:09.849 ************************************ 00:08:09.849 END TEST unittest_event 00:08:09.849 ************************************ 00:08:10.107 00:34:32 unittest -- unit/unittest.sh@235 -- # uname -s 00:08:10.107 00:34:32 unittest -- unit/unittest.sh@235 -- # '[' Linux = Linux ']' 00:08:10.107 00:34:32 unittest -- unit/unittest.sh@236 -- # run_test unittest_ftl unittest_ftl 00:08:10.107 00:34:32 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:10.107 00:34:32 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.107 00:34:32 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:10.107 ************************************ 00:08:10.107 START TEST unittest_ftl 00:08:10.107 ************************************ 00:08:10.107 00:34:32 unittest.unittest_ftl -- common/autotest_common.sh@1123 -- # unittest_ftl 00:08:10.107 00:34:32 unittest.unittest_ftl -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:08:10.107 00:08:10.107 00:08:10.107 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.107 http://cunit.sourceforge.net/ 00:08:10.107 00:08:10.107 00:08:10.107 Suite: ftl_band_suite 00:08:10.107 Test: test_band_block_offset_from_addr_base ...passed 00:08:10.107 Test: test_band_block_offset_from_addr_offset ...passed 00:08:10.107 Test: test_band_addr_from_block_offset ...passed 00:08:10.107 Test: test_band_set_addr 
...passed 00:08:10.107 Test: test_invalidate_addr ...passed 00:08:10.107 Test: test_next_xfer_addr ...passed 00:08:10.107 00:08:10.107 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.107 suites 1 1 n/a 0 0 00:08:10.107 tests 6 6 6 0 0 00:08:10.107 asserts 30356 30356 30356 0 n/a 00:08:10.107 00:08:10.107 Elapsed time = 0.186 seconds 00:08:10.365 00:34:32 unittest.unittest_ftl -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:08:10.365 00:08:10.365 00:08:10.365 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.365 http://cunit.sourceforge.net/ 00:08:10.365 00:08:10.365 00:08:10.365 Suite: ftl_bitmap 00:08:10.365 Test: test_ftl_bitmap_create ...[2024-07-25 00:34:32.856159] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:08:10.365 passed 00:08:10.365 Test: test_ftl_bitmap_get ...passed 00:08:10.365 Test: test_ftl_bitmap_set ...[2024-07-25 00:34:32.856463] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:08:10.365 passed 00:08:10.365 Test: test_ftl_bitmap_clear ...passed 00:08:10.365 Test: test_ftl_bitmap_find_first_set ...passed 00:08:10.365 Test: test_ftl_bitmap_find_first_clear ...passed 00:08:10.365 Test: test_ftl_bitmap_count_set ...passed 00:08:10.365 00:08:10.365 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.365 suites 1 1 n/a 0 0 00:08:10.365 tests 7 7 7 0 0 00:08:10.365 asserts 137 137 137 0 n/a 00:08:10.365 00:08:10.365 Elapsed time = 0.001 seconds 00:08:10.365 00:34:32 unittest.unittest_ftl -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:08:10.365 00:08:10.365 00:08:10.365 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.365 http://cunit.sourceforge.net/ 00:08:10.365 00:08:10.365 00:08:10.365 Suite: ftl_io_suite 00:08:10.365 Test: test_completion ...passed 00:08:10.365 Test: test_multiple_ios ...passed 00:08:10.365 00:08:10.365 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.365 suites 1 1 n/a 0 0 00:08:10.365 tests 2 2 2 0 0 00:08:10.365 asserts 47 47 47 0 n/a 00:08:10.365 00:08:10.365 Elapsed time = 0.003 seconds 00:08:10.365 00:34:32 unittest.unittest_ftl -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:08:10.365 00:08:10.365 00:08:10.365 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.365 http://cunit.sourceforge.net/ 00:08:10.365 00:08:10.365 00:08:10.365 Suite: ftl_mngt 00:08:10.365 Test: test_next_step ...passed 00:08:10.365 Test: test_continue_step ...passed 00:08:10.365 Test: test_get_func_and_step_cntx_alloc ...passed 00:08:10.365 Test: test_fail_step ...passed 00:08:10.365 Test: test_mngt_call_and_call_rollback ...passed 00:08:10.365 Test: test_nested_process_failure ...passed 00:08:10.365 Test: test_call_init_success ...passed 00:08:10.365 Test: test_call_init_failure ...passed 00:08:10.365 00:08:10.365 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.365 suites 1 1 n/a 0 0 00:08:10.365 tests 8 8 8 0 0 00:08:10.365 asserts 196 196 196 0 n/a 00:08:10.365 00:08:10.365 Elapsed time = 0.002 seconds 00:08:10.365 00:34:32 unittest.unittest_ftl -- unit/unittest.sh@60 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:08:10.365 00:08:10.365 00:08:10.365 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.365 
http://cunit.sourceforge.net/ 00:08:10.365 00:08:10.365 00:08:10.365 Suite: ftl_mempool 00:08:10.365 Test: test_ftl_mempool_create ...passed 00:08:10.365 Test: test_ftl_mempool_get_put ...passed 00:08:10.365 00:08:10.365 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.365 suites 1 1 n/a 0 0 00:08:10.365 tests 2 2 2 0 0 00:08:10.365 asserts 36 36 36 0 n/a 00:08:10.365 00:08:10.365 Elapsed time = 0.000 seconds 00:08:10.365 00:34:32 unittest.unittest_ftl -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:08:10.365 00:08:10.365 00:08:10.365 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.365 http://cunit.sourceforge.net/ 00:08:10.365 00:08:10.365 00:08:10.365 Suite: ftl_addr64_suite 00:08:10.365 Test: test_addr_cached ...passed 00:08:10.365 00:08:10.365 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.365 suites 1 1 n/a 0 0 00:08:10.365 tests 1 1 1 0 0 00:08:10.365 asserts 1536 1536 1536 0 n/a 00:08:10.365 00:08:10.365 Elapsed time = 0.000 seconds 00:08:10.624 00:34:33 unittest.unittest_ftl -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:08:10.624 00:08:10.624 00:08:10.624 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.624 http://cunit.sourceforge.net/ 00:08:10.624 00:08:10.624 00:08:10.624 Suite: ftl_sb 00:08:10.624 Test: test_sb_crc_v2 ...passed 00:08:10.624 Test: test_sb_crc_v3 ...passed 00:08:10.624 Test: test_sb_v3_md_layout ...[2024-07-25 00:34:33.045746] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:08:10.624 [2024-07-25 00:34:33.046711] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:08:10.624 [2024-07-25 00:34:33.046786] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:08:10.624 [2024-07-25 00:34:33.046840] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:08:10.624 [2024-07-25 00:34:33.047016] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:08:10.624 [2024-07-25 00:34:33.047308] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:08:10.624 [2024-07-25 00:34:33.047428] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:08:10.624 [2024-07-25 00:34:33.047741] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:08:10.624 [2024-07-25 00:34:33.047854] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:08:10.624 [2024-07-25 00:34:33.048166] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:08:10.624 passed 00:08:10.624 Test: test_sb_v5_md_layout ...[2024-07-25 00:34:33.048231] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 
105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:08:10.624 passed 00:08:10.624 00:08:10.624 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.624 suites 1 1 n/a 0 0 00:08:10.624 tests 4 4 4 0 0 00:08:10.624 asserts 160 160 160 0 n/a 00:08:10.624 00:08:10.624 Elapsed time = 0.004 seconds 00:08:10.624 00:34:33 unittest.unittest_ftl -- unit/unittest.sh@63 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:08:10.624 00:08:10.624 00:08:10.624 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.624 http://cunit.sourceforge.net/ 00:08:10.624 00:08:10.624 00:08:10.624 Suite: ftl_layout_upgrade 00:08:10.624 Test: test_l2p_upgrade ...passed 00:08:10.624 00:08:10.624 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.624 suites 1 1 n/a 0 0 00:08:10.624 tests 1 1 1 0 0 00:08:10.624 asserts 152 152 152 0 n/a 00:08:10.624 00:08:10.624 Elapsed time = 0.001 seconds 00:08:10.624 00:34:33 unittest.unittest_ftl -- unit/unittest.sh@64 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut 00:08:10.624 00:08:10.624 00:08:10.624 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.624 http://cunit.sourceforge.net/ 00:08:10.624 00:08:10.624 00:08:10.624 Suite: ftl_p2l_suite 00:08:10.624 Test: test_p2l_num_pages ...passed 00:08:11.191 Test: test_ckpt_issue ...passed 00:08:11.756 Test: test_persist_band_p2l ...passed 00:08:12.687 Test: test_clean_restore_p2l ...passed 00:08:13.621 Test: test_dirty_restore_p2l ...passed 00:08:13.621 00:08:13.621 Run Summary: Type Total Ran Passed Failed Inactive 00:08:13.621 suites 1 1 n/a 0 0 00:08:13.621 tests 5 5 5 0 0 00:08:13.621 asserts 10020 10020 10020 0 n/a 00:08:13.621 00:08:13.621 Elapsed time = 2.953 seconds 00:08:13.621 00:08:13.621 real 0m3.573s 00:08:13.621 user 0m1.199s 00:08:13.621 sys 0m2.378s 00:08:13.621 00:34:36 unittest.unittest_ftl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.621 00:34:36 unittest.unittest_ftl -- common/autotest_common.sh@10 -- # set +x 00:08:13.621 ************************************ 00:08:13.621 END TEST unittest_ftl 00:08:13.621 ************************************ 00:08:13.621 00:34:36 unittest -- unit/unittest.sh@239 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:08:13.621 00:34:36 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:13.621 00:34:36 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.621 00:34:36 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:13.621 ************************************ 00:08:13.621 START TEST unittest_accel 00:08:13.621 ************************************ 00:08:13.621 00:34:36 unittest.unittest_accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:08:13.621 00:08:13.621 00:08:13.621 CUnit - A unit testing framework for C - Version 2.1-3 00:08:13.621 http://cunit.sourceforge.net/ 00:08:13.621 00:08:13.621 00:08:13.621 Suite: accel_sequence 00:08:13.621 Test: test_sequence_fill_copy ...passed 00:08:13.621 Test: test_sequence_abort ...passed 00:08:13.621 Test: test_sequence_append_error ...passed 00:08:13.621 Test: test_sequence_completion_error ...[2024-07-25 00:34:36.216522] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1959:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7fbd9e3427c0 00:08:13.621 [2024-07-25 00:34:36.216928] 
/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1959:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7fbd9e3427c0 00:08:13.621 [2024-07-25 00:34:36.217076] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1869:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7fbd9e3427c0 00:08:13.621 [2024-07-25 00:34:36.217159] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1869:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7fbd9e3427c0 00:08:13.621 passed 00:08:13.621 Test: test_sequence_decompress ...passed 00:08:13.621 Test: test_sequence_reverse ...passed 00:08:13.621 Test: test_sequence_copy_elision ...passed 00:08:13.621 Test: test_sequence_accel_buffers ...passed 00:08:13.621 Test: test_sequence_memory_domain ...[2024-07-25 00:34:36.230224] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1761:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:08:13.621 [2024-07-25 00:34:36.230449] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1800:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:08:13.621 passed 00:08:13.621 Test: test_sequence_module_memory_domain ...passed 00:08:13.621 Test: test_sequence_crypto ...passed 00:08:13.621 Test: test_sequence_driver ...[2024-07-25 00:34:36.238177] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1908:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7fbd9d42f7c0 using driver: ut 00:08:13.621 [2024-07-25 00:34:36.238331] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1972:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7fbd9d42f7c0 through driver: ut 00:08:13.621 passed 00:08:13.621 Test: test_sequence_same_iovs ...passed 00:08:13.621 Test: test_sequence_crc32 ...passed 00:08:13.621 Suite: accel 00:08:13.621 Test: test_spdk_accel_task_complete ...passed 00:08:13.621 Test: test_get_task ...passed 00:08:13.621 Test: test_spdk_accel_submit_copy ...passed 00:08:13.621 Test: test_spdk_accel_submit_dualcast ...[2024-07-25 00:34:36.244128] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 425:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:08:13.621 [2024-07-25 00:34:36.244203] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 425:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:08:13.621 passed 00:08:13.621 Test: test_spdk_accel_submit_compare ...passed 00:08:13.621 Test: test_spdk_accel_submit_fill ...passed 00:08:13.621 Test: test_spdk_accel_submit_crc32c ...passed 00:08:13.621 Test: test_spdk_accel_submit_crc32cv ...passed 00:08:13.621 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:08:13.621 Test: test_spdk_accel_submit_xor ...passed 00:08:13.621 Test: test_spdk_accel_module_find_by_name ...passed 00:08:13.621 Test: test_spdk_accel_module_register ...passed 00:08:13.621 00:08:13.621 Run Summary: Type Total Ran Passed Failed Inactive 00:08:13.621 suites 2 2 n/a 0 0 00:08:13.621 tests 26 26 26 0 0 00:08:13.621 asserts 830 830 830 0 n/a 00:08:13.621 00:08:13.621 Elapsed time = 0.041 seconds 00:08:13.621 00:08:13.621 real 0m0.089s 00:08:13.621 user 0m0.041s 00:08:13.621 sys 0m0.049s 00:08:13.621 00:34:36 unittest.unittest_accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.621 00:34:36 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x 00:08:13.621 ************************************ 00:08:13.621 END TEST unittest_accel 00:08:13.621 
************************************ 00:08:13.880 00:34:36 unittest -- unit/unittest.sh@240 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:08:13.880 00:34:36 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:13.880 00:34:36 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.880 00:34:36 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:13.880 ************************************ 00:08:13.880 START TEST unittest_ioat 00:08:13.880 ************************************ 00:08:13.880 00:34:36 unittest.unittest_ioat -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:08:13.880 00:08:13.880 00:08:13.880 CUnit - A unit testing framework for C - Version 2.1-3 00:08:13.880 http://cunit.sourceforge.net/ 00:08:13.880 00:08:13.880 00:08:13.880 Suite: ioat 00:08:13.880 Test: ioat_state_check ...passed 00:08:13.880 00:08:13.880 Run Summary: Type Total Ran Passed Failed Inactive 00:08:13.880 suites 1 1 n/a 0 0 00:08:13.880 tests 1 1 1 0 0 00:08:13.880 asserts 32 32 32 0 n/a 00:08:13.880 00:08:13.880 Elapsed time = 0.000 seconds 00:08:13.880 00:08:13.880 real 0m0.039s 00:08:13.880 user 0m0.028s 00:08:13.880 sys 0m0.012s 00:08:13.880 00:34:36 unittest.unittest_ioat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.880 00:34:36 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set +x 00:08:13.880 ************************************ 00:08:13.880 END TEST unittest_ioat 00:08:13.880 ************************************ 00:08:13.880 00:34:36 unittest -- unit/unittest.sh@241 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:13.880 00:34:36 unittest -- unit/unittest.sh@242 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:08:13.880 00:34:36 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:13.880 00:34:36 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.880 00:34:36 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:13.880 ************************************ 00:08:13.880 START TEST unittest_idxd_user 00:08:13.880 ************************************ 00:08:13.880 00:34:36 unittest.unittest_idxd_user -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:08:13.880 00:08:13.880 00:08:13.880 CUnit - A unit testing framework for C - Version 2.1-3 00:08:13.880 http://cunit.sourceforge.net/ 00:08:13.880 00:08:13.880 00:08:13.880 Suite: idxd_user 00:08:13.880 Test: test_idxd_wait_cmd ...[2024-07-25 00:34:36.461394] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:08:13.880 [2024-07-25 00:34:36.461723] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:08:13.880 passed 00:08:13.880 Test: test_idxd_reset_dev ...[2024-07-25 00:34:36.461892] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:08:13.880 passed 00:08:13.880 Test: test_idxd_group_config ...passed 00:08:13.880 Test: test_idxd_wq_config ...passed 00:08:13.880 00:08:13.880 [2024-07-25 00:34:36.461958] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:08:13.880 Run Summary: Type Total Ran Passed Failed Inactive 00:08:13.880 
suites 1 1 n/a 0 0 00:08:13.880 tests 4 4 4 0 0 00:08:13.880 asserts 20 20 20 0 n/a 00:08:13.880 00:08:13.880 Elapsed time = 0.001 seconds 00:08:13.880 00:08:13.880 real 0m0.043s 00:08:13.880 user 0m0.024s 00:08:13.880 sys 0m0.020s 00:08:13.880 00:34:36 unittest.unittest_idxd_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.880 00:34:36 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x 00:08:13.880 ************************************ 00:08:13.880 END TEST unittest_idxd_user 00:08:13.880 ************************************ 00:08:14.138 00:34:36 unittest -- unit/unittest.sh@244 -- # run_test unittest_iscsi unittest_iscsi 00:08:14.138 00:34:36 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:14.138 00:34:36 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.138 00:34:36 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:14.138 ************************************ 00:08:14.138 START TEST unittest_iscsi 00:08:14.138 ************************************ 00:08:14.138 00:34:36 unittest.unittest_iscsi -- common/autotest_common.sh@1123 -- # unittest_iscsi 00:08:14.138 00:34:36 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:08:14.138 00:08:14.138 00:08:14.138 CUnit - A unit testing framework for C - Version 2.1-3 00:08:14.138 http://cunit.sourceforge.net/ 00:08:14.138 00:08:14.138 00:08:14.138 Suite: conn_suite 00:08:14.138 Test: read_task_split_in_order_case ...passed 00:08:14.138 Test: read_task_split_reverse_order_case ...passed 00:08:14.138 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:08:14.138 Test: process_non_read_task_completion_test ...passed 00:08:14.138 Test: free_tasks_on_connection ...passed 00:08:14.138 Test: free_tasks_with_queued_datain ...passed 00:08:14.138 Test: abort_queued_datain_task_test ...passed 00:08:14.138 Test: abort_queued_datain_tasks_test ...passed 00:08:14.138 00:08:14.138 Run Summary: Type Total Ran Passed Failed Inactive 00:08:14.138 suites 1 1 n/a 0 0 00:08:14.138 tests 8 8 8 0 0 00:08:14.138 asserts 230 230 230 0 n/a 00:08:14.138 00:08:14.138 Elapsed time = 0.000 seconds 00:08:14.138 00:34:36 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:08:14.138 00:08:14.138 00:08:14.139 CUnit - A unit testing framework for C - Version 2.1-3 00:08:14.139 http://cunit.sourceforge.net/ 00:08:14.139 00:08:14.139 00:08:14.139 Suite: iscsi_suite 00:08:14.139 Test: param_negotiation_test ...passed 00:08:14.139 Test: list_negotiation_test ...passed 00:08:14.139 Test: parse_valid_test ...passed 00:08:14.139 Test: parse_invalid_test ...[2024-07-25 00:34:36.613072] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:08:14.139 [2024-07-25 00:34:36.613486] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:08:14.139 [2024-07-25 00:34:36.613586] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key 00:08:14.139 [2024-07-25 00:34:36.613711] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:08:14.139 [2024-07-25 00:34:36.613945] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:08:14.139 [2024-07-25 00:34:36.614058] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is 
bigger than 63 00:08:14.139 [2024-07-25 00:34:36.614306] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:08:14.139 passed 00:08:14.139 00:08:14.139 Run Summary: Type Total Ran Passed Failed Inactive 00:08:14.139 suites 1 1 n/a 0 0 00:08:14.139 tests 4 4 4 0 0 00:08:14.139 asserts 161 161 161 0 n/a 00:08:14.139 00:08:14.139 Elapsed time = 0.006 seconds 00:08:14.139 00:34:36 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:08:14.139 00:08:14.139 00:08:14.139 CUnit - A unit testing framework for C - Version 2.1-3 00:08:14.139 http://cunit.sourceforge.net/ 00:08:14.139 00:08:14.139 00:08:14.139 Suite: iscsi_target_node_suite 00:08:14.139 Test: add_lun_test_cases ...[2024-07-25 00:34:36.655245] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1252:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:08:14.139 [2024-07-25 00:34:36.655559] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:08:14.139 [2024-07-25 00:34:36.655656] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:08:14.139 [2024-07-25 00:34:36.655699] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:08:14.139 passed 00:08:14.139 Test: allow_any_allowed ...passed 00:08:14.139 Test: allow_ipv6_allowed ...passed 00:08:14.139 Test: allow_ipv6_denied ...passed 00:08:14.139 Test: allow_ipv6_invalid ...passed 00:08:14.139 Test: allow_ipv4_allowed ...passed 00:08:14.139 Test: allow_ipv4_denied ...passed 00:08:14.139 Test: allow_ipv4_invalid ...passed 00:08:14.139 Test: node_access_allowed ...[2024-07-25 00:34:36.655739] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:08:14.139 passed 00:08:14.139 Test: node_access_denied_by_empty_netmask ...passed 00:08:14.139 Test: node_access_multi_initiator_groups_cases ...passed 00:08:14.139 Test: allow_iscsi_name_multi_maps_case ...passed 00:08:14.139 Test: chap_param_test_cases ...[2024-07-25 00:34:36.656182] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:08:14.139 [2024-07-25 00:34:36.656229] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:08:14.139 [2024-07-25 00:34:36.656301] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:08:14.139 passed 00:08:14.139 00:08:14.139 Run Summary: Type Total Ran Passed Failed Inactive 00:08:14.139 suites 1 1 n/a 0 0 00:08:14.139 tests 13 13 13 0 0 00:08:14.139 asserts 50 50 50 0 n/a 00:08:14.139 00:08:14.139 Elapsed time = 0.001 seconds 00:08:14.139 [2024-07-25 00:34:36.656342] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:08:14.139 [2024-07-25 00:34:36.656392] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:08:14.139 00:34:36 unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:08:14.139 00:08:14.139 00:08:14.139 CUnit - A unit testing 
framework for C - Version 2.1-3 00:08:14.139 http://cunit.sourceforge.net/ 00:08:14.139 00:08:14.139 00:08:14.139 Suite: iscsi_suite 00:08:14.139 Test: op_login_check_target_test ...[2024-07-25 00:34:36.700108] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1439:iscsi_op_login_check_target: *ERROR*: access denied 00:08:14.139 passed 00:08:14.139 Test: op_login_session_normal_test ...[2024-07-25 00:34:36.700582] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:14.139 [2024-07-25 00:34:36.700672] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:14.139 [2024-07-25 00:34:36.700760] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:14.139 [2024-07-25 00:34:36.700844] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:08:14.139 [2024-07-25 00:34:36.701002] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1472:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:08:14.139 [2024-07-25 00:34:36.701175] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:08:14.139 [2024-07-25 00:34:36.701281] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1472:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:08:14.139 passed 00:08:14.139 Test: maxburstlength_test ...[2024-07-25 00:34:36.701660] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:08:14.139 [2024-07-25 00:34:36.701771] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:08:14.139 passed 00:08:14.139 Test: underflow_for_read_transfer_test ...passed 00:08:14.139 Test: underflow_for_zero_read_transfer_test ...passed 00:08:14.139 Test: underflow_for_request_sense_test ...passed 00:08:14.139 Test: underflow_for_check_condition_test ...passed 00:08:14.139 Test: add_transfer_task_test ...passed 00:08:14.139 Test: get_transfer_task_test ...passed 00:08:14.139 Test: del_transfer_task_test ...passed 00:08:14.139 Test: clear_all_transfer_tasks_test ...passed 00:08:14.139 Test: build_iovs_test ...passed 00:08:14.139 Test: build_iovs_with_md_test ...passed 00:08:14.139 Test: pdu_hdr_op_login_test ...[2024-07-25 00:34:36.703915] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1256:iscsi_op_login_rsp_init: *ERROR*: transit error 00:08:14.139 [2024-07-25 00:34:36.704108] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:08:14.139 [2024-07-25 00:34:36.704249] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1277:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:08:14.139 passed 00:08:14.139 Test: pdu_hdr_op_text_test ...[2024-07-25 00:34:36.704465] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2258:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:08:14.139 [2024-07-25 00:34:36.704605] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2290:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:08:14.139 [2024-07-25 00:34:36.704674] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2303:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:08:14.139 passed 00:08:14.139 Test: pdu_hdr_op_logout_test ...[2024-07-25 00:34:36.704826] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2533:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 00:08:14.139 passed 00:08:14.139 Test: pdu_hdr_op_scsi_test ...[2024-07-25 00:34:36.705089] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:08:14.139 [2024-07-25 00:34:36.705163] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:08:14.139 [2024-07-25 00:34:36.705239] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3382:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:08:14.139 [2024-07-25 00:34:36.705404] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3415:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:08:14.139 [2024-07-25 00:34:36.705558] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3422:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:08:14.139 [2024-07-25 00:34:36.705833] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:08:14.139 passed 00:08:14.139 Test: pdu_hdr_op_task_mgmt_test ...[2024-07-25 00:34:36.705986] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3623:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:08:14.139 [2024-07-25 00:34:36.706126] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3712:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:08:14.139 passed 00:08:14.139 Test: pdu_hdr_op_nopout_test ...[2024-07-25 00:34:36.706477] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3731:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:08:14.139 [2024-07-25 00:34:36.706661] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:08:14.139 [2024-07-25 00:34:36.706742] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:08:14.139 [2024-07-25 00:34:36.706809] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3761:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:08:14.139 passed 00:08:14.139 Test: pdu_hdr_op_data_test ...[2024-07-25 00:34:36.706898] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4204:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:08:14.139 [2024-07-25 00:34:36.707015] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:08:14.139 [2024-07-25 00:34:36.707119] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:08:14.139 [2024-07-25 00:34:36.707224] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4234:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:08:14.139 [2024-07-25 00:34:36.707324] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4240:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:08:14.139 
[2024-07-25 00:34:36.707475] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4251:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:08:14.139 [2024-07-25 00:34:36.707549] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4261:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:08:14.139 passed 00:08:14.139 Test: empty_text_with_cbit_test ...passed 00:08:14.139 Test: pdu_payload_read_test ...[2024-07-25 00:34:36.709919] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4649:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:08:14.139 passed 00:08:14.139 Test: data_out_pdu_sequence_test ...passed 00:08:14.139 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:08:14.139 00:08:14.139 Run Summary: Type Total Ran Passed Failed Inactive 00:08:14.139 suites 1 1 n/a 0 0 00:08:14.139 tests 24 24 24 0 0 00:08:14.139 asserts 150253 150253 150253 0 n/a 00:08:14.139 00:08:14.139 Elapsed time = 0.020 seconds 00:08:14.139 00:34:36 unittest.unittest_iscsi -- unit/unittest.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:08:14.139 00:08:14.139 00:08:14.139 CUnit - A unit testing framework for C - Version 2.1-3 00:08:14.140 http://cunit.sourceforge.net/ 00:08:14.140 00:08:14.140 00:08:14.140 Suite: init_grp_suite 00:08:14.140 Test: create_initiator_group_success_case ...passed 00:08:14.140 Test: find_initiator_group_success_case ...passed 00:08:14.140 Test: register_initiator_group_twice_case ...passed 00:08:14.140 Test: add_initiator_name_success_case ...passed 00:08:14.140 Test: add_initiator_name_fail_case ...[2024-07-25 00:34:36.757326] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:08:14.140 passed 00:08:14.140 Test: delete_all_initiator_names_success_case ...passed 00:08:14.140 Test: add_netmask_success_case ...passed 00:08:14.140 Test: add_netmask_fail_case ...[2024-07-25 00:34:36.757800] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:08:14.140 passed 00:08:14.140 Test: delete_all_netmasks_success_case ...passed 00:08:14.140 Test: initiator_name_overwrite_all_to_any_case ...passed 00:08:14.140 Test: netmask_overwrite_all_to_any_case ...passed 00:08:14.140 Test: add_delete_initiator_names_case ...passed 00:08:14.140 Test: add_duplicated_initiator_names_case ...passed 00:08:14.140 Test: delete_nonexisting_initiator_names_case ...passed 00:08:14.140 Test: add_delete_netmasks_case ...passed 00:08:14.140 Test: add_duplicated_netmasks_case ...passed 00:08:14.140 Test: delete_nonexisting_netmasks_case ...passed 00:08:14.140 00:08:14.140 Run Summary: Type Total Ran Passed Failed Inactive 00:08:14.140 suites 1 1 n/a 0 0 00:08:14.140 tests 17 17 17 0 0 00:08:14.140 asserts 108 108 108 0 n/a 00:08:14.140 00:08:14.140 Elapsed time = 0.001 seconds 00:08:14.140 00:34:36 unittest.unittest_iscsi -- unit/unittest.sh@73 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:08:14.398 00:08:14.398 00:08:14.398 CUnit - A unit testing framework for C - Version 2.1-3 00:08:14.398 http://cunit.sourceforge.net/ 00:08:14.398 00:08:14.398 00:08:14.398 Suite: portal_grp_suite 00:08:14.398 Test: portal_create_ipv4_normal_case ...passed 00:08:14.398 Test: portal_create_ipv6_normal_case ...passed 00:08:14.398 Test: portal_create_ipv4_wildcard_case ...passed 00:08:14.398 Test: portal_create_ipv6_wildcard_case ...passed 00:08:14.398 Test: 
portal_create_twice_case ...[2024-07-25 00:34:36.804847] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:08:14.398 passed 00:08:14.398 Test: portal_grp_register_unregister_case ...passed 00:08:14.398 Test: portal_grp_register_twice_case ...passed 00:08:14.398 Test: portal_grp_add_delete_case ...passed 00:08:14.398 Test: portal_grp_add_delete_twice_case ...passed 00:08:14.398 00:08:14.398 Run Summary: Type Total Ran Passed Failed Inactive 00:08:14.398 suites 1 1 n/a 0 0 00:08:14.398 tests 9 9 9 0 0 00:08:14.398 asserts 44 44 44 0 n/a 00:08:14.398 00:08:14.398 Elapsed time = 0.004 seconds 00:08:14.398 00:08:14.398 real 0m0.279s 00:08:14.398 user 0m0.155s 00:08:14.398 sys 0m0.126s 00:08:14.398 00:34:36 unittest.unittest_iscsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.398 00:34:36 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x 00:08:14.398 ************************************ 00:08:14.398 END TEST unittest_iscsi 00:08:14.398 ************************************ 00:08:14.398 00:34:36 unittest -- unit/unittest.sh@245 -- # run_test unittest_json unittest_json 00:08:14.398 00:34:36 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:14.398 00:34:36 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.398 00:34:36 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:14.398 ************************************ 00:08:14.398 START TEST unittest_json 00:08:14.398 ************************************ 00:08:14.398 00:34:36 unittest.unittest_json -- common/autotest_common.sh@1123 -- # unittest_json 00:08:14.398 00:34:36 unittest.unittest_json -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:08:14.398 00:08:14.398 00:08:14.398 CUnit - A unit testing framework for C - Version 2.1-3 00:08:14.398 http://cunit.sourceforge.net/ 00:08:14.398 00:08:14.398 00:08:14.398 Suite: json 00:08:14.398 Test: test_parse_literal ...passed 00:08:14.398 Test: test_parse_string_simple ...passed 00:08:14.398 Test: test_parse_string_control_chars ...passed 00:08:14.398 Test: test_parse_string_utf8 ...passed 00:08:14.398 Test: test_parse_string_escapes_twochar ...passed 00:08:14.398 Test: test_parse_string_escapes_unicode ...passed 00:08:14.398 Test: test_parse_number ...passed 00:08:14.398 Test: test_parse_array ...passed 00:08:14.398 Test: test_parse_object ...passed 00:08:14.398 Test: test_parse_nesting ...passed 00:08:14.398 Test: test_parse_comment ...passed 00:08:14.398 00:08:14.398 Run Summary: Type Total Ran Passed Failed Inactive 00:08:14.398 suites 1 1 n/a 0 0 00:08:14.398 tests 11 11 11 0 0 00:08:14.398 asserts 1516 1516 1516 0 n/a 00:08:14.398 00:08:14.398 Elapsed time = 0.002 seconds 00:08:14.398 00:34:36 unittest.unittest_json -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:08:14.398 00:08:14.398 00:08:14.398 CUnit - A unit testing framework for C - Version 2.1-3 00:08:14.399 http://cunit.sourceforge.net/ 00:08:14.399 00:08:14.399 00:08:14.399 Suite: json 00:08:14.399 Test: test_strequal ...passed 00:08:14.399 Test: test_num_to_uint16 ...passed 00:08:14.399 Test: test_num_to_int32 ...passed 00:08:14.399 Test: test_num_to_uint64 ...passed 00:08:14.399 Test: test_decode_object ...passed 00:08:14.399 Test: test_decode_array ...passed 00:08:14.399 Test: test_decode_bool ...passed 00:08:14.399 Test: test_decode_uint16 ...passed 00:08:14.399 
Test: test_decode_int32 ...passed 00:08:14.399 Test: test_decode_uint32 ...passed 00:08:14.399 Test: test_decode_uint64 ...passed 00:08:14.399 Test: test_decode_string ...passed 00:08:14.399 Test: test_decode_uuid ...passed 00:08:14.399 Test: test_find ...passed 00:08:14.399 Test: test_find_array ...passed 00:08:14.399 Test: test_iterating ...passed 00:08:14.399 Test: test_free_object ...passed 00:08:14.399 00:08:14.399 Run Summary: Type Total Ran Passed Failed Inactive 00:08:14.399 suites 1 1 n/a 0 0 00:08:14.399 tests 17 17 17 0 0 00:08:14.399 asserts 236 236 236 0 n/a 00:08:14.399 00:08:14.399 Elapsed time = 0.001 seconds 00:08:14.399 00:34:36 unittest.unittest_json -- unit/unittest.sh@79 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:08:14.399 00:08:14.399 00:08:14.399 CUnit - A unit testing framework for C - Version 2.1-3 00:08:14.399 http://cunit.sourceforge.net/ 00:08:14.399 00:08:14.399 00:08:14.399 Suite: json 00:08:14.399 Test: test_write_literal ...passed 00:08:14.399 Test: test_write_string_simple ...passed 00:08:14.399 Test: test_write_string_escapes ...passed 00:08:14.399 Test: test_write_string_utf16le ...passed 00:08:14.399 Test: test_write_number_int32 ...passed 00:08:14.399 Test: test_write_number_uint32 ...passed 00:08:14.399 Test: test_write_number_uint128 ...passed 00:08:14.399 Test: test_write_string_number_uint128 ...passed 00:08:14.399 Test: test_write_number_int64 ...passed 00:08:14.399 Test: test_write_number_uint64 ...passed 00:08:14.399 Test: test_write_number_double ...passed 00:08:14.399 Test: test_write_uuid ...passed 00:08:14.399 Test: test_write_array ...passed 00:08:14.399 Test: test_write_object ...passed 00:08:14.399 Test: test_write_nesting ...passed 00:08:14.399 Test: test_write_val ...passed 00:08:14.399 00:08:14.399 Run Summary: Type Total Ran Passed Failed Inactive 00:08:14.399 suites 1 1 n/a 0 0 00:08:14.399 tests 16 16 16 0 0 00:08:14.399 asserts 918 918 918 0 n/a 00:08:14.399 00:08:14.399 Elapsed time = 0.007 seconds 00:08:14.399 00:34:37 unittest.unittest_json -- unit/unittest.sh@80 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:08:14.399 00:08:14.399 00:08:14.399 CUnit - A unit testing framework for C - Version 2.1-3 00:08:14.399 http://cunit.sourceforge.net/ 00:08:14.399 00:08:14.399 00:08:14.399 Suite: jsonrpc 00:08:14.399 Test: test_parse_request ...passed 00:08:14.399 Test: test_parse_request_streaming ...passed 00:08:14.399 00:08:14.399 Run Summary: Type Total Ran Passed Failed Inactive 00:08:14.399 suites 1 1 n/a 0 0 00:08:14.399 tests 2 2 2 0 0 00:08:14.399 asserts 289 289 289 0 n/a 00:08:14.399 00:08:14.399 Elapsed time = 0.004 seconds 00:08:14.657 00:08:14.657 real 0m0.175s 00:08:14.657 user 0m0.082s 00:08:14.657 sys 0m0.094s 00:08:14.657 00:34:37 unittest.unittest_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.657 00:34:37 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x 00:08:14.657 ************************************ 00:08:14.657 END TEST unittest_json 00:08:14.657 ************************************ 00:08:14.657 00:34:37 unittest -- unit/unittest.sh@246 -- # run_test unittest_rpc unittest_rpc 00:08:14.657 00:34:37 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:14.657 00:34:37 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.657 00:34:37 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:14.657 ************************************ 00:08:14.657 START TEST 
unittest_rpc 00:08:14.657 ************************************ 00:08:14.657 00:34:37 unittest.unittest_rpc -- common/autotest_common.sh@1123 -- # unittest_rpc 00:08:14.657 00:34:37 unittest.unittest_rpc -- unit/unittest.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:08:14.657 00:08:14.657 00:08:14.657 CUnit - A unit testing framework for C - Version 2.1-3 00:08:14.657 http://cunit.sourceforge.net/ 00:08:14.657 00:08:14.657 00:08:14.657 Suite: rpc 00:08:14.657 Test: test_jsonrpc_handler ...passed 00:08:14.657 Test: test_spdk_rpc_is_method_allowed ...passed 00:08:14.657 Test: test_rpc_get_methods ...passed 00:08:14.657 Test: test_rpc_spdk_get_version ...passed 00:08:14.657 Test: test_spdk_rpc_listen_close ...[2024-07-25 00:34:37.151739] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:08:14.657 passed 00:08:14.657 Test: test_rpc_run_multiple_servers ...passed 00:08:14.657 00:08:14.657 Run Summary: Type Total Ran Passed Failed Inactive 00:08:14.657 suites 1 1 n/a 0 0 00:08:14.657 tests 6 6 6 0 0 00:08:14.657 asserts 23 23 23 0 n/a 00:08:14.657 00:08:14.657 Elapsed time = 0.001 seconds 00:08:14.657 00:08:14.657 real 0m0.041s 00:08:14.657 user 0m0.021s 00:08:14.657 sys 0m0.021s 00:08:14.657 00:34:37 unittest.unittest_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.657 00:34:37 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.657 ************************************ 00:08:14.657 END TEST unittest_rpc 00:08:14.657 ************************************ 00:08:14.657 00:34:37 unittest -- unit/unittest.sh@247 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:08:14.658 00:34:37 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:14.658 00:34:37 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.658 00:34:37 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:14.658 ************************************ 00:08:14.658 START TEST unittest_notify 00:08:14.658 ************************************ 00:08:14.658 00:34:37 unittest.unittest_notify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:08:14.658 00:08:14.658 00:08:14.658 CUnit - A unit testing framework for C - Version 2.1-3 00:08:14.658 http://cunit.sourceforge.net/ 00:08:14.658 00:08:14.658 00:08:14.658 Suite: app_suite 00:08:14.658 Test: notify ...passed 00:08:14.658 00:08:14.658 Run Summary: Type Total Ran Passed Failed Inactive 00:08:14.658 suites 1 1 n/a 0 0 00:08:14.658 tests 1 1 1 0 0 00:08:14.658 asserts 13 13 13 0 n/a 00:08:14.658 00:08:14.658 Elapsed time = 0.000 seconds 00:08:14.658 00:08:14.658 real 0m0.044s 00:08:14.658 user 0m0.024s 00:08:14.658 sys 0m0.020s 00:08:14.658 00:34:37 unittest.unittest_notify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.658 00:34:37 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x 00:08:14.658 ************************************ 00:08:14.658 END TEST unittest_notify 00:08:14.658 ************************************ 00:08:14.916 00:34:37 unittest -- unit/unittest.sh@248 -- # run_test unittest_nvme unittest_nvme 00:08:14.916 00:34:37 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:14.916 00:34:37 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.916 00:34:37 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:14.916 
************************************ 00:08:14.916 START TEST unittest_nvme 00:08:14.916 ************************************ 00:08:14.916 00:34:37 unittest.unittest_nvme -- common/autotest_common.sh@1123 -- # unittest_nvme 00:08:14.916 00:34:37 unittest.unittest_nvme -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:08:14.916 00:08:14.916 00:08:14.916 CUnit - A unit testing framework for C - Version 2.1-3 00:08:14.916 http://cunit.sourceforge.net/ 00:08:14.916 00:08:14.916 00:08:14.916 Suite: nvme 00:08:14.916 Test: test_opc_data_transfer ...passed 00:08:14.917 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:08:14.917 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:08:14.917 Test: test_trid_parse_and_compare ...[2024-07-25 00:34:37.363084] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1199:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:08:14.917 [2024-07-25 00:34:37.363411] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:14.917 [2024-07-25 00:34:37.363520] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1211:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:08:14.917 [2024-07-25 00:34:37.363571] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:14.917 [2024-07-25 00:34:37.363613] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1222:parse_next_key: *ERROR*: Key without value 00:08:14.917 [2024-07-25 00:34:37.363713] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:14.917 passed 00:08:14.917 Test: test_trid_trtype_str ...passed 00:08:14.917 Test: test_trid_adrfam_str ...passed 00:08:14.917 Test: test_nvme_ctrlr_probe ...passed 00:08:14.917 Test: test_spdk_nvme_probe ...[2024-07-25 00:34:37.363956] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:08:14.917 [2024-07-25 00:34:37.364060] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:14.917 [2024-07-25 00:34:37.364101] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:08:14.917 [2024-07-25 00:34:37.364208] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:08:14.917 passed 00:08:14.917 Test: test_spdk_nvme_connect ...[2024-07-25 00:34:37.364257] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:08:14.917 [2024-07-25 00:34:37.364357] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1010:spdk_nvme_connect: *ERROR*: No transport ID specified 00:08:14.917 [2024-07-25 00:34:37.364793] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:14.917 passed 00:08:14.917 Test: test_nvme_ctrlr_probe_internal ...[2024-07-25 00:34:37.364959] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:08:14.917 [2024-07-25 00:34:37.365001] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:08:14.917 passed 00:08:14.917 Test: test_nvme_init_controllers ...[2024-07-25 00:34:37.365099] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 
708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:08:14.917 passed 00:08:14.917 Test: test_nvme_driver_init ...[2024-07-25 00:34:37.365207] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:08:14.917 [2024-07-25 00:34:37.365257] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:14.917 [2024-07-25 00:34:37.474796] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:08:14.917 [2024-07-25 00:34:37.475111] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:08:14.917 passed 00:08:14.917 Test: test_spdk_nvme_detach ...passed 00:08:14.917 Test: test_nvme_completion_poll_cb ...passed 00:08:14.917 Test: test_nvme_user_copy_cmd_complete ...passed 00:08:14.917 Test: test_nvme_allocate_request_null ...passed 00:08:14.917 Test: test_nvme_allocate_request ...passed 00:08:14.917 Test: test_nvme_free_request ...passed 00:08:14.917 Test: test_nvme_allocate_request_user_copy ...passed 00:08:14.917 Test: test_nvme_robust_mutex_init_shared ...passed 00:08:14.917 Test: test_nvme_request_check_timeout ...passed 00:08:14.917 Test: test_nvme_wait_for_completion ...passed 00:08:14.917 Test: test_spdk_nvme_parse_func ...passed 00:08:14.917 Test: test_spdk_nvme_detach_async ...passed 00:08:14.917 Test: test_nvme_parse_addr ...[2024-07-25 00:34:37.476628] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1635:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:08:14.917 passed 00:08:14.917 00:08:14.917 Run Summary: Type Total Ran Passed Failed Inactive 00:08:14.917 suites 1 1 n/a 0 0 00:08:14.917 tests 25 25 25 0 0 00:08:14.917 asserts 326 326 326 0 n/a 00:08:14.917 00:08:14.917 Elapsed time = 0.007 seconds 00:08:14.917 00:34:37 unittest.unittest_nvme -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:08:14.917 00:08:14.917 00:08:14.917 CUnit - A unit testing framework for C - Version 2.1-3 00:08:14.917 http://cunit.sourceforge.net/ 00:08:14.917 00:08:14.917 00:08:14.917 Suite: nvme_ctrlr 00:08:14.917 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-07-25 00:34:37.521891] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:14.917 passed 00:08:14.917 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-07-25 00:34:37.523885] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:14.917 passed 00:08:14.917 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-07-25 00:34:37.525159] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:14.917 passed 00:08:14.917 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-07-25 00:34:37.526388] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:14.917 passed 00:08:14.917 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-07-25 00:34:37.527634] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by 
NVMe spec, use min value 00:08:14.917 [2024-07-25 00:34:37.528792] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-25 00:34:37.529990] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-25 00:34:37.531134] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:08:14.917 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-07-25 00:34:37.533460] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:14.917 [2024-07-25 00:34:37.535689] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-25 00:34:37.536844] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:08:14.917 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-07-25 00:34:37.539171] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:14.917 [2024-07-25 00:34:37.540344] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-25 00:34:37.542656] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:08:14.917 Test: test_nvme_ctrlr_init_delay ...[2024-07-25 00:34:37.545079] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:14.917 passed 00:08:14.917 Test: test_alloc_io_qpair_rr_1 ...[2024-07-25 00:34:37.546382] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:14.917 [2024-07-25 00:34:37.546606] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:08:14.917 [2024-07-25 00:34:37.546814] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:14.917 [2024-07-25 00:34:37.546899] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:14.917 [2024-07-25 00:34:37.546960] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:14.917 passed 00:08:14.917 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:08:14.917 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:08:14.917 Test: test_alloc_io_qpair_wrr_1 ...[2024-07-25 00:34:37.547113] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:14.917 passed 00:08:14.917 Test: test_alloc_io_qpair_wrr_2 ...[2024-07-25 00:34:37.547344] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] 
admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:14.917 [2024-07-25 00:34:37.547504] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:08:14.917 passed 00:08:14.917 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-07-25 00:34:37.547844] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4993:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:08:14.917 [2024-07-25 00:34:37.548025] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5030:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:08:14.917 [2024-07-25 00:34:37.548158] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5070:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:08:14.917 passed 00:08:14.917 Test: test_nvme_ctrlr_fail ...[2024-07-25 00:34:37.548255] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5030:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:08:14.917 [2024-07-25 00:34:37.548347] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:08:14.917 passed 00:08:14.917 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:08:14.917 Test: test_nvme_ctrlr_set_supported_features ...passed 00:08:14.917 Test: test_nvme_ctrlr_set_host_feature ...[2024-07-25 00:34:37.548518] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:14.917 passed 00:08:14.917 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:08:14.917 Test: test_nvme_ctrlr_test_active_ns ...[2024-07-25 00:34:37.549895] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:15.484 passed 00:08:15.484 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:08:15.484 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:08:15.484 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:08:15.484 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-07-25 00:34:37.889086] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:15.484 passed 00:08:15.484 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-07-25 00:34:37.896075] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:15.484 passed 00:08:15.484 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-07-25 00:34:37.897265] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:15.485 [2024-07-25 00:34:37.897352] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3002:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:08:15.485 passed 00:08:15.485 Test: test_alloc_io_qpair_fail ...[2024-07-25 00:34:37.898506] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:15.485 passed 00:08:15.485 Test: test_nvme_ctrlr_add_remove_process 
...passed 00:08:15.485 Test: test_nvme_ctrlr_set_arbitration_feature ...[2024-07-25 00:34:37.898605] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 506:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:08:15.485 passed 00:08:15.485 Test: test_nvme_ctrlr_set_state ...passed 00:08:15.485 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-07-25 00:34:37.898783] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1546:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 00:08:15.485 [2024-07-25 00:34:37.898853] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:15.485 passed 00:08:15.485 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-07-25 00:34:37.924778] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:15.485 passed 00:08:15.485 Test: test_nvme_ctrlr_ns_mgmt ...[2024-07-25 00:34:37.972686] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:15.485 passed 00:08:15.485 Test: test_nvme_ctrlr_reset ...[2024-07-25 00:34:37.974323] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:15.485 passed 00:08:15.485 Test: test_nvme_ctrlr_aer_callback ...[2024-07-25 00:34:37.974692] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:15.485 passed 00:08:15.485 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-07-25 00:34:37.976155] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:15.485 passed 00:08:15.485 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:08:15.485 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:08:15.485 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-07-25 00:34:37.977978] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:15.485 passed 00:08:15.485 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:08:15.485 Test: test_nvme_ctrlr_ana_resize ...[2024-07-25 00:34:37.979321] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:15.485 passed 00:08:15.485 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:08:15.485 Test: test_nvme_transport_ctrlr_ready ...[2024-07-25 00:34:37.980944] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4152:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:08:15.485 [2024-07-25 00:34:37.981004] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4204:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 53 (error) 00:08:15.485 passed 00:08:15.485 Test: test_nvme_ctrlr_disable ...[2024-07-25 00:34:37.981057] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: 
*ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:15.485 passed 00:08:15.485 00:08:15.485 Run Summary: Type Total Ran Passed Failed Inactive 00:08:15.485 suites 1 1 n/a 0 0 00:08:15.485 tests 44 44 44 0 0 00:08:15.485 asserts 10434 10434 10434 0 n/a 00:08:15.485 00:08:15.485 Elapsed time = 0.419 seconds 00:08:15.485 00:34:38 unittest.unittest_nvme -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:08:15.485 00:08:15.485 00:08:15.485 CUnit - A unit testing framework for C - Version 2.1-3 00:08:15.485 http://cunit.sourceforge.net/ 00:08:15.485 00:08:15.485 00:08:15.485 Suite: nvme_ctrlr_cmd 00:08:15.485 Test: test_get_log_pages ...passed 00:08:15.485 Test: test_set_feature_cmd ...passed 00:08:15.485 Test: test_set_feature_ns_cmd ...passed 00:08:15.485 Test: test_get_feature_cmd ...passed 00:08:15.485 Test: test_get_feature_ns_cmd ...passed 00:08:15.485 Test: test_abort_cmd ...passed 00:08:15.485 Test: test_set_host_id_cmds ...[2024-07-25 00:34:38.037028] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:08:15.485 passed 00:08:15.485 Test: test_io_cmd_raw_no_payload_build ...passed 00:08:15.485 Test: test_io_raw_cmd ...passed 00:08:15.485 Test: test_io_raw_cmd_with_md ...passed 00:08:15.485 Test: test_namespace_attach ...passed 00:08:15.485 Test: test_namespace_detach ...passed 00:08:15.485 Test: test_namespace_create ...passed 00:08:15.485 Test: test_namespace_delete ...passed 00:08:15.485 Test: test_doorbell_buffer_config ...passed 00:08:15.485 Test: test_format_nvme ...passed 00:08:15.485 Test: test_fw_commit ...passed 00:08:15.485 Test: test_fw_image_download ...passed 00:08:15.485 Test: test_sanitize ...passed 00:08:15.485 Test: test_directive ...passed 00:08:15.485 Test: test_nvme_request_add_abort ...passed 00:08:15.485 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:08:15.485 Test: test_nvme_ctrlr_cmd_identify ...passed 00:08:15.485 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:08:15.485 00:08:15.485 Run Summary: Type Total Ran Passed Failed Inactive 00:08:15.485 suites 1 1 n/a 0 0 00:08:15.485 tests 24 24 24 0 0 00:08:15.485 asserts 198 198 198 0 n/a 00:08:15.485 00:08:15.485 Elapsed time = 0.001 seconds 00:08:15.485 00:34:38 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:08:15.485 00:08:15.485 00:08:15.485 CUnit - A unit testing framework for C - Version 2.1-3 00:08:15.485 http://cunit.sourceforge.net/ 00:08:15.485 00:08:15.485 00:08:15.485 Suite: nvme_ctrlr_cmd 00:08:15.485 Test: test_geometry_cmd ...passed 00:08:15.485 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:08:15.485 00:08:15.485 Run Summary: Type Total Ran Passed Failed Inactive 00:08:15.485 suites 1 1 n/a 0 0 00:08:15.485 tests 2 2 2 0 0 00:08:15.485 asserts 7 7 7 0 n/a 00:08:15.485 00:08:15.485 Elapsed time = 0.000 seconds 00:08:15.485 00:34:38 unittest.unittest_nvme -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:08:15.485 00:08:15.485 00:08:15.485 CUnit - A unit testing framework for C - Version 2.1-3 00:08:15.485 http://cunit.sourceforge.net/ 00:08:15.485 00:08:15.485 00:08:15.485 Suite: nvme 00:08:15.485 Test: test_nvme_ns_construct ...passed 00:08:15.485 Test: test_nvme_ns_uuid ...passed 00:08:15.485 Test: test_nvme_ns_csi ...passed 00:08:15.485 Test: 
test_nvme_ns_data ...passed 00:08:15.485 Test: test_nvme_ns_set_identify_data ...passed 00:08:15.485 Test: test_spdk_nvme_ns_get_values ...passed 00:08:15.485 Test: test_spdk_nvme_ns_is_active ...passed 00:08:15.485 Test: spdk_nvme_ns_supports ...passed 00:08:15.485 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:08:15.485 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:08:15.485 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:08:15.485 Test: test_nvme_ns_find_id_desc ...passed 00:08:15.485 00:08:15.485 Run Summary: Type Total Ran Passed Failed Inactive 00:08:15.485 suites 1 1 n/a 0 0 00:08:15.485 tests 12 12 12 0 0 00:08:15.485 asserts 95 95 95 0 n/a 00:08:15.485 00:08:15.485 Elapsed time = 0.001 seconds 00:08:15.744 00:34:38 unittest.unittest_nvme -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:08:15.744 00:08:15.744 00:08:15.744 CUnit - A unit testing framework for C - Version 2.1-3 00:08:15.744 http://cunit.sourceforge.net/ 00:08:15.744 00:08:15.744 00:08:15.744 Suite: nvme_ns_cmd 00:08:15.744 Test: split_test ...passed 00:08:15.744 Test: split_test2 ...passed 00:08:15.744 Test: split_test3 ...passed 00:08:15.744 Test: split_test4 ...passed 00:08:15.744 Test: test_nvme_ns_cmd_flush ...passed 00:08:15.744 Test: test_nvme_ns_cmd_dataset_management ...passed 00:08:15.744 Test: test_nvme_ns_cmd_copy ...passed 00:08:15.744 Test: test_io_flags ...[2024-07-25 00:34:38.158280] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:08:15.744 passed 00:08:15.744 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:08:15.744 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:08:15.744 Test: test_nvme_ns_cmd_reservation_register ...passed 00:08:15.744 Test: test_nvme_ns_cmd_reservation_release ...passed 00:08:15.744 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:08:15.744 Test: test_nvme_ns_cmd_reservation_report ...passed 00:08:15.744 Test: test_cmd_child_request ...passed 00:08:15.744 Test: test_nvme_ns_cmd_readv ...passed 00:08:15.744 Test: test_nvme_ns_cmd_read_with_md ...passed 00:08:15.744 Test: test_nvme_ns_cmd_writev ...[2024-07-25 00:34:38.159616] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 291:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:08:15.744 passed 00:08:15.744 Test: test_nvme_ns_cmd_write_with_md ...passed 00:08:15.744 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:08:15.744 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:08:15.744 Test: test_nvme_ns_cmd_comparev ...passed 00:08:15.744 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:08:15.744 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:08:15.744 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:08:15.744 Test: test_nvme_ns_cmd_setup_request ...passed 00:08:15.744 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:08:15.744 Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-07-25 00:34:38.161566] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:08:15.744 passed 00:08:15.744 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-07-25 00:34:38.161677] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:08:15.744 passed 00:08:15.744 Test: test_nvme_ns_cmd_verify ...passed 00:08:15.744 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:08:15.744 Test: 
test_nvme_ns_cmd_io_mgmt_recv ...passed 00:08:15.744 00:08:15.744 Run Summary: Type Total Ran Passed Failed Inactive 00:08:15.744 suites 1 1 n/a 0 0 00:08:15.744 tests 32 32 32 0 0 00:08:15.744 asserts 550 550 550 0 n/a 00:08:15.744 00:08:15.744 Elapsed time = 0.005 seconds 00:08:15.744 00:34:38 unittest.unittest_nvme -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:08:15.744 00:08:15.744 00:08:15.744 CUnit - A unit testing framework for C - Version 2.1-3 00:08:15.744 http://cunit.sourceforge.net/ 00:08:15.744 00:08:15.744 00:08:15.744 Suite: nvme_ns_cmd 00:08:15.744 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:08:15.744 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:08:15.744 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:08:15.744 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:08:15.744 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:08:15.744 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:08:15.744 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:08:15.744 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:08:15.744 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:08:15.744 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:08:15.744 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:08:15.744 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:08:15.744 00:08:15.744 Run Summary: Type Total Ran Passed Failed Inactive 00:08:15.744 suites 1 1 n/a 0 0 00:08:15.744 tests 12 12 12 0 0 00:08:15.744 asserts 123 123 123 0 n/a 00:08:15.744 00:08:15.744 Elapsed time = 0.001 seconds 00:08:15.744 00:34:38 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:08:15.744 00:08:15.744 00:08:15.744 CUnit - A unit testing framework for C - Version 2.1-3 00:08:15.744 http://cunit.sourceforge.net/ 00:08:15.744 00:08:15.744 00:08:15.744 Suite: nvme_qpair 00:08:15.744 Test: test3 ...passed 00:08:15.744 Test: test_ctrlr_failed ...passed 00:08:15.744 Test: struct_packing ...passed 00:08:15.744 Test: test_nvme_qpair_process_completions ...[2024-07-25 00:34:38.242553] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:15.745 [2024-07-25 00:34:38.242905] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:15.745 [2024-07-25 00:34:38.242973] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:08:15.745 passed 00:08:15.745 Test: test_nvme_completion_is_retry ...[2024-07-25 00:34:38.243077] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:08:15.745 passed 00:08:15.745 Test: test_get_status_string ...passed 00:08:15.745 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:08:15.745 Test: test_nvme_qpair_submit_request ...passed 00:08:15.745 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:08:15.745 Test: test_nvme_qpair_manual_complete_request ...passed 00:08:15.745 Test: test_nvme_qpair_init_deinit ...passed 00:08:15.745 Test: test_nvme_get_sgl_print_info ...[2024-07-25 00:34:38.243546] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:15.745 passed 00:08:15.745 00:08:15.745 Run Summary: Type Total Ran Passed Failed Inactive 00:08:15.745 suites 1 1 n/a 0 0 00:08:15.745 tests 12 12 12 0 0 00:08:15.745 asserts 154 154 154 0 n/a 00:08:15.745 00:08:15.745 Elapsed time = 0.001 seconds 00:08:15.745 00:34:38 unittest.unittest_nvme -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:08:15.745 00:08:15.745 00:08:15.745 CUnit - A unit testing framework for C - Version 2.1-3 00:08:15.745 http://cunit.sourceforge.net/ 00:08:15.745 00:08:15.745 00:08:15.745 Suite: nvme_pcie 00:08:15.745 Test: test_prp_list_append ...[2024-07-25 00:34:38.285182] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1206:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:08:15.745 [2024-07-25 00:34:38.285574] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:08:15.745 [2024-07-25 00:34:38.285635] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1225:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:08:15.745 [2024-07-25 00:34:38.285915] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1219:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:08:15.745 passed 00:08:15.745 Test: test_nvme_pcie_hotplug_monitor ...[2024-07-25 00:34:38.286027] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1219:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:08:15.745 passed 00:08:15.745 Test: test_shadow_doorbell_update ...passed 00:08:15.745 Test: test_build_contig_hw_sgl_request ...passed 00:08:15.745 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:08:15.745 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:08:15.745 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:08:15.745 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:08:15.745 Test: test_nvme_pcie_ctrlr_regs_get_set ...[2024-07-25 00:34:38.286305] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1206:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:08:15.745 passed 00:08:15.745 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:08:15.745 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:08:15.745 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...[2024-07-25 00:34:38.286412] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:08:15.745 passed 00:08:15.745 Test: test_nvme_pcie_ctrlr_config_pmr ...passed 00:08:15.745 Test: test_nvme_pcie_ctrlr_map_io_pmr ...[2024-07-25 00:34:38.286503] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:08:15.745 [2024-07-25 00:34:38.286556] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:08:15.745 [2024-07-25 00:34:38.286617] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:08:15.745 passed 00:08:15.745 00:08:15.745 Run Summary: Type Total Ran Passed Failed Inactive 00:08:15.745 suites 1 1 n/a 0 0 00:08:15.745 tests 14 14 14 0 0 00:08:15.745 asserts 235 235 235 0 n/a 00:08:15.745 00:08:15.745 Elapsed time = 0.002 seconds 00:08:15.745 00:34:38 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:08:15.745 00:08:15.745 00:08:15.745 CUnit - A unit testing framework for C - Version 2.1-3 00:08:15.745 http://cunit.sourceforge.net/ 00:08:15.745 00:08:15.745 00:08:15.745 Suite: nvme_ns_cmd 00:08:15.745 Test: nvme_poll_group_create_test ...passed 00:08:15.745 Test: nvme_poll_group_add_remove_test ...passed 00:08:15.745 Test: nvme_poll_group_process_completions ...passed 00:08:15.745 Test: nvme_poll_group_destroy_test ...passed 00:08:15.745 Test: nvme_poll_group_get_free_stats ...passed 00:08:15.745 00:08:15.745 Run Summary: Type Total Ran Passed Failed Inactive 00:08:15.745 suites 1 1 n/a 0 0 00:08:15.745 tests 5 5 5 0 0 00:08:15.745 asserts 75 75 75 0 n/a 00:08:15.745 00:08:15.745 Elapsed time = 0.000 seconds 00:08:15.745 00:34:38 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:08:15.745 00:08:15.745 00:08:15.745 CUnit - A unit testing framework for C - Version 2.1-3 00:08:15.745 http://cunit.sourceforge.net/ 00:08:15.745 00:08:15.745 00:08:15.745 Suite: nvme_quirks 00:08:15.745 Test: test_nvme_quirks_striping ...passed 00:08:15.745 00:08:15.745 Run Summary: Type Total Ran Passed Failed Inactive 00:08:15.745 suites 1 1 n/a 0 0 00:08:15.745 tests 1 1 1 0 0 00:08:15.745 asserts 5 5 5 0 n/a 00:08:15.745 00:08:15.745 Elapsed time = 0.000 seconds 00:08:15.745 00:34:38 unittest.unittest_nvme -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:08:16.004 00:08:16.004 00:08:16.004 CUnit - A unit testing framework for C - Version 2.1-3 00:08:16.004 http://cunit.sourceforge.net/ 00:08:16.004 00:08:16.004 00:08:16.004 Suite: nvme_tcp 00:08:16.004 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:08:16.004 Test: test_nvme_tcp_build_iovs ...passed 00:08:16.004 Test: test_nvme_tcp_build_sgl_request ...[2024-07-25 00:34:38.411086] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 848:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7fff537e53f0, and the iovcnt=16, remaining_size=28672 00:08:16.004 passed 00:08:16.004 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:08:16.004 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:08:16.004 Test: test_nvme_tcp_req_complete_safe ...passed 00:08:16.004 Test: test_nvme_tcp_req_get ...passed 00:08:16.004 Test: test_nvme_tcp_req_init ...passed 00:08:16.004 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:08:16.004 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:08:16.004 Test: 
test_nvme_tcp_qpair_set_recv_state ...[2024-07-25 00:34:38.412328] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff537e7130 is same with the state(6) to be set 00:08:16.004 passed 00:08:16.004 Test: test_nvme_tcp_alloc_reqs ...passed 00:08:16.004 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-07-25 00:34:38.412913] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff537e62e0 is same with the state(5) to be set 00:08:16.004 passed 00:08:16.004 Test: test_nvme_tcp_pdu_ch_handle ...[2024-07-25 00:34:38.413120] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1190:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7fff537e6e70 00:08:16.004 [2024-07-25 00:34:38.413295] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1249:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:08:16.004 [2024-07-25 00:34:38.413555] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff537e67a0 is same with the state(5) to be set 00:08:16.004 [2024-07-25 00:34:38.413763] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1200:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:08:16.004 [2024-07-25 00:34:38.413994] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff537e67a0 is same with the state(5) to be set 00:08:16.004 [2024-07-25 00:34:38.414333] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:08:16.004 [2024-07-25 00:34:38.414548] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff537e67a0 is same with the state(5) to be set 00:08:16.004 [2024-07-25 00:34:38.414730] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff537e67a0 is same with the state(5) to be set 00:08:16.004 [2024-07-25 00:34:38.414902] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff537e67a0 is same with the state(5) to be set 00:08:16.004 [2024-07-25 00:34:38.415098] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff537e67a0 is same with the state(5) to be set 00:08:16.004 [2024-07-25 00:34:38.415269] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff537e67a0 is same with the state(5) to be set 00:08:16.004 [2024-07-25 00:34:38.415449] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff537e67a0 is same with the state(5) to be set 00:08:16.004 passed 00:08:16.004 Test: test_nvme_tcp_qpair_connect_sock ...[2024-07-25 00:34:38.415769] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:08:16.004 [2024-07-25 00:34:38.415948] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:08:16.004 [2024-07-25 00:34:38.416406] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr 
nvme_parse_addr() failed 00:08:16.004 passed 00:08:16.004 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:08:16.004 Test: test_nvme_tcp_c2h_payload_handle ...[2024-07-25 00:34:38.416718] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1357:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7fff537e69b0): PDU Sequence Error 00:08:16.004 passed 00:08:16.004 Test: test_nvme_tcp_icresp_handle ...[2024-07-25 00:34:38.416925] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1576:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:08:16.004 [2024-07-25 00:34:38.417096] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1583:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:08:16.004 [2024-07-25 00:34:38.417274] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff537e62f0 is same with the state(5) to be set 00:08:16.004 [2024-07-25 00:34:38.417454] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1592:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:08:16.004 [2024-07-25 00:34:38.417641] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff537e62f0 is same with the state(5) to be set 00:08:16.004 [2024-07-25 00:34:38.417837] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff537e62f0 is same with the state(0) to be set 00:08:16.004 passed 00:08:16.004 Test: test_nvme_tcp_pdu_payload_handle ...[2024-07-25 00:34:38.418056] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1357:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7fff537e6e70): PDU Sequence Error 00:08:16.004 passed 00:08:16.004 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-07-25 00:34:38.418320] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1653:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7fff537e55b0 00:08:16.004 passed 00:08:16.004 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:08:16.004 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-07-25 00:34:38.418654] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 358:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7fff537e4c30, errno=0, rc=0 00:08:16.004 [2024-07-25 00:34:38.418851] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff537e4c30 is same with the state(5) to be set 00:08:16.004 [2024-07-25 00:34:38.419060] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff537e4c30 is same with the state(5) to be set 00:08:16.004 [2024-07-25 00:34:38.419237] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fff537e4c30 (0): Success 00:08:16.004 [2024-07-25 00:34:38.419416] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fff537e4c30 (0): Success 00:08:16.004 passed 00:08:16.004 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-07-25 00:34:38.568812] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2516:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
00:08:16.004 [2024-07-25 00:34:38.569218] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2516:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:08:16.004 passed 00:08:16.004 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:08:16.004 Test: test_nvme_tcp_poll_group_get_stats ...[2024-07-25 00:34:38.569683] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:16.005 [2024-07-25 00:34:38.569863] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:16.005 passed 00:08:16.005 Test: test_nvme_tcp_ctrlr_construct ...[2024-07-25 00:34:38.570275] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2516:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:08:16.005 [2024-07-25 00:34:38.570477] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:16.005 [2024-07-25 00:34:38.570764] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:08:16.005 [2024-07-25 00:34:38.570975] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:16.005 [2024-07-25 00:34:38.571250] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000007d80 with addr=192.168.1.78, port=23 00:08:16.005 [2024-07-25 00:34:38.571478] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:16.005 passed 00:08:16.005 Test: test_nvme_tcp_qpair_submit_request ...[2024-07-25 00:34:38.571775] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 848:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x614000000c40, and the iovcnt=1, remaining_size=1024 00:08:16.005 [2024-07-25 00:34:38.571944] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1035:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:08:16.005 passed 00:08:16.005 00:08:16.005 Run Summary: Type Total Ran Passed Failed Inactive 00:08:16.005 suites 1 1 n/a 0 0 00:08:16.005 tests 27 27 27 0 0 00:08:16.005 asserts 624 624 624 0 n/a 00:08:16.005 00:08:16.005 Elapsed time = 0.156 seconds 00:08:16.005 00:34:38 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:08:16.005 00:08:16.005 00:08:16.005 CUnit - A unit testing framework for C - Version 2.1-3 00:08:16.005 http://cunit.sourceforge.net/ 00:08:16.005 00:08:16.005 00:08:16.005 Suite: nvme_transport 00:08:16.005 Test: test_nvme_get_transport ...passed 00:08:16.005 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:08:16.005 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:08:16.005 Test: test_nvme_transport_poll_group_add_remove ...passed 00:08:16.005 Test: test_ctrlr_get_memory_domains ...passed 00:08:16.005 00:08:16.005 Run Summary: Type Total Ran Passed Failed Inactive 00:08:16.005 suites 1 1 n/a 0 0 00:08:16.005 tests 5 5 5 0 0 00:08:16.005 asserts 28 28 28 0 n/a 00:08:16.005 00:08:16.005 Elapsed time = 0.000 seconds 00:08:16.263 00:34:38 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:08:16.263 00:08:16.263 
00:08:16.263 CUnit - A unit testing framework for C - Version 2.1-3 00:08:16.263 http://cunit.sourceforge.net/ 00:08:16.263 00:08:16.263 00:08:16.263 Suite: nvme_io_msg 00:08:16.263 Test: test_nvme_io_msg_send ...passed 00:08:16.263 Test: test_nvme_io_msg_process ...passed 00:08:16.263 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:08:16.263 00:08:16.263 Run Summary: Type Total Ran Passed Failed Inactive 00:08:16.263 suites 1 1 n/a 0 0 00:08:16.263 tests 3 3 3 0 0 00:08:16.263 asserts 56 56 56 0 n/a 00:08:16.263 00:08:16.263 Elapsed time = 0.000 seconds 00:08:16.263 00:34:38 unittest.unittest_nvme -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:08:16.263 00:08:16.263 00:08:16.263 CUnit - A unit testing framework for C - Version 2.1-3 00:08:16.263 http://cunit.sourceforge.net/ 00:08:16.263 00:08:16.263 00:08:16.263 Suite: nvme_pcie_common 00:08:16.263 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-07-25 00:34:38.718062] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:08:16.263 passed 00:08:16.263 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:08:16.263 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:08:16.263 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-07-25 00:34:38.719155] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 505:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:08:16.263 [2024-07-25 00:34:38.719334] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 458:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:08:16.263 passed 00:08:16.263 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-07-25 00:34:38.719410] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 552:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:08:16.263 passed 00:08:16.263 Test: test_nvme_pcie_poll_group_get_stats ...[2024-07-25 00:34:38.720017] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1804:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:16.263 passed[2024-07-25 00:34:38.720095] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1804:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:16.263 00:08:16.263 00:08:16.264 Run Summary: Type Total Ran Passed Failed Inactive 00:08:16.264 suites 1 1 n/a 0 0 00:08:16.264 tests 6 6 6 0 0 00:08:16.264 asserts 148 148 148 0 n/a 00:08:16.264 00:08:16.264 Elapsed time = 0.002 seconds 00:08:16.264 00:34:38 unittest.unittest_nvme -- unit/unittest.sh@103 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:08:16.264 00:08:16.264 00:08:16.264 CUnit - A unit testing framework for C - Version 2.1-3 00:08:16.264 http://cunit.sourceforge.net/ 00:08:16.264 00:08:16.264 00:08:16.264 Suite: nvme_fabric 00:08:16.264 Test: test_nvme_fabric_prop_set_cmd ...passed 00:08:16.264 Test: test_nvme_fabric_prop_get_cmd ...passed 00:08:16.264 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:08:16.264 Test: test_nvme_fabric_discover_probe ...passed 00:08:16.264 Test: test_nvme_fabric_qpair_connect ...[2024-07-25 00:34:38.761524] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:08:16.264 passed 
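For readers following the nvme and nvme_fabric suites above (transport-ID parse errors, the connect-poll failure for subnqn:nqn.2016-06.io.spdk:subsystem1), the host-side flow they exercise looks roughly like the sketch below. This is a hedged illustration, not part of the test log: it assumes the public spdk/env.h and spdk/nvme.h host API, and the address, service ID, and application name are placeholder values, not taken from this run.

/*
 * Minimal sketch: build a fabrics transport ID using the same key:value
 * grammar that test_trid_parse_and_compare checks, then attempt a connect.
 * Placeholder target (127.0.0.1:4420); requires the usual SPDK environment
 * (hugepages, privileges) to actually run.
 */
#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "fabric_connect_example";   /* placeholder app name */
	if (spdk_env_init(&env_opts) != 0) {
		fprintf(stderr, "spdk_env_init() failed\n");
		return 1;
	}

	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:TCP adrfam:IPv4 traddr:127.0.0.1 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:subsystem1") != 0) {
		/* A malformed string fails here, as the parse_next_key errors above show. */
		fprintf(stderr, "failed to parse transport ID\n");
		return 1;
	}

	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		/* Corresponds to the fabric connect failure logged above. */
		fprintf(stderr, "spdk_nvme_connect() failed\n");
		return 1;
	}

	spdk_nvme_detach(ctrlr);
	return 0;
}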
00:08:16.264 00:08:16.264 Run Summary: Type Total Ran Passed Failed Inactive 00:08:16.264 suites 1 1 n/a 0 0 00:08:16.264 tests 5 5 5 0 0 00:08:16.264 asserts 60 60 60 0 n/a 00:08:16.264 00:08:16.264 Elapsed time = 0.001 seconds 00:08:16.264 00:34:38 unittest.unittest_nvme -- unit/unittest.sh@104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:08:16.264 00:08:16.264 00:08:16.264 CUnit - A unit testing framework for C - Version 2.1-3 00:08:16.264 http://cunit.sourceforge.net/ 00:08:16.264 00:08:16.264 00:08:16.264 Suite: nvme_opal 00:08:16.264 Test: test_opal_nvme_security_recv_send_done ...passed 00:08:16.264 Test: test_opal_add_short_atom_header ...passed 00:08:16.264 00:08:16.264 Run Summary: Type Total Ran Passed Failed Inactive 00:08:16.264 suites 1 1 n/a 0 0 00:08:16.264 tests 2 2 2 0 0 00:08:16.264 asserts 22 22 22 0 n/a 00:08:16.264 00:08:16.264 Elapsed time = 0.000 seconds 00:08:16.264 [2024-07-25 00:34:38.805277] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:08:16.264 00:08:16.264 real 0m1.485s 00:08:16.264 user 0m0.745s 00:08:16.264 sys 0m0.591s 00:08:16.264 00:34:38 unittest.unittest_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.264 00:34:38 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:16.264 ************************************ 00:08:16.264 END TEST unittest_nvme 00:08:16.264 ************************************ 00:08:16.264 00:34:38 unittest -- unit/unittest.sh@249 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:08:16.264 00:34:38 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:16.264 00:34:38 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.264 00:34:38 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:16.264 ************************************ 00:08:16.264 START TEST unittest_log 00:08:16.264 ************************************ 00:08:16.264 00:34:38 unittest.unittest_log -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:08:16.264 00:08:16.264 00:08:16.264 CUnit - A unit testing framework for C - Version 2.1-3 00:08:16.264 http://cunit.sourceforge.net/ 00:08:16.264 00:08:16.264 00:08:16.264 Suite: log 00:08:16.264 Test: log_test ...[2024-07-25 00:34:38.913981] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:08:16.264 [2024-07-25 00:34:38.914308] log_ut.c: 57:log_test: *DEBUG*: log test 00:08:16.264 log dump test: 00:08:16.264 passed 00:08:16.264 Test: deprecation ...00000000 6c 6f 67 20 64 75 6d 70 log dump 00:08:16.264 spdk dump test: 00:08:16.264 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:08:16.264 spdk dump test: 00:08:16.264 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:08:16.264 00000010 65 20 63 68 61 72 73 e chars 00:08:17.639 passed 00:08:17.639 00:08:17.639 Run Summary: Type Total Ran Passed Failed Inactive 00:08:17.639 suites 1 1 n/a 0 0 00:08:17.639 tests 2 2 2 0 0 00:08:17.639 asserts 73 73 73 0 n/a 00:08:17.639 00:08:17.639 Elapsed time = 0.001 seconds 00:08:17.639 00:08:17.639 real 0m1.042s 00:08:17.639 user 0m0.019s 00:08:17.639 sys 0m0.023s 00:08:17.639 00:34:39 unittest.unittest_log -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.639 00:34:39 unittest.unittest_log -- common/autotest_common.sh@10 -- # set +x 00:08:17.639 ************************************ 00:08:17.639 END TEST 
unittest_log 00:08:17.639 ************************************ 00:08:17.639 00:34:39 unittest -- unit/unittest.sh@250 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:08:17.639 00:34:39 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:17.639 00:34:39 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.639 00:34:39 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:17.639 ************************************ 00:08:17.639 START TEST unittest_lvol 00:08:17.639 ************************************ 00:08:17.639 00:34:40 unittest.unittest_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:08:17.639 00:08:17.639 00:08:17.639 CUnit - A unit testing framework for C - Version 2.1-3 00:08:17.639 http://cunit.sourceforge.net/ 00:08:17.639 00:08:17.639 00:08:17.639 Suite: lvol 00:08:17.639 Test: lvs_init_unload_success ...[2024-07-25 00:34:40.032174] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:08:17.639 passed 00:08:17.639 Test: lvs_init_destroy_success ...[2024-07-25 00:34:40.033310] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:08:17.639 passed 00:08:17.639 Test: lvs_init_opts_success ...passed 00:08:17.639 Test: lvs_unload_lvs_is_null_fail ...[2024-07-25 00:34:40.033756] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:08:17.639 passed 00:08:17.639 Test: lvs_names ...[2024-07-25 00:34:40.033947] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:08:17.639 [2024-07-25 00:34:40.034120] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 
00:08:17.639 [2024-07-25 00:34:40.034575] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:08:17.639 passed 00:08:17.639 Test: lvol_create_destroy_success ...passed 00:08:17.639 Test: lvol_create_fail ...[2024-07-25 00:34:40.035468] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:08:17.639 [2024-07-25 00:34:40.035737] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:08:17.639 passed 00:08:17.639 Test: lvol_destroy_fail ...[2024-07-25 00:34:40.036269] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:08:17.639 passed 00:08:17.639 Test: lvol_close ...[2024-07-25 00:34:40.036665] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:08:17.639 [2024-07-25 00:34:40.036832] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:08:17.639 passed 00:08:17.639 Test: lvol_resize ...passed 00:08:17.639 Test: lvol_set_read_only ...passed 00:08:17.639 Test: test_lvs_load ...[2024-07-25 00:34:40.038042] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:08:17.639 [2024-07-25 00:34:40.038199] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:08:17.639 passed 00:08:17.639 Test: lvols_load ...[2024-07-25 00:34:40.038868] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:08:17.639 [2024-07-25 00:34:40.039148] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:08:17.639 passed 00:08:17.639 Test: lvol_open ...passed 00:08:17.639 Test: lvol_snapshot ...passed 00:08:17.639 Test: lvol_snapshot_fail ...[2024-07-25 00:34:40.040214] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:08:17.639 passed 00:08:17.639 Test: lvol_clone ...passed 00:08:17.639 Test: lvol_clone_fail ...[2024-07-25 00:34:40.041132] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:08:17.639 passed 00:08:17.639 Test: lvol_iter_clones ...passed 00:08:17.639 Test: lvol_refcnt ...[2024-07-25 00:34:40.041961] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 27041225-4bcb-42ee-b6e1-b003c08cc929 because it is still open 00:08:17.639 passed 00:08:17.639 Test: lvol_names ...[2024-07-25 00:34:40.042358] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:08:17.639 [2024-07-25 00:34:40.042589] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:17.640 [2024-07-25 00:34:40.043016] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:08:17.640 passed 00:08:17.640 Test: lvol_create_thin_provisioned ...passed 00:08:17.640 Test: lvol_rename ...[2024-07-25 00:34:40.043760] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:17.640 [2024-07-25 00:34:40.043979] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:08:17.640 passed 00:08:17.640 Test: lvs_rename ...[2024-07-25 00:34:40.044425] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:08:17.640 passed 00:08:17.640 Test: lvol_inflate ...[2024-07-25 00:34:40.044854] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:08:17.640 passed 00:08:17.640 Test: lvol_decouple_parent ...[2024-07-25 00:34:40.045268] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:08:17.640 passed 00:08:17.640 Test: lvol_get_xattr ...passed 00:08:17.640 Test: lvol_esnap_reload ...passed 00:08:17.640 Test: lvol_esnap_create_bad_args ...[2024-07-25 00:34:40.045945] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:08:17.640 [2024-07-25 00:34:40.046104] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:08:17.640 [2024-07-25 00:34:40.046327] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:08:17.640 [2024-07-25 00:34:40.046583] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:17.640 [2024-07-25 00:34:40.046890] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:08:17.640 passed 00:08:17.640 Test: lvol_esnap_create_delete ...passed 00:08:17.640 Test: lvol_esnap_load_esnaps ...[2024-07-25 00:34:40.047389] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:08:17.640 passed 00:08:17.640 Test: lvol_esnap_missing ...[2024-07-25 00:34:40.047674] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:08:17.640 [2024-07-25 00:34:40.047859] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:08:17.640 passed 00:08:17.640 Test: lvol_esnap_hotplug ... 
00:08:17.640 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:08:17.640 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:08:17.640 [2024-07-25 00:34:40.048725] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 97a89e41-2789-46ec-9405-da880fb604f3: failed to create esnap bs_dev: error -12 00:08:17.640 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:08:17.640 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:08:17.640 [2024-07-25 00:34:40.049091] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol eb2a3005-2ef6-48c1-a285-7502f1d413d1: failed to create esnap bs_dev: error -12 00:08:17.640 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:08:17.640 [2024-07-25 00:34:40.049350] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 84202d3c-c18b-4934-a653-22a3662dc6f7: failed to create esnap bs_dev: error -12 00:08:17.640 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:08:17.640 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:08:17.640 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:08:17.640 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:08:17.640 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:08:17.640 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:08:17.640 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:08:17.640 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:08:17.640 passed 00:08:17.640 Test: lvol_get_by ...passed 00:08:17.640 Test: lvol_shallow_copy ...[2024-07-25 00:34:40.050782] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2274:spdk_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:08:17.640 [2024-07-25 00:34:40.050971] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2281:spdk_lvol_shallow_copy: *ERROR*: lvol cd167b6d-5b0f-4994-aae7-16559b94faca shallow copy, ext_dev must not be NULL 00:08:17.640 passed 00:08:17.640 Test: lvol_set_parent ...[2024-07-25 00:34:40.051455] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2338:spdk_lvol_set_parent: *ERROR*: lvol must not be NULL 00:08:17.640 [2024-07-25 00:34:40.051617] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2344:spdk_lvol_set_parent: *ERROR*: snapshot must not be NULL 00:08:17.640 passed 00:08:17.640 Test: lvol_set_external_parent ...[2024-07-25 00:34:40.052029] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2393:spdk_lvol_set_external_parent: *ERROR*: lvol must not be NULL 00:08:17.640 [2024-07-25 00:34:40.052180] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2399:spdk_lvol_set_external_parent: *ERROR*: snapshot must not be NULL 00:08:17.640 [2024-07-25 00:34:40.052374] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2406:spdk_lvol_set_external_parent: *ERROR*: lvol lvol and esnap have the same UUID 00:08:17.640 passed 00:08:17.640 00:08:17.640 Run Summary: Type Total Ran Passed Failed Inactive 00:08:17.640 suites 1 1 n/a 0 0 00:08:17.640 tests 37 37 37 0 0 00:08:17.640 asserts 1505 1505 1505 0 n/a 00:08:17.640 00:08:17.640 Elapsed time = 0.016 seconds 00:08:17.640 00:08:17.640 real 0m0.067s 00:08:17.640 user 0m0.034s 00:08:17.640 sys 0m0.029s 
00:08:17.640 00:34:40 unittest.unittest_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.640 00:34:40 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:17.640 ************************************ 00:08:17.640 END TEST unittest_lvol 00:08:17.640 ************************************ 00:08:17.640 00:34:40 unittest -- unit/unittest.sh@251 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:17.640 00:34:40 unittest -- unit/unittest.sh@252 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:08:17.640 00:34:40 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:17.640 00:34:40 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.640 00:34:40 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:17.640 ************************************ 00:08:17.640 START TEST unittest_nvme_rdma 00:08:17.640 ************************************ 00:08:17.640 00:34:40 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:08:17.640 00:08:17.640 00:08:17.640 CUnit - A unit testing framework for C - Version 2.1-3 00:08:17.640 http://cunit.sourceforge.net/ 00:08:17.640 00:08:17.640 00:08:17.640 Suite: nvme_rdma 00:08:17.640 Test: test_nvme_rdma_build_sgl_request ...[2024-07-25 00:34:40.170370] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:08:17.640 [2024-07-25 00:34:40.170770] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1552:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:08:17.640 passed 00:08:17.640 Test: test_nvme_rdma_build_sgl_inline_request ...[2024-07-25 00:34:40.170949] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1608:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:08:17.640 passed 00:08:17.640 Test: test_nvme_rdma_build_contig_request ...[2024-07-25 00:34:40.171076] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1489:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:08:17.640 passed 00:08:17.640 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:08:17.640 Test: test_nvme_rdma_create_reqs ...[2024-07-25 00:34:40.171276] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 931:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:08:17.640 passed 00:08:17.640 Test: test_nvme_rdma_create_rsps ...[2024-07-25 00:34:40.171848] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 849:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:08:17.640 passed 00:08:17.640 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-07-25 00:34:40.172109] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1746:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:08:17.640 passed 00:08:17.640 Test: test_nvme_rdma_poller_create ...[2024-07-25 00:34:40.172211] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1746:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:08:17.640 passed 00:08:17.640 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:08:17.640 Test: test_nvme_rdma_ctrlr_construct ...[2024-07-25 00:34:40.172500] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 450:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:08:17.640 passed 00:08:17.640 Test: test_nvme_rdma_req_put_and_get ...passed 00:08:17.640 Test: test_nvme_rdma_req_init ...passed 00:08:17.640 Test: test_nvme_rdma_validate_cm_event ...[2024-07-25 00:34:40.172959] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:08:17.640 [2024-07-25 00:34:40.173038] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:08:17.640 passed 00:08:17.640 Test: test_nvme_rdma_qpair_init ...passed 00:08:17.640 Test: test_nvme_rdma_qpair_submit_request ...passed 00:08:17.640 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:08:17.640 Test: test_rdma_get_memory_translation ...[2024-07-25 00:34:40.173266] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1368:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:08:17.640 passed 00:08:17.640 Test: test_get_rdma_qpair_from_wc ...passed 00:08:17.640 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:08:17.640 Test: test_nvme_rdma_poll_group_get_stats ...[2024-07-25 00:34:40.173357] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:08:17.640 [2024-07-25 00:34:40.173546] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:17.640 [2024-07-25 00:34:40.173650] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:17.640 passed 00:08:17.640 Test: test_nvme_rdma_qpair_set_poller ...[2024-07-25 00:34:40.173948] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:08:17.640 [2024-07-25 00:34:40.174035] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:08:17.641 [2024-07-25 00:34:40.174134] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7fff1b65e530 on poll group 0x60c000000040 00:08:17.641 [2024-07-25 00:34:40.174222] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:08:17.641 [2024-07-25 00:34:40.174372] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:08:17.641 [2024-07-25 00:34:40.174454] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7fff1b65e530 on poll group 0x60c000000040 00:08:17.641 [2024-07-25 00:34:40.174591] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 625:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:08:17.641 passed 00:08:17.641 00:08:17.641 Run Summary: Type Total Ran Passed Failed Inactive 00:08:17.641 suites 1 1 n/a 0 0 00:08:17.641 tests 21 21 21 0 0 00:08:17.641 asserts 397 397 397 0 n/a 00:08:17.641 00:08:17.641 Elapsed time = 0.005 seconds 00:08:17.641 00:08:17.641 real 0m0.051s 00:08:17.641 user 0m0.020s 00:08:17.641 sys 0m0.032s 00:08:17.641 00:34:40 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.641 00:34:40 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:17.641 ************************************ 00:08:17.641 END TEST unittest_nvme_rdma 00:08:17.641 ************************************ 00:08:17.641 00:34:40 unittest -- unit/unittest.sh@253 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:08:17.641 00:34:40 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:17.641 00:34:40 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.641 00:34:40 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:17.641 ************************************ 00:08:17.641 START TEST unittest_nvmf_transport 00:08:17.641 ************************************ 00:08:17.641 00:34:40 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:08:17.641 00:08:17.641 00:08:17.641 CUnit - A unit testing framework for C - Version 2.1-3 00:08:17.641 http://cunit.sourceforge.net/ 00:08:17.641 00:08:17.641 00:08:17.641 Suite: nvmf 00:08:17.641 Test: test_spdk_nvmf_transport_create ...[2024-07-25 00:34:40.282595] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:08:17.641 [2024-07-25 00:34:40.282984] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:08:17.641 [2024-07-25 00:34:40.283067] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 275:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:08:17.641 passed 00:08:17.641 Test: test_nvmf_transport_poll_group_create ...[2024-07-25 00:34:40.283222] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 258:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:08:17.641 passed 00:08:17.641 Test: test_spdk_nvmf_transport_opts_init ...[2024-07-25 00:34:40.283526] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 799:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:08:17.641 [2024-07-25 00:34:40.283630] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 804:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:08:17.641 passed 00:08:17.641 Test: test_spdk_nvmf_transport_listen_ext ...[2024-07-25 00:34:40.283671] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 809:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:08:17.641 passed 00:08:17.641 00:08:17.641 Run Summary: Type Total Ran Passed Failed Inactive 00:08:17.641 suites 1 1 n/a 0 0 00:08:17.641 tests 4 4 4 0 0 00:08:17.641 asserts 49 49 49 0 n/a 00:08:17.641 00:08:17.641 Elapsed time = 0.001 seconds 00:08:17.899 00:08:17.899 real 0m0.047s 00:08:17.899 user 0m0.025s 00:08:17.899 sys 0m0.023s 00:08:17.899 00:34:40 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.899 00:34:40 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x 00:08:17.899 ************************************ 00:08:17.899 END TEST unittest_nvmf_transport 00:08:17.899 ************************************ 00:08:17.899 00:34:40 unittest -- unit/unittest.sh@254 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:08:17.899 00:34:40 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:17.899 00:34:40 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.899 00:34:40 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:17.899 ************************************ 00:08:17.899 START TEST unittest_rdma 00:08:17.899 ************************************ 00:08:17.899 00:34:40 unittest.unittest_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:08:17.899 00:08:17.899 00:08:17.899 CUnit - A unit testing framework for C - Version 2.1-3 00:08:17.899 http://cunit.sourceforge.net/ 00:08:17.899 00:08:17.899 00:08:17.899 Suite: rdma_common 00:08:17.899 Test: test_spdk_rdma_pd ...[2024-07-25 00:34:40.392460] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:08:17.899 passed 00:08:17.899 00:08:17.899 [2024-07-25 00:34:40.392864] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:08:17.899 Run Summary: Type Total Ran Passed Failed Inactive 00:08:17.899 suites 1 1 n/a 0 0 00:08:17.899 tests 1 1 1 0 0 00:08:17.899 asserts 31 31 31 0 n/a 00:08:17.899 00:08:17.899 Elapsed time = 0.001 seconds 00:08:17.899 00:08:17.899 real 0m0.037s 00:08:17.899 user 0m0.013s 00:08:17.899 sys 0m0.025s 00:08:17.899 00:34:40 unittest.unittest_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.899 00:34:40 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:17.899 ************************************ 00:08:17.899 END TEST unittest_rdma 00:08:17.899 ************************************ 00:08:17.899 00:34:40 unittest -- unit/unittest.sh@257 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:17.899 00:34:40 unittest -- unit/unittest.sh@258 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:08:17.899 00:34:40 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:17.899 00:34:40 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.899 00:34:40 unittest -- common/autotest_common.sh@10 -- # set +x 
00:08:17.899 ************************************ 00:08:17.899 START TEST unittest_nvme_cuse 00:08:17.899 ************************************ 00:08:17.899 00:34:40 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:08:17.899 00:08:17.899 00:08:17.899 CUnit - A unit testing framework for C - Version 2.1-3 00:08:17.899 http://cunit.sourceforge.net/ 00:08:17.899 00:08:17.899 00:08:17.899 Suite: nvme_cuse 00:08:17.899 Test: test_cuse_nvme_submit_io_read_write ...passed 00:08:17.899 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:08:17.899 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:08:17.899 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:08:17.899 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:08:17.899 Test: test_cuse_nvme_submit_io ...[2024-07-25 00:34:40.493715] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 667:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:08:17.899 passed 00:08:17.899 Test: test_cuse_nvme_reset ...passed 00:08:17.899 Test: test_nvme_cuse_stop ...[2024-07-25 00:34:40.493988] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 352:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:08:18.466 passed 00:08:18.466 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:08:18.466 00:08:18.466 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.466 suites 1 1 n/a 0 0 00:08:18.466 tests 9 9 9 0 0 00:08:18.466 asserts 118 118 118 0 n/a 00:08:18.466 00:08:18.466 Elapsed time = 0.504 seconds 00:08:18.466 00:08:18.466 real 0m0.545s 00:08:18.466 user 0m0.276s 00:08:18.466 sys 0m0.271s 00:08:18.466 00:34:41 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.466 00:34:41 unittest.unittest_nvme_cuse -- common/autotest_common.sh@10 -- # set +x 00:08:18.466 ************************************ 00:08:18.466 END TEST unittest_nvme_cuse 00:08:18.466 ************************************ 00:08:18.466 00:34:41 unittest -- unit/unittest.sh@261 -- # run_test unittest_nvmf unittest_nvmf 00:08:18.466 00:34:41 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:18.466 00:34:41 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.466 00:34:41 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:18.466 ************************************ 00:08:18.466 START TEST unittest_nvmf 00:08:18.466 ************************************ 00:08:18.466 00:34:41 unittest.unittest_nvmf -- common/autotest_common.sh@1123 -- # unittest_nvmf 00:08:18.466 00:34:41 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:08:18.466 00:08:18.466 00:08:18.466 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.466 http://cunit.sourceforge.net/ 00:08:18.466 00:08:18.466 00:08:18.466 Suite: nvmf 00:08:18.466 Test: test_get_log_page ...[2024-07-25 00:34:41.116054] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:08:18.466 passed 00:08:18.466 Test: test_process_fabrics_cmd ...[2024-07-25 00:34:41.116444] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4741:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on qid 0 before CONNECT 00:08:18.466 passed 00:08:18.466 Test: test_connect ...[2024-07-25 00:34:41.117104] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1012:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 
00:08:18.466 [2024-07-25 00:34:41.117220] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 875:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:08:18.466 [2024-07-25 00:34:41.117266] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1051:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:08:18.466 [2024-07-25 00:34:41.117325] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:08:18.466 [2024-07-25 00:34:41.117419] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 886:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:08:18.466 [2024-07-25 00:34:41.117486] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 893:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:08:18.466 [2024-07-25 00:34:41.117528] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 899:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:08:18.466 [2024-07-25 00:34:41.117585] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 926:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:08:18.466 [2024-07-25 00:34:41.117706] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:08:18.466 [2024-07-25 00:34:41.117789] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 676:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:08:18.466 [2024-07-25 00:34:41.118157] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 682:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:08:18.466 [2024-07-25 00:34:41.118511] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 688:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:08:18.466 [2024-07-25 00:34:41.118624] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 695:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:08:18.466 [2024-07-25 00:34:41.118702] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 719:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:08:18.466 [2024-07-25 00:34:41.118820] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 294:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 (cntlid:0) 00:08:18.466 [2024-07-25 00:34:41.118997] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group (nil)) 00:08:18.466 [2024-07-25 00:34:41.119065] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group (nil)) 00:08:18.725 passed 00:08:18.725 Test: test_get_ns_id_desc_list ...passed 00:08:18.725 Test: test_identify_ns ...[2024-07-25 00:34:41.119367] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:18.725 [2024-07-25 00:34:41.119685] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:08:18.725 [2024-07-25 00:34:41.119819] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:08:18.725 passed 00:08:18.725 Test: test_identify_ns_iocs_specific ...[2024-07-25 00:34:41.119993] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:18.725 
[2024-07-25 00:34:41.120287] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:18.725 passed 00:08:18.725 Test: test_reservation_write_exclusive ...passed 00:08:18.725 Test: test_reservation_exclusive_access ...passed 00:08:18.725 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:08:18.725 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:08:18.725 Test: test_reservation_notification_log_page ...passed 00:08:18.725 Test: test_get_dif_ctx ...passed 00:08:18.725 Test: test_set_get_features ...[2024-07-25 00:34:41.120924] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:08:18.725 passed 00:08:18.725 Test: test_identify_ctrlr ...passed 00:08:18.725 Test: test_identify_ctrlr_iocs_specific ...[2024-07-25 00:34:41.121014] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:08:18.725 [2024-07-25 00:34:41.121056] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1659:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:08:18.725 [2024-07-25 00:34:41.121092] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1735:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:08:18.725 passed 00:08:18.725 Test: test_custom_admin_cmd ...passed 00:08:18.725 Test: test_fused_compare_and_write ...[2024-07-25 00:34:41.121585] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4249:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:08:18.725 [2024-07-25 00:34:41.121641] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4238:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:08:18.725 [2024-07-25 00:34:41.121703] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4256:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:08:18.725 passed 00:08:18.725 Test: test_multi_async_event_reqs ...passed 00:08:18.725 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:08:18.725 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:08:18.725 Test: test_multi_async_events ...passed 00:08:18.725 Test: test_rae ...passed 00:08:18.725 Test: test_nvmf_ctrlr_create_destruct ...passed 00:08:18.725 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:08:18.725 Test: test_spdk_nvmf_request_zcopy_start ...[2024-07-25 00:34:41.122363] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4741:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT 00:08:18.725 passed 00:08:18.725 Test: test_zcopy_read ...passed 00:08:18.725 Test: test_zcopy_write ...[2024-07-25 00:34:41.122433] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4767:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4 00:08:18.725 passed 00:08:18.725 Test: test_nvmf_property_set ...passed 00:08:18.725 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-07-25 00:34:41.122591] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:08:18.725 passed 00:08:18.725 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-07-25 00:34:41.122636] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:08:18.725 [2024-07-25 00:34:41.122691] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1970:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:08:18.725 [2024-07-25 00:34:41.122740] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1976:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:08:18.725 [2024-07-25 00:34:41.122815] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1988:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:08:18.725 [2024-07-25 00:34:41.122855] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1988:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:08:18.725 passed 00:08:18.725 Test: test_nvmf_ctrlr_ns_attachment ...passed 00:08:18.725 Test: test_nvmf_check_qpair_active ...[2024-07-25 00:34:41.122993] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4741:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT 00:08:18.725 [2024-07-25 00:34:41.123036] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4755:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication 00:08:18.725 passed 00:08:18.725 00:08:18.725 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.725 suites 1 1 n/a 0 0 00:08:18.725 tests 32 32 32 0 0 00:08:18.725 asserts 983 983 983 0 n/a 00:08:18.725 00:08:18.725 Elapsed time = 0.007 seconds 00:08:18.725 [2024-07-25 00:34:41.123080] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4767:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 0 00:08:18.725 [2024-07-25 00:34:41.123121] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4767:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4 00:08:18.725 [2024-07-25 00:34:41.123176] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4767:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5 00:08:18.725 00:34:41 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:08:18.725 00:08:18.725 00:08:18.725 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.725 http://cunit.sourceforge.net/ 00:08:18.725 00:08:18.725 00:08:18.725 Suite: nvmf 00:08:18.725 Test: test_get_rw_params ...passed 00:08:18.725 Test: test_get_rw_ext_params ...passed 00:08:18.725 Test: test_lba_in_range ...passed 00:08:18.725 Test: test_get_dif_ctx ...passed 00:08:18.726 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:08:18.726 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-07-25 00:34:41.172257] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 447:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:08:18.726 [2024-07-25 00:34:41.172646] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 455:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:08:18.726 [2024-07-25 00:34:41.172769] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 462:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:08:18.726 passed 00:08:18.726 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-07-25 00:34:41.172855] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 965:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:08:18.726 [2024-07-25 00:34:41.172960] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 972:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:08:18.726 passed 
00:08:18.726 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-07-25 00:34:41.173100] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 401:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:08:18.726 [2024-07-25 00:34:41.173163] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 408:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:08:18.726 [2024-07-25 00:34:41.173297] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 500:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:08:18.726 [2024-07-25 00:34:41.173360] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 507:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:08:18.726 passed 00:08:18.726 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:08:18.726 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:08:18.726 00:08:18.726 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.726 suites 1 1 n/a 0 0 00:08:18.726 tests 10 10 10 0 0 00:08:18.726 asserts 159 159 159 0 n/a 00:08:18.726 00:08:18.726 Elapsed time = 0.001 seconds 00:08:18.726 00:34:41 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:08:18.726 00:08:18.726 00:08:18.726 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.726 http://cunit.sourceforge.net/ 00:08:18.726 00:08:18.726 00:08:18.726 Suite: nvmf 00:08:18.726 Test: test_discovery_log ...passed 00:08:18.726 Test: test_discovery_log_with_filters ...passed 00:08:18.726 00:08:18.726 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.726 suites 1 1 n/a 0 0 00:08:18.726 tests 2 2 2 0 0 00:08:18.726 asserts 238 238 238 0 n/a 00:08:18.726 00:08:18.726 Elapsed time = 0.003 seconds 00:08:18.726 00:34:41 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:08:18.726 00:08:18.726 00:08:18.726 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.726 http://cunit.sourceforge.net/ 00:08:18.726 00:08:18.726 00:08:18.726 Suite: nvmf 00:08:18.726 Test: nvmf_test_create_subsystem ...[2024-07-25 00:34:41.266582] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:08:18.726 [2024-07-25 00:34:41.266893] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid 00:08:18.726 [2024-07-25 00:34:41.267076] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:08:18.726 [2024-07-25 00:34:41.267192] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid 00:08:18.726 [2024-07-25 00:34:41.267236] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 
00:08:18.726 [2024-07-25 00:34:41.267288] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid 00:08:18.726 [2024-07-25 00:34:41.267378] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:08:18.726 [2024-07-25 00:34:41.267442] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid 00:08:18.726 [2024-07-25 00:34:41.267484] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:08:18.726 [2024-07-25 00:34:41.267533] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid 00:08:18.726 [2024-07-25 00:34:41.267574] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:08:18.726 [2024-07-25 00:34:41.267622] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid 00:08:18.726 [2024-07-25 00:34:41.267753] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:08:18.726 [2024-07-25 00:34:41.267870] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid 00:08:18.726 [2024-07-25 00:34:41.268000] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
00:08:18.726 [2024-07-25 00:34:41.268060] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid 00:08:18.726 [2024-07-25 00:34:41.268178] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:08:18.726 [2024-07-25 00:34:41.268228] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid 00:08:18.726 [2024-07-25 00:34:41.268275] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:08:18.726 [2024-07-25 00:34:41.268343] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid 00:08:18.726 [2024-07-25 00:34:41.268398] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:08:18.726 [2024-07-25 00:34:41.268440] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid 00:08:18.726 passed 00:08:18.726 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-07-25 00:34:41.268652] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2075:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:08:18.726 [2024-07-25 00:34:41.268699] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2048:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:08:18.726 passed 00:08:18.726 Test: test_spdk_nvmf_subsystem_add_fdp_ns ...[2024-07-25 00:34:41.268992] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2178:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace. 
00:08:18.726 passed 00:08:18.726 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:08:18.726 Test: test_spdk_nvmf_ns_visible ...[2024-07-25 00:34:41.269242] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:08:18.726 passed 00:08:18.726 Test: test_reservation_register ...[2024-07-25 00:34:41.269758] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3123:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:18.726 [2024-07-25 00:34:41.269923] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3181:nvmf_ns_reservation_register: *ERROR*: No registrant 00:08:18.726 passed 00:08:18.726 Test: test_reservation_register_with_ptpl ...passed 00:08:18.726 Test: test_reservation_acquire_preempt_1 ...[2024-07-25 00:34:41.271087] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3123:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:18.726 passed 00:08:18.726 Test: test_reservation_acquire_release_with_ptpl ...passed 00:08:18.726 Test: test_reservation_release ...[2024-07-25 00:34:41.272791] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3123:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:18.726 passed 00:08:18.726 Test: test_reservation_unregister_notification ...[2024-07-25 00:34:41.273090] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3123:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:18.726 passed 00:08:18.726 Test: test_reservation_release_notification ...[2024-07-25 00:34:41.273343] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3123:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:18.726 passed 00:08:18.726 Test: test_reservation_release_notification_write_exclusive ...[2024-07-25 00:34:41.273593] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3123:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:18.726 passed 00:08:18.726 Test: test_reservation_clear_notification ...[2024-07-25 00:34:41.273840] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3123:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:18.726 passed 00:08:18.726 Test: test_reservation_preempt_notification ...[2024-07-25 00:34:41.274114] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3123:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:18.726 passed 00:08:18.726 Test: test_spdk_nvmf_ns_event ...passed 00:08:18.726 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:08:18.726 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:08:18.726 Test: test_spdk_nvmf_subsystem_add_host ...[2024-07-25 00:34:41.275072] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 264:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:08:18.727 [2024-07-25 00:34:41.275173] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1058:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport 00:08:18.727 passed 00:08:18.727 Test: test_nvmf_ns_reservation_report ...[2024-07-25 00:34:41.275307] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3486:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:08:18.727 passed 00:08:18.727 Test: test_nvmf_nqn_is_valid ...[2024-07-25 
00:34:41.275388] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:08:18.727 [2024-07-25 00:34:41.275451] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:bf505c3d-77c9-4255-996f-7257c7fabe8": uuid is not the correct length 00:08:18.727 passed 00:08:18.727 Test: test_nvmf_ns_reservation_restore ...[2024-07-25 00:34:41.275498] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:08:18.727 passed 00:08:18.727 Test: test_nvmf_subsystem_state_change ...[2024-07-25 00:34:41.275627] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2680:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:08:18.727 passed 00:08:18.727 Test: test_nvmf_reservation_custom_ops ...passed 00:08:18.727 00:08:18.727 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.727 suites 1 1 n/a 0 0 00:08:18.727 tests 24 24 24 0 0 00:08:18.727 asserts 499 499 499 0 n/a 00:08:18.727 00:08:18.727 Elapsed time = 0.010 seconds 00:08:18.727 00:34:41 unittest.unittest_nvmf -- unit/unittest.sh@112 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:08:18.727 00:08:18.727 00:08:18.727 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.727 http://cunit.sourceforge.net/ 00:08:18.727 00:08:18.727 00:08:18.727 Suite: nvmf 00:08:18.727 Test: test_nvmf_tcp_create ...[2024-07-25 00:34:41.369194] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 750:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:08:18.727 passed 00:08:18.985 Test: test_nvmf_tcp_destroy ...passed 00:08:18.985 Test: test_nvmf_tcp_poll_group_create ...passed 00:08:18.985 Test: test_nvmf_tcp_send_c2h_data ...passed 00:08:18.985 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:08:18.985 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:08:18.985 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:08:18.985 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-07-25 00:34:41.492860] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:18.985 [2024-07-25 00:34:41.492959] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff52935de0 is same with the state(5) to be set 00:08:18.985 [2024-07-25 00:34:41.493078] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff52935de0 is same with the state(5) to be set 00:08:18.985 passed 00:08:18.985 Test: test_nvmf_tcp_send_capsule_resp_pdu ...[2024-07-25 00:34:41.493160] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:18.985 [2024-07-25 00:34:41.493225] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff52935de0 is same with the state(5) to be set 00:08:18.985 passed 00:08:18.985 Test: test_nvmf_tcp_icreq_handle ...[2024-07-25 00:34:41.493357] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2168:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:08:18.985 [2024-07-25 00:34:41.493504] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, 
errno=2 00:08:18.985 [2024-07-25 00:34:41.493621] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff52935de0 is same with the state(5) to be set 00:08:18.985 [2024-07-25 00:34:41.493697] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2168:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:08:18.985 [2024-07-25 00:34:41.493770] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff52935de0 is same with the state(5) to be set 00:08:18.985 [2024-07-25 00:34:41.493831] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:18.985 [2024-07-25 00:34:41.493902] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff52935de0 is same with the state(5) to be set 00:08:18.985 passed 00:08:18.985 Test: test_nvmf_tcp_check_xfer_type ...[2024-07-25 00:34:41.493972] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:08:18.985 [2024-07-25 00:34:41.494082] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff52935de0 is same with the state(5) to be set 00:08:18.985 passed 00:08:18.985 Test: test_nvmf_tcp_invalid_sgl ...[2024-07-25 00:34:41.494216] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2563:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:08:18.985 passed 00:08:18.985 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-07-25 00:34:41.494341] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:18.985 [2024-07-25 00:34:41.494411] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff52935de0 is same with the state(5) to be set 00:08:18.985 [2024-07-25 00:34:41.494495] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2295:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7fff52936b40 00:08:18.985 [2024-07-25 00:34:41.494625] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:18.985 [2024-07-25 00:34:41.494715] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff529362a0 is same with the state(5) to be set 00:08:18.985 [2024-07-25 00:34:41.494801] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2352:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7fff529362a0 00:08:18.985 [2024-07-25 00:34:41.494863] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:18.985 [2024-07-25 00:34:41.494925] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff529362a0 is same with the state(5) to be set 00:08:18.985 [2024-07-25 00:34:41.494993] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2305:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:08:18.985 [2024-07-25 00:34:41.495088] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:18.985 [2024-07-25 00:34:41.495179] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff529362a0 is same with the state(5) to be set 00:08:18.985 [2024-07-25 00:34:41.495269] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2344:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:08:18.985 [2024-07-25 00:34:41.495343] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:18.985 [2024-07-25 00:34:41.495418] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff529362a0 is same with the state(5) to be set 00:08:18.985 [2024-07-25 00:34:41.495479] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:18.985 [2024-07-25 00:34:41.495554] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff529362a0 is same with the state(5) to be set 00:08:18.985 [2024-07-25 00:34:41.495653] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:18.985 [2024-07-25 00:34:41.495712] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff529362a0 is same with the state(5) to be set 00:08:18.985 [2024-07-25 00:34:41.495811] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:18.985 [2024-07-25 00:34:41.495885] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff529362a0 is same with the state(5) to be set 00:08:18.985 [2024-07-25 00:34:41.495976] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:18.985 [2024-07-25 00:34:41.496051] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff529362a0 is same with the state(5) to be set 00:08:18.985 [2024-07-25 00:34:41.496158] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:18.985 [2024-07-25 00:34:41.496224] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff529362a0 is same with the state(5) to be set 00:08:18.985 [2024-07-25 00:34:41.496313] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:18.985 passed 00:08:18.985 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-07-25 00:34:41.496379] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff529362a0 is same with the state(5) to be set 00:08:18.985 passed 00:08:18.985 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-07-25 00:34:41.522903] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:08:18.985 passed 00:08:18.985 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-07-25 00:34:41.523009] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 
00:08:18.985 [2024-07-25 00:34:41.523652] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:08:18.985 passed 00:08:18.985 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-07-25 00:34:41.523736] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:08:18.985 [2024-07-25 00:34:41.524180] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:08:18.985 passed 00:08:18.985 00:08:18.985 [2024-07-25 00:34:41.524250] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:08:18.985 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.986 suites 1 1 n/a 0 0 00:08:18.986 tests 17 17 17 0 0 00:08:18.986 asserts 222 222 222 0 n/a 00:08:18.986 00:08:18.986 Elapsed time = 0.188 seconds 00:08:18.986 00:34:41 unittest.unittest_nvmf -- unit/unittest.sh@113 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:08:18.986 00:08:18.986 00:08:18.986 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.986 http://cunit.sourceforge.net/ 00:08:18.986 00:08:18.986 00:08:18.986 Suite: nvmf 00:08:19.244 Test: test_nvmf_tgt_create_poll_group ...passed 00:08:19.244 00:08:19.244 Run Summary: Type Total Ran Passed Failed Inactive 00:08:19.244 suites 1 1 n/a 0 0 00:08:19.244 tests 1 1 1 0 0 00:08:19.244 asserts 17 17 17 0 n/a 00:08:19.244 00:08:19.244 Elapsed time = 0.029 seconds 00:08:19.244 00:08:19.244 real 0m0.652s 00:08:19.244 user 0m0.299s 00:08:19.244 sys 0m0.352s 00:08:19.244 00:34:41 unittest.unittest_nvmf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.244 00:34:41 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x 00:08:19.244 ************************************ 00:08:19.244 END TEST unittest_nvmf 00:08:19.244 ************************************ 00:08:19.244 00:34:41 unittest -- unit/unittest.sh@262 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:19.244 00:34:41 unittest -- unit/unittest.sh@267 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:19.244 00:34:41 unittest -- unit/unittest.sh@268 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:08:19.244 00:34:41 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:19.244 00:34:41 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.244 00:34:41 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:19.244 ************************************ 00:08:19.244 START TEST unittest_nvmf_rdma 00:08:19.244 ************************************ 00:08:19.244 00:34:41 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:08:19.244 00:08:19.244 00:08:19.244 CUnit - A unit testing framework for C - Version 2.1-3 00:08:19.244 http://cunit.sourceforge.net/ 00:08:19.244 00:08:19.244 00:08:19.244 Suite: nvmf 00:08:19.244 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-07-25 00:34:41.844028] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1863:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:08:19.244 [2024-07-25 00:34:41.844353] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1913:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:08:19.244 [2024-07-25 00:34:41.844407] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1913:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:08:19.244 passed 00:08:19.244 Test: test_spdk_nvmf_rdma_request_process ...passed 00:08:19.244 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:08:19.244 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:08:19.244 Test: test_nvmf_rdma_opts_init ...passed 00:08:19.244 Test: test_nvmf_rdma_request_free_data ...passed 00:08:19.244 Test: test_nvmf_rdma_resources_create ...passed 00:08:19.244 Test: test_nvmf_rdma_qpair_compare ...passed 00:08:19.244 Test: test_nvmf_rdma_resize_cq ...[2024-07-25 00:34:41.846915] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 954:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:08:19.244 Using CQ of insufficient size may lead to CQ overrun 00:08:19.244 [2024-07-25 00:34:41.847020] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 959:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:08:19.244 passed 00:08:19.244 00:08:19.244 [2024-07-25 00:34:41.847074] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 967:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:08:19.244 Run Summary: Type Total Ran Passed Failed Inactive 00:08:19.244 suites 1 1 n/a 0 0 00:08:19.244 tests 9 9 9 0 0 00:08:19.244 asserts 579 579 579 0 n/a 00:08:19.244 00:08:19.244 Elapsed time = 0.003 seconds 00:08:19.244 00:08:19.244 real 0m0.053s 00:08:19.244 user 0m0.023s 00:08:19.244 sys 0m0.030s 00:08:19.244 00:34:41 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.244 00:34:41 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:19.244 ************************************ 00:08:19.244 END TEST unittest_nvmf_rdma 00:08:19.244 ************************************ 00:08:19.503 00:34:41 unittest -- unit/unittest.sh@271 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:19.503 00:34:41 unittest -- unit/unittest.sh@275 -- # run_test unittest_scsi unittest_scsi 00:08:19.503 00:34:41 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:19.503 00:34:41 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.503 00:34:41 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:19.503 ************************************ 00:08:19.503 START TEST unittest_scsi 00:08:19.503 ************************************ 00:08:19.503 00:34:41 unittest.unittest_scsi -- common/autotest_common.sh@1123 -- # unittest_scsi 00:08:19.503 00:34:41 unittest.unittest_scsi -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:08:19.503 00:08:19.503 00:08:19.503 CUnit - A unit testing framework for C - Version 2.1-3 00:08:19.503 http://cunit.sourceforge.net/ 00:08:19.503 00:08:19.503 00:08:19.503 Suite: dev_suite 00:08:19.503 Test: dev_destruct_null_dev ...passed 00:08:19.503 Test: dev_destruct_zero_luns ...passed 00:08:19.503 Test: dev_destruct_null_lun ...passed 00:08:19.503 Test: dev_destruct_success ...passed 00:08:19.503 Test: dev_construct_num_luns_zero ...[2024-07-25 00:34:41.962169] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: 
*ERROR*: device Name: no LUNs specified 00:08:19.503 passed 00:08:19.503 Test: dev_construct_no_lun_zero ...[2024-07-25 00:34:41.962630] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:08:19.503 passed 00:08:19.503 Test: dev_construct_null_lun ...passed 00:08:19.503 Test: dev_construct_name_too_long ...[2024-07-25 00:34:41.962701] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:08:19.503 [2024-07-25 00:34:41.962769] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:08:19.503 passed 00:08:19.503 Test: dev_construct_success ...passed 00:08:19.503 Test: dev_construct_success_lun_zero_not_first ...passed 00:08:19.503 Test: dev_queue_mgmt_task_success ...passed 00:08:19.503 Test: dev_queue_task_success ...passed 00:08:19.503 Test: dev_stop_success ...passed 00:08:19.503 Test: dev_add_port_max_ports ...[2024-07-25 00:34:41.963115] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:08:19.503 passed 00:08:19.503 Test: dev_add_port_construct_failure1 ...[2024-07-25 00:34:41.963229] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:08:19.503 passed 00:08:19.503 Test: dev_add_port_construct_failure2 ...[2024-07-25 00:34:41.963323] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:08:19.503 passed 00:08:19.503 Test: dev_add_port_success1 ...passed 00:08:19.503 Test: dev_add_port_success2 ...passed 00:08:19.503 Test: dev_add_port_success3 ...passed 00:08:19.503 Test: dev_find_port_by_id_num_ports_zero ...passed 00:08:19.503 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:08:19.503 Test: dev_find_port_by_id_success ...passed 00:08:19.503 Test: dev_add_lun_bdev_not_found ...passed 00:08:19.503 Test: dev_add_lun_no_free_lun_id ...[2024-07-25 00:34:41.963770] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:08:19.503 passed 00:08:19.503 Test: dev_add_lun_success1 ...passed 00:08:19.503 Test: dev_add_lun_success2 ...passed 00:08:19.503 Test: dev_check_pending_tasks ...passed 00:08:19.503 Test: dev_iterate_luns ...passed 00:08:19.503 Test: dev_find_free_lun ...passed 00:08:19.503 00:08:19.503 Run Summary: Type Total Ran Passed Failed Inactive 00:08:19.503 suites 1 1 n/a 0 0 00:08:19.503 tests 29 29 29 0 0 00:08:19.503 asserts 97 97 97 0 n/a 00:08:19.503 00:08:19.503 Elapsed time = 0.002 seconds 00:08:19.503 00:34:41 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:08:19.503 00:08:19.503 00:08:19.503 CUnit - A unit testing framework for C - Version 2.1-3 00:08:19.503 http://cunit.sourceforge.net/ 00:08:19.503 00:08:19.503 00:08:19.503 Suite: lun_suite 00:08:19.503 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-07-25 00:34:42.015743] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:08:19.503 passed 00:08:19.503 Test: 
lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-07-25 00:34:42.016164] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:08:19.503 passed 00:08:19.503 Test: lun_task_mgmt_execute_lun_reset ...passed 00:08:19.503 Test: lun_task_mgmt_execute_target_reset ...passed 00:08:19.503 Test: lun_task_mgmt_execute_invalid_case ...passed 00:08:19.503 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:08:19.503 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed[2024-07-25 00:34:42.016389] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:08:19.503 00:08:19.503 Test: lun_append_task_null_lun_not_supported ...passed 00:08:19.503 Test: lun_execute_scsi_task_pending ...passed 00:08:19.503 Test: lun_execute_scsi_task_complete ...passed 00:08:19.503 Test: lun_execute_scsi_task_resize ...passed 00:08:19.503 Test: lun_destruct_success ...passed 00:08:19.503 Test: lun_construct_null_ctx ...[2024-07-25 00:34:42.016656] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:08:19.503 passed 00:08:19.503 Test: lun_construct_success ...passed 00:08:19.503 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:08:19.503 Test: lun_reset_task_suspend_scsi_task ...passed 00:08:19.503 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:08:19.503 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:08:19.503 00:08:19.503 Run Summary: Type Total Ran Passed Failed Inactive 00:08:19.503 suites 1 1 n/a 0 0 00:08:19.503 tests 18 18 18 0 0 00:08:19.503 asserts 153 153 153 0 n/a 00:08:19.503 00:08:19.503 Elapsed time = 0.001 seconds 00:08:19.503 00:34:42 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:08:19.503 00:08:19.503 00:08:19.503 CUnit - A unit testing framework for C - Version 2.1-3 00:08:19.503 http://cunit.sourceforge.net/ 00:08:19.503 00:08:19.503 00:08:19.503 Suite: scsi_suite 00:08:19.503 Test: scsi_init ...passed 00:08:19.503 00:08:19.503 Run Summary: Type Total Ran Passed Failed Inactive 00:08:19.503 suites 1 1 n/a 0 0 00:08:19.503 tests 1 1 1 0 0 00:08:19.503 asserts 1 1 1 0 n/a 00:08:19.503 00:08:19.503 Elapsed time = 0.000 seconds 00:08:19.503 00:34:42 unittest.unittest_scsi -- unit/unittest.sh@120 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:08:19.503 00:08:19.503 00:08:19.503 CUnit - A unit testing framework for C - Version 2.1-3 00:08:19.503 http://cunit.sourceforge.net/ 00:08:19.503 00:08:19.503 00:08:19.503 Suite: translation_suite 00:08:19.503 Test: mode_select_6_test ...passed 00:08:19.503 Test: mode_select_6_test2 ...passed 00:08:19.503 Test: mode_sense_6_test ...passed 00:08:19.503 Test: mode_sense_10_test ...passed 00:08:19.503 Test: inquiry_evpd_test ...passed 00:08:19.503 Test: inquiry_standard_test ...passed 00:08:19.503 Test: inquiry_overflow_test ...passed 00:08:19.503 Test: task_complete_test ...passed 00:08:19.503 Test: lba_range_test ...passed 00:08:19.503 Test: xfer_len_test ...[2024-07-25 00:34:42.104308] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:08:19.503 passed 00:08:19.503 Test: xfer_test ...passed 00:08:19.503 Test: scsi_name_padding_test ...passed 00:08:19.503 Test: get_dif_ctx_test ...passed 00:08:19.503 Test: unmap_split_test ...passed 
00:08:19.503 00:08:19.503 Run Summary: Type Total Ran Passed Failed Inactive 00:08:19.503 suites 1 1 n/a 0 0 00:08:19.503 tests 14 14 14 0 0 00:08:19.503 asserts 1205 1205 1205 0 n/a 00:08:19.503 00:08:19.503 Elapsed time = 0.004 seconds 00:08:19.503 00:34:42 unittest.unittest_scsi -- unit/unittest.sh@121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:08:19.503 00:08:19.503 00:08:19.503 CUnit - A unit testing framework for C - Version 2.1-3 00:08:19.503 http://cunit.sourceforge.net/ 00:08:19.503 00:08:19.503 00:08:19.503 Suite: reservation_suite 00:08:19.503 Test: test_reservation_register ...[2024-07-25 00:34:42.148407] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:19.503 passed 00:08:19.504 Test: test_reservation_reserve ...[2024-07-25 00:34:42.149240] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:19.504 [2024-07-25 00:34:42.149546] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 215:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:08:19.504 [2024-07-25 00:34:42.149835] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 210:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:08:19.504 passed 00:08:19.504 Test: test_all_registrant_reservation_reserve ...[2024-07-25 00:34:42.149926] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:19.504 passed 00:08:19.504 Test: test_all_registrant_reservation_access ...[2024-07-25 00:34:42.150495] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:19.504 [2024-07-25 00:34:42.150586] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 865:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0x8 00:08:19.504 [2024-07-25 00:34:42.151010] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 865:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0xaa 00:08:19.504 passed 00:08:19.504 Test: test_reservation_preempt_non_all_regs ...[2024-07-25 00:34:42.151100] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:19.504 [2024-07-25 00:34:42.151506] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 464:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:08:19.504 passed 00:08:19.504 Test: test_reservation_preempt_all_regs ...[2024-07-25 00:34:42.151852] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:19.504 passed 00:08:19.504 Test: test_reservation_cmds_conflict ...[2024-07-25 00:34:42.152209] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:19.504 [2024-07-25 00:34:42.152299] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 857:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:08:19.504 [2024-07-25 00:34:42.152613] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:08:19.504 [2024-07-25 00:34:42.152656] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: 
CHECK: Exclusive Access reservation type rejects command 0x2a 00:08:19.504 [2024-07-25 00:34:42.153023] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:08:19.504 [2024-07-25 00:34:42.153075] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:08:19.504 passed 00:08:19.504 Test: test_scsi2_reserve_release ...passed 00:08:19.504 Test: test_pr_with_scsi2_reserve_release ...[2024-07-25 00:34:42.153442] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:19.504 passed 00:08:19.504 00:08:19.504 Run Summary: Type Total Ran Passed Failed Inactive 00:08:19.504 suites 1 1 n/a 0 0 00:08:19.504 tests 9 9 9 0 0 00:08:19.504 asserts 344 344 344 0 n/a 00:08:19.504 00:08:19.504 Elapsed time = 0.005 seconds 00:08:19.762 00:08:19.762 real 0m0.235s 00:08:19.762 user 0m0.094s 00:08:19.762 sys 0m0.143s 00:08:19.762 00:34:42 unittest.unittest_scsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.762 00:34:42 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x 00:08:19.762 ************************************ 00:08:19.762 END TEST unittest_scsi 00:08:19.762 ************************************ 00:08:19.762 00:34:42 unittest -- unit/unittest.sh@278 -- # uname -s 00:08:19.762 00:34:42 unittest -- unit/unittest.sh@278 -- # '[' Linux = Linux ']' 00:08:19.762 00:34:42 unittest -- unit/unittest.sh@279 -- # run_test unittest_sock unittest_sock 00:08:19.762 00:34:42 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:19.762 00:34:42 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.762 00:34:42 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:19.762 ************************************ 00:08:19.762 START TEST unittest_sock 00:08:19.762 ************************************ 00:08:19.762 00:34:42 unittest.unittest_sock -- common/autotest_common.sh@1123 -- # unittest_sock 00:08:19.762 00:34:42 unittest.unittest_sock -- unit/unittest.sh@125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:08:19.762 00:08:19.762 00:08:19.762 CUnit - A unit testing framework for C - Version 2.1-3 00:08:19.762 http://cunit.sourceforge.net/ 00:08:19.762 00:08:19.762 00:08:19.762 Suite: sock 00:08:19.762 Test: posix_sock ...passed 00:08:19.762 Test: ut_sock ...passed 00:08:19.762 Test: posix_sock_group ...passed 00:08:19.762 Test: ut_sock_group ...passed 00:08:19.762 Test: posix_sock_group_fairness ...passed 00:08:19.762 Test: _posix_sock_close ...passed 00:08:19.762 Test: sock_get_default_opts ...passed 00:08:19.762 Test: ut_sock_impl_get_set_opts ...passed 00:08:19.762 Test: posix_sock_impl_get_set_opts ...passed 00:08:19.762 Test: ut_sock_map ...passed 00:08:19.762 Test: override_impl_opts ...passed 00:08:19.762 Test: ut_sock_group_get_ctx ...passed 00:08:19.762 Test: posix_get_interface_name ...passed 00:08:19.762 00:08:19.762 Run Summary: Type Total Ran Passed Failed Inactive 00:08:19.762 suites 1 1 n/a 0 0 00:08:19.762 tests 13 13 13 0 0 00:08:19.762 asserts 360 360 360 0 n/a 00:08:19.762 00:08:19.762 Elapsed time = 0.010 seconds 00:08:19.762 00:34:42 unittest.unittest_sock -- unit/unittest.sh@126 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:08:19.762 00:08:19.762 00:08:19.762 CUnit - A unit testing framework for C - Version 2.1-3 00:08:19.762 
http://cunit.sourceforge.net/ 00:08:19.762 00:08:19.762 00:08:19.762 Suite: posix 00:08:19.762 Test: flush ...passed 00:08:19.762 00:08:19.762 Run Summary: Type Total Ran Passed Failed Inactive 00:08:19.762 suites 1 1 n/a 0 0 00:08:19.762 tests 1 1 1 0 0 00:08:19.762 asserts 28 28 28 0 n/a 00:08:19.762 00:08:19.762 Elapsed time = 0.000 seconds 00:08:19.762 00:34:42 unittest.unittest_sock -- unit/unittest.sh@128 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:19.762 00:08:19.762 real 0m0.139s 00:08:19.762 user 0m0.059s 00:08:19.762 sys 0m0.059s 00:08:19.762 00:34:42 unittest.unittest_sock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.762 00:34:42 unittest.unittest_sock -- common/autotest_common.sh@10 -- # set +x 00:08:19.762 ************************************ 00:08:19.762 END TEST unittest_sock 00:08:19.763 ************************************ 00:08:20.021 00:34:42 unittest -- unit/unittest.sh@281 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:08:20.021 00:34:42 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:20.021 00:34:42 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.021 00:34:42 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:20.021 ************************************ 00:08:20.021 START TEST unittest_thread 00:08:20.021 ************************************ 00:08:20.021 00:34:42 unittest.unittest_thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:08:20.021 00:08:20.021 00:08:20.021 CUnit - A unit testing framework for C - Version 2.1-3 00:08:20.021 http://cunit.sourceforge.net/ 00:08:20.021 00:08:20.021 00:08:20.021 Suite: io_channel 00:08:20.021 Test: thread_alloc ...passed 00:08:20.021 Test: thread_send_msg ...passed 00:08:20.021 Test: thread_poller ...passed 00:08:20.021 Test: poller_pause ...passed 00:08:20.021 Test: thread_for_each ...passed 00:08:20.021 Test: for_each_channel_remove ...passed 00:08:20.021 Test: for_each_channel_unreg ...[2024-07-25 00:34:42.490274] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2177:spdk_io_device_register: *ERROR*: io_device 0x7fffa8258120 already registered (old:0x613000000200 new:0x6130000003c0) 00:08:20.021 passed 00:08:20.021 Test: thread_name ...passed 00:08:20.021 Test: channel ...[2024-07-25 00:34:42.495044] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2311:spdk_get_io_channel: *ERROR*: could not find io_device 0x555feb296180 00:08:20.021 passed 00:08:20.021 Test: channel_destroy_races ...passed 00:08:20.021 Test: thread_exit_test ...[2024-07-25 00:34:42.500550] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 639:thread_exit: *ERROR*: thread 0x619000007380 got timeout, and move it to the exited state forcefully 00:08:20.021 passed 00:08:20.021 Test: thread_update_stats_test ...passed 00:08:20.021 Test: nested_channel ...passed 00:08:20.021 Test: device_unregister_and_thread_exit_race ...passed 00:08:20.021 Test: cache_closest_timed_poller ...passed 00:08:20.021 Test: multi_timed_pollers_have_same_expiration ...passed 00:08:20.021 Test: io_device_lookup ...passed 00:08:20.021 Test: spdk_spin ...[2024-07-25 00:34:42.512028] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:08:20.021 [2024-07-25 00:34:42.512225] 
/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7fffa8258110 00:08:20.021 [2024-07-25 00:34:42.512442] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3120:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:08:20.021 [2024-07-25 00:34:42.514330] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:08:20.021 [2024-07-25 00:34:42.514513] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7fffa8258110 00:08:20.021 [2024-07-25 00:34:42.514664] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:08:20.021 [2024-07-25 00:34:42.514826] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7fffa8258110 00:08:20.021 [2024-07-25 00:34:42.514961] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:08:20.021 [2024-07-25 00:34:42.515103] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7fffa8258110 00:08:20.021 [2024-07-25 00:34:42.515232] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:08:20.021 [2024-07-25 00:34:42.515404] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7fffa8258110 00:08:20.021 passed 00:08:20.021 Test: for_each_channel_and_thread_exit_race ...passed 00:08:20.021 Test: for_each_thread_and_thread_exit_race ...passed 00:08:20.021 00:08:20.021 Run Summary: Type Total Ran Passed Failed Inactive 00:08:20.021 suites 1 1 n/a 0 0 00:08:20.021 tests 20 20 20 0 0 00:08:20.021 asserts 409 409 409 0 n/a 00:08:20.021 00:08:20.021 Elapsed time = 0.052 seconds 00:08:20.021 00:08:20.021 real 0m0.101s 00:08:20.021 user 0m0.068s 00:08:20.021 sys 0m0.032s 00:08:20.021 00:34:42 unittest.unittest_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:20.021 00:34:42 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x 00:08:20.021 ************************************ 00:08:20.021 END TEST unittest_thread 00:08:20.021 ************************************ 00:08:20.021 00:34:42 unittest -- unit/unittest.sh@282 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:08:20.021 00:34:42 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:20.021 00:34:42 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.021 00:34:42 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:20.021 ************************************ 00:08:20.021 START TEST unittest_iobuf 00:08:20.021 ************************************ 00:08:20.021 00:34:42 unittest.unittest_iobuf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:08:20.021 00:08:20.021 00:08:20.021 CUnit - A unit testing framework for C - Version 2.1-3 00:08:20.021 http://cunit.sourceforge.net/ 00:08:20.021 00:08:20.021 00:08:20.021 Suite: io_channel 00:08:20.021 Test: iobuf ...passed 00:08:20.021 Test: iobuf_cache ...[2024-07-25 00:34:42.638692] 
/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 360:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:08:20.021 [2024-07-25 00:34:42.639455] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 363:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:08:20.021 [2024-07-25 00:34:42.639772] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 372:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:08:20.021 [2024-07-25 00:34:42.639939] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 375:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:08:20.021 [2024-07-25 00:34:42.640135] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 360:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:08:20.022 [2024-07-25 00:34:42.640304] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 363:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:08:20.022 passed 00:08:20.022 Test: iobuf_priority ...passed 00:08:20.022 00:08:20.022 Run Summary: Type Total Ran Passed Failed Inactive 00:08:20.022 suites 1 1 n/a 0 0 00:08:20.022 tests 3 3 3 0 0 00:08:20.022 asserts 131 131 131 0 n/a 00:08:20.022 00:08:20.022 Elapsed time = 0.008 seconds 00:08:20.022 00:08:20.022 real 0m0.053s 00:08:20.022 user 0m0.023s 00:08:20.022 sys 0m0.029s 00:08:20.022 00:34:42 unittest.unittest_iobuf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:20.022 00:34:42 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x 00:08:20.022 ************************************ 00:08:20.022 END TEST unittest_iobuf 00:08:20.022 ************************************ 00:08:20.280 00:34:42 unittest -- unit/unittest.sh@283 -- # run_test unittest_util unittest_util 00:08:20.280 00:34:42 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:20.280 00:34:42 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.280 00:34:42 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:20.280 ************************************ 00:08:20.280 START TEST unittest_util 00:08:20.280 ************************************ 00:08:20.280 00:34:42 unittest.unittest_util -- common/autotest_common.sh@1123 -- # unittest_util 00:08:20.280 00:34:42 unittest.unittest_util -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:08:20.280 00:08:20.280 00:08:20.280 CUnit - A unit testing framework for C - Version 2.1-3 00:08:20.280 http://cunit.sourceforge.net/ 00:08:20.280 00:08:20.280 00:08:20.280 Suite: base64 00:08:20.280 Test: test_base64_get_encoded_strlen ...passed 00:08:20.280 Test: test_base64_get_decoded_len ...passed 00:08:20.280 Test: test_base64_encode ...passed 00:08:20.280 Test: test_base64_decode ...passed 00:08:20.280 Test: test_base64_urlsafe_encode ...passed 00:08:20.280 Test: test_base64_urlsafe_decode ...passed 00:08:20.280 00:08:20.280 Run Summary: Type Total Ran Passed Failed Inactive 00:08:20.280 suites 1 1 n/a 0 0 00:08:20.280 tests 6 6 6 0 0 00:08:20.280 asserts 112 112 112 0 n/a 00:08:20.280 00:08:20.280 Elapsed time = 0.000 seconds 00:08:20.280 00:34:42 
unittest.unittest_util -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:08:20.280 00:08:20.280 00:08:20.280 CUnit - A unit testing framework for C - Version 2.1-3 00:08:20.280 http://cunit.sourceforge.net/ 00:08:20.280 00:08:20.280 00:08:20.280 Suite: bit_array 00:08:20.280 Test: test_1bit ...passed 00:08:20.280 Test: test_64bit ...passed 00:08:20.280 Test: test_find ...passed 00:08:20.280 Test: test_resize ...passed 00:08:20.280 Test: test_errors ...passed 00:08:20.280 Test: test_count ...passed 00:08:20.280 Test: test_mask_store_load ...passed 00:08:20.280 Test: test_mask_clear ...passed 00:08:20.280 00:08:20.280 Run Summary: Type Total Ran Passed Failed Inactive 00:08:20.280 suites 1 1 n/a 0 0 00:08:20.280 tests 8 8 8 0 0 00:08:20.280 asserts 5075 5075 5075 0 n/a 00:08:20.280 00:08:20.280 Elapsed time = 0.002 seconds 00:08:20.280 00:34:42 unittest.unittest_util -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:08:20.280 00:08:20.280 00:08:20.280 CUnit - A unit testing framework for C - Version 2.1-3 00:08:20.280 http://cunit.sourceforge.net/ 00:08:20.280 00:08:20.280 00:08:20.280 Suite: cpuset 00:08:20.280 Test: test_cpuset ...passed 00:08:20.280 Test: test_cpuset_parse ...[2024-07-25 00:34:42.831926] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 256:parse_list: *ERROR*: Unexpected end of core list '[' 00:08:20.280 [2024-07-25 00:34:42.832709] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:08:20.280 [2024-07-25 00:34:42.832945] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:08:20.280 [2024-07-25 00:34:42.833165] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 236:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:08:20.280 [2024-07-25 00:34:42.833323] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:08:20.280 [2024-07-25 00:34:42.833489] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:08:20.280 [2024-07-25 00:34:42.833652] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 220:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:08:20.281 [2024-07-25 00:34:42.833818] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 215:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:08:20.281 passed 00:08:20.281 Test: test_cpuset_fmt ...passed 00:08:20.281 Test: test_cpuset_foreach ...passed 00:08:20.281 00:08:20.281 Run Summary: Type Total Ran Passed Failed Inactive 00:08:20.281 suites 1 1 n/a 0 0 00:08:20.281 tests 4 4 4 0 0 00:08:20.281 asserts 90 90 90 0 n/a 00:08:20.281 00:08:20.281 Elapsed time = 0.003 seconds 00:08:20.281 00:34:42 unittest.unittest_util -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:08:20.281 00:08:20.281 00:08:20.281 CUnit - A unit testing framework for C - Version 2.1-3 00:08:20.281 http://cunit.sourceforge.net/ 00:08:20.281 00:08:20.281 00:08:20.281 Suite: crc16 00:08:20.281 Test: test_crc16_t10dif ...passed 00:08:20.281 Test: test_crc16_t10dif_seed ...passed 00:08:20.281 Test: test_crc16_t10dif_copy ...passed 00:08:20.281 00:08:20.281 Run Summary: Type Total Ran Passed Failed Inactive 00:08:20.281 suites 1 1 n/a 0 0 00:08:20.281 tests 3 3 3 0 0 
00:08:20.281 asserts 5 5 5 0 n/a 00:08:20.281 00:08:20.281 Elapsed time = 0.000 seconds 00:08:20.281 00:34:42 unittest.unittest_util -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:08:20.281 00:08:20.281 00:08:20.281 CUnit - A unit testing framework for C - Version 2.1-3 00:08:20.281 http://cunit.sourceforge.net/ 00:08:20.281 00:08:20.281 00:08:20.281 Suite: crc32_ieee 00:08:20.281 Test: test_crc32_ieee ...passed 00:08:20.281 00:08:20.281 Run Summary: Type Total Ran Passed Failed Inactive 00:08:20.281 suites 1 1 n/a 0 0 00:08:20.281 tests 1 1 1 0 0 00:08:20.281 asserts 1 1 1 0 n/a 00:08:20.281 00:08:20.281 Elapsed time = 0.000 seconds 00:08:20.281 00:34:42 unittest.unittest_util -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:08:20.541 00:08:20.541 00:08:20.541 CUnit - A unit testing framework for C - Version 2.1-3 00:08:20.541 http://cunit.sourceforge.net/ 00:08:20.541 00:08:20.541 00:08:20.541 Suite: crc32c 00:08:20.541 Test: test_crc32c ...passed 00:08:20.541 Test: test_crc32c_nvme ...passed 00:08:20.541 00:08:20.541 Run Summary: Type Total Ran Passed Failed Inactive 00:08:20.541 suites 1 1 n/a 0 0 00:08:20.541 tests 2 2 2 0 0 00:08:20.541 asserts 16 16 16 0 n/a 00:08:20.541 00:08:20.541 Elapsed time = 0.000 seconds 00:08:20.541 00:34:42 unittest.unittest_util -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:08:20.541 00:08:20.541 00:08:20.541 CUnit - A unit testing framework for C - Version 2.1-3 00:08:20.541 http://cunit.sourceforge.net/ 00:08:20.541 00:08:20.541 00:08:20.541 Suite: crc64 00:08:20.541 Test: test_crc64_nvme ...passed 00:08:20.541 00:08:20.541 Run Summary: Type Total Ran Passed Failed Inactive 00:08:20.541 suites 1 1 n/a 0 0 00:08:20.541 tests 1 1 1 0 0 00:08:20.541 asserts 4 4 4 0 n/a 00:08:20.541 00:08:20.541 Elapsed time = 0.000 seconds 00:08:20.541 00:34:43 unittest.unittest_util -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:08:20.541 00:08:20.541 00:08:20.541 CUnit - A unit testing framework for C - Version 2.1-3 00:08:20.541 http://cunit.sourceforge.net/ 00:08:20.541 00:08:20.541 00:08:20.541 Suite: string 00:08:20.541 Test: test_parse_ip_addr ...passed 00:08:20.541 Test: test_str_chomp ...passed 00:08:20.541 Test: test_parse_capacity ...passed 00:08:20.541 Test: test_sprintf_append_realloc ...passed 00:08:20.541 Test: test_strtol ...passed 00:08:20.541 Test: test_strtoll ...passed 00:08:20.541 Test: test_strarray ...passed 00:08:20.541 Test: test_strcpy_replace ...passed 00:08:20.541 00:08:20.541 Run Summary: Type Total Ran Passed Failed Inactive 00:08:20.541 suites 1 1 n/a 0 0 00:08:20.541 tests 8 8 8 0 0 00:08:20.541 asserts 161 161 161 0 n/a 00:08:20.541 00:08:20.541 Elapsed time = 0.001 seconds 00:08:20.541 00:34:43 unittest.unittest_util -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:08:20.541 00:08:20.541 00:08:20.541 CUnit - A unit testing framework for C - Version 2.1-3 00:08:20.541 http://cunit.sourceforge.net/ 00:08:20.541 00:08:20.541 00:08:20.541 Suite: dif 00:08:20.541 Test: dif_generate_and_verify_test ...[2024-07-25 00:34:43.064928] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:20.541 [2024-07-25 00:34:43.065475] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to 
compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:20.541 [2024-07-25 00:34:43.065769] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:20.541 [2024-07-25 00:34:43.066065] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:20.541 [2024-07-25 00:34:43.066433] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:20.541 [2024-07-25 00:34:43.066746] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:20.541 passed 00:08:20.541 Test: dif_disable_check_test ...[2024-07-25 00:34:43.067788] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:20.541 [2024-07-25 00:34:43.068108] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:20.541 [2024-07-25 00:34:43.068399] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:20.541 passed 00:08:20.541 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-07-25 00:34:43.069468] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:08:20.541 [2024-07-25 00:34:43.069779] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:08:20.541 [2024-07-25 00:34:43.070107] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:08:20.541 [2024-07-25 00:34:43.070493] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:08:20.541 [2024-07-25 00:34:43.070838] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:20.541 [2024-07-25 00:34:43.071163] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:20.541 [2024-07-25 00:34:43.071486] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:20.541 [2024-07-25 00:34:43.071802] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:20.541 [2024-07-25 00:34:43.072122] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:20.541 [2024-07-25 00:34:43.072463] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:20.541 [2024-07-25 00:34:43.072797] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:20.541 passed 00:08:20.541 Test: dif_apptag_mask_test ...[2024-07-25 00:34:43.073136] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, 
Actual=1234 00:08:20.541 [2024-07-25 00:34:43.073447] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:08:20.541 passed 00:08:20.541 Test: dif_sec_8_md_8_error_test ...[2024-07-25 00:34:43.073662] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 555:spdk_dif_ctx_init: *ERROR*: Zero data block size is not allowed 00:08:20.541 passed 00:08:20.541 Test: dif_sec_512_md_0_error_test ...[2024-07-25 00:34:43.073765] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:20.541 passed 00:08:20.541 Test: dif_sec_512_md_16_error_test ...[2024-07-25 00:34:43.073821] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:08:20.541 passed 00:08:20.541 Test: dif_sec_4096_md_0_8_error_test ...[2024-07-25 00:34:43.073878] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:08:20.541 [2024-07-25 00:34:43.073930] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:20.541 [2024-07-25 00:34:43.073965] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:20.541 passed 00:08:20.541 Test: dif_sec_4100_md_128_error_test ...passed 00:08:20.541 Test: dif_guard_seed_test ...[2024-07-25 00:34:43.074013] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:20.541 [2024-07-25 00:34:43.074052] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:08:20.541 [2024-07-25 00:34:43.074109] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:08:20.541 [2024-07-25 00:34:43.074178] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:08:20.541 passed 00:08:20.541 Test: dif_guard_value_test ...passed 00:08:20.541 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:08:20.541 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:08:20.541 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:20.541 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:20.541 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:20.541 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:08:20.541 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:08:20.541 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:08:20.541 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:08:20.541 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:20.541 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:08:20.541 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:08:20.541 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:08:20.541 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:08:20.541 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:08:20.541 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:08:20.541 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:20.541 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:20.542 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-25 00:34:43.118922] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:08:20.542 [2024-07-25 00:34:43.121386] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe61, Actual=fe21 00:08:20.542 [2024-07-25 00:34:43.123842] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.542 [2024-07-25 00:34:43.126308] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.542 [2024-07-25 00:34:43.128753] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:08:20.542 [2024-07-25 00:34:43.131235] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:08:20.542 [2024-07-25 00:34:43.133677] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=9d57 00:08:20.542 [2024-07-25 00:34:43.134914] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=487b 00:08:20.542 [2024-07-25 00:34:43.136137] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1af753ed, Actual=1ab753ed 00:08:20.542 [2024-07-25 00:34:43.138603] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38174660, Actual=38574660 00:08:20.542 [2024-07-25 00:34:43.141052] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.542 [2024-07-25 00:34:43.143520] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.542 [2024-07-25 00:34:43.145968] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000000058 00:08:20.542 [2024-07-25 00:34:43.148415] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000000058 00:08:20.542 [2024-07-25 00:34:43.150874] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=8e72c9af 00:08:20.542 [2024-07-25 00:34:43.152089] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=8385f0d2 00:08:20.542 [2024-07-25 00:34:43.153311] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a536a7728ecc20d3, Actual=a576a7728ecc20d3 00:08:20.542 [2024-07-25 00:34:43.155791] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88410a2d4837a266, Actual=88010a2d4837a266 00:08:20.542 [2024-07-25 00:34:43.158244] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.542 [2024-07-25 00:34:43.160693] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.542 [2024-07-25 00:34:43.163145] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:20.542 [2024-07-25 00:34:43.165590] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:20.542 [2024-07-25 00:34:43.168058] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=2c540f6780e86168 00:08:20.542 [2024-07-25 00:34:43.169303] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=49180e780d3d476 00:08:20.542 passed 00:08:20.542 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-07-25 00:34:43.169634] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:08:20.542 [2024-07-25 00:34:43.169941] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe61, Actual=fe21 00:08:20.542 [2024-07-25 00:34:43.170249] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.542 [2024-07-25 00:34:43.170569] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.542 [2024-07-25 00:34:43.170867] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:08:20.542 [2024-07-25 00:34:43.171198] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:08:20.542 [2024-07-25 00:34:43.171500] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=9d57 00:08:20.542 [2024-07-25 00:34:43.171803] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=487b 00:08:20.542 [2024-07-25 00:34:43.172089] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1af753ed, Actual=1ab753ed 00:08:20.542 [2024-07-25 00:34:43.172397] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38174660, Actual=38574660 00:08:20.542 [2024-07-25 00:34:43.172699] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.542 [2024-07-25 00:34:43.173023] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.542 [2024-07-25 00:34:43.173337] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000000058 00:08:20.542 [2024-07-25 00:34:43.173657] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000000058 00:08:20.542 [2024-07-25 00:34:43.173950] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=8e72c9af 00:08:20.542 [2024-07-25 00:34:43.174263] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=8385f0d2 00:08:20.542 [2024-07-25 00:34:43.174557] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a536a7728ecc20d3, Actual=a576a7728ecc20d3 00:08:20.542 [2024-07-25 00:34:43.174885] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88410a2d4837a266, Actual=88010a2d4837a266 00:08:20.542 [2024-07-25 00:34:43.175185] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.542 [2024-07-25 00:34:43.175488] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.542 [2024-07-25 00:34:43.175805] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:20.542 [2024-07-25 00:34:43.176109] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:20.542 [2024-07-25 00:34:43.176424] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=2c540f6780e86168 00:08:20.542 [2024-07-25 00:34:43.176733] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, 
Expected=88010a2d4837a266, Actual=49180e780d3d476 00:08:20.542 passed 00:08:20.542 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-07-25 00:34:43.177074] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:08:20.542 [2024-07-25 00:34:43.177377] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe61, Actual=fe21 00:08:20.542 [2024-07-25 00:34:43.177688] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.542 [2024-07-25 00:34:43.177993] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.542 [2024-07-25 00:34:43.178326] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:08:20.542 [2024-07-25 00:34:43.178633] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:08:20.542 [2024-07-25 00:34:43.178941] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=9d57 00:08:20.542 [2024-07-25 00:34:43.179243] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=487b 00:08:20.542 [2024-07-25 00:34:43.179541] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1af753ed, Actual=1ab753ed 00:08:20.542 [2024-07-25 00:34:43.179844] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38174660, Actual=38574660 00:08:20.542 [2024-07-25 00:34:43.180151] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.542 [2024-07-25 00:34:43.180478] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.542 [2024-07-25 00:34:43.180796] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000000058 00:08:20.542 [2024-07-25 00:34:43.181104] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000000058 00:08:20.542 [2024-07-25 00:34:43.181417] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=8e72c9af 00:08:20.542 [2024-07-25 00:34:43.181724] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=8385f0d2 00:08:20.542 [2024-07-25 00:34:43.182020] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a536a7728ecc20d3, Actual=a576a7728ecc20d3 00:08:20.542 [2024-07-25 00:34:43.182365] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88410a2d4837a266, Actual=88010a2d4837a266 00:08:20.542 [2024-07-25 00:34:43.182673] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 
00:08:20.542 [2024-07-25 00:34:43.182975] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.542 [2024-07-25 00:34:43.183294] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:20.542 [2024-07-25 00:34:43.183590] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:20.542 [2024-07-25 00:34:43.183885] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=2c540f6780e86168 00:08:20.542 [2024-07-25 00:34:43.184215] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=49180e780d3d476 00:08:20.542 passed 00:08:20.542 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-07-25 00:34:43.184538] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:08:20.542 [2024-07-25 00:34:43.184858] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe61, Actual=fe21 00:08:20.543 [2024-07-25 00:34:43.185176] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.543 [2024-07-25 00:34:43.185485] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.543 [2024-07-25 00:34:43.185795] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:08:20.543 [2024-07-25 00:34:43.186119] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:08:20.543 [2024-07-25 00:34:43.186450] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=9d57 00:08:20.543 [2024-07-25 00:34:43.186748] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=487b 00:08:20.543 [2024-07-25 00:34:43.187054] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1af753ed, Actual=1ab753ed 00:08:20.543 [2024-07-25 00:34:43.187361] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38174660, Actual=38574660 00:08:20.543 [2024-07-25 00:34:43.187656] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.543 [2024-07-25 00:34:43.187974] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.543 [2024-07-25 00:34:43.188278] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000000058 00:08:20.543 [2024-07-25 00:34:43.188583] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000000058 00:08:20.543 [2024-07-25 00:34:43.188901] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=8e72c9af 00:08:20.543 [2024-07-25 00:34:43.189198] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=8385f0d2 00:08:20.543 [2024-07-25 00:34:43.189500] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a536a7728ecc20d3, Actual=a576a7728ecc20d3 00:08:20.543 [2024-07-25 00:34:43.189825] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88410a2d4837a266, Actual=88010a2d4837a266 00:08:20.543 [2024-07-25 00:34:43.190129] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.543 [2024-07-25 00:34:43.190449] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.543 [2024-07-25 00:34:43.190758] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:20.543 [2024-07-25 00:34:43.191068] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:20.543 [2024-07-25 00:34:43.191378] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=2c540f6780e86168 00:08:20.543 [2024-07-25 00:34:43.191704] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=49180e780d3d476 00:08:20.543 passed 00:08:20.543 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-07-25 00:34:43.192034] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:08:20.543 [2024-07-25 00:34:43.192347] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe61, Actual=fe21 00:08:20.543 [2024-07-25 00:34:43.192659] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.543 [2024-07-25 00:34:43.192968] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.803 [2024-07-25 00:34:43.193278] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:08:20.803 [2024-07-25 00:34:43.193596] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:08:20.803 [2024-07-25 00:34:43.193899] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=9d57 00:08:20.803 [2024-07-25 00:34:43.194192] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=487b 00:08:20.803 passed 00:08:20.803 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-07-25 00:34:43.194566] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, 
Expected=1af753ed, Actual=1ab753ed 00:08:20.803 [2024-07-25 00:34:43.194872] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38174660, Actual=38574660 00:08:20.803 [2024-07-25 00:34:43.195181] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.803 [2024-07-25 00:34:43.195506] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.803 [2024-07-25 00:34:43.195819] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000000058 00:08:20.803 [2024-07-25 00:34:43.196115] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000000058 00:08:20.803 [2024-07-25 00:34:43.196434] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=8e72c9af 00:08:20.803 [2024-07-25 00:34:43.196721] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=8385f0d2 00:08:20.803 [2024-07-25 00:34:43.197058] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a536a7728ecc20d3, Actual=a576a7728ecc20d3 00:08:20.803 [2024-07-25 00:34:43.197372] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88410a2d4837a266, Actual=88010a2d4837a266 00:08:20.803 [2024-07-25 00:34:43.197675] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.803 [2024-07-25 00:34:43.197979] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.803 [2024-07-25 00:34:43.198311] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:20.803 [2024-07-25 00:34:43.198616] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:20.803 [2024-07-25 00:34:43.198927] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=2c540f6780e86168 00:08:20.803 [2024-07-25 00:34:43.199247] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=49180e780d3d476 00:08:20.803 passed 00:08:20.803 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-07-25 00:34:43.199582] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:08:20.803 [2024-07-25 00:34:43.199896] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe61, Actual=fe21 00:08:20.803 [2024-07-25 00:34:43.200199] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.803 [2024-07-25 00:34:43.200501] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, 
Expected=88, Actual=c8 00:08:20.803 [2024-07-25 00:34:43.200806] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:08:20.803 [2024-07-25 00:34:43.201123] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:08:20.803 [2024-07-25 00:34:43.201437] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=9d57 00:08:20.803 [2024-07-25 00:34:43.201740] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=487b 00:08:20.803 passed 00:08:20.803 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-07-25 00:34:43.202083] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1af753ed, Actual=1ab753ed 00:08:20.803 [2024-07-25 00:34:43.202414] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38174660, Actual=38574660 00:08:20.803 [2024-07-25 00:34:43.202728] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.803 [2024-07-25 00:34:43.203047] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.803 [2024-07-25 00:34:43.203356] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000000058 00:08:20.803 [2024-07-25 00:34:43.203664] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000000058 00:08:20.803 [2024-07-25 00:34:43.203960] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=8e72c9af 00:08:20.803 [2024-07-25 00:34:43.204251] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=8385f0d2 00:08:20.803 [2024-07-25 00:34:43.204573] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a536a7728ecc20d3, Actual=a576a7728ecc20d3 00:08:20.803 [2024-07-25 00:34:43.204896] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88410a2d4837a266, Actual=88010a2d4837a266 00:08:20.803 [2024-07-25 00:34:43.205199] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.803 [2024-07-25 00:34:43.205499] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.803 [2024-07-25 00:34:43.205809] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:20.803 [2024-07-25 00:34:43.206112] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:20.803 [2024-07-25 00:34:43.206437] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=2c540f6780e86168 
00:08:20.803 [2024-07-25 00:34:43.206750] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=49180e780d3d476 00:08:20.803 passed 00:08:20.803 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:08:20.803 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:20.803 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:08:20.803 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:20.803 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:08:20.803 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:08:20.803 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:20.803 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:08:20.803 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:20.803 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-25 00:34:43.251062] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:08:20.803 [2024-07-25 00:34:43.252162] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=9c9a, Actual=9cda 00:08:20.803 [2024-07-25 00:34:43.253249] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.803 [2024-07-25 00:34:43.254356] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.803 [2024-07-25 00:34:43.255452] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:08:20.803 [2024-07-25 00:34:43.256547] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:08:20.803 [2024-07-25 00:34:43.257658] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=9d57 00:08:20.803 [2024-07-25 00:34:43.258762] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=9b4c 00:08:20.803 [2024-07-25 00:34:43.259846] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1af753ed, Actual=1ab753ed 00:08:20.803 [2024-07-25 00:34:43.260933] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be4c3d3c, Actual=be0c3d3c 00:08:20.803 [2024-07-25 00:34:43.262023] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.804 [2024-07-25 00:34:43.263126] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.804 [2024-07-25 00:34:43.264242] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000000058 00:08:20.804 [2024-07-25 00:34:43.265325] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000000058 00:08:20.804 [2024-07-25 00:34:43.266438] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=8e72c9af 00:08:20.804 [2024-07-25 00:34:43.267531] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=5174f61e 00:08:20.804 [2024-07-25 00:34:43.268620] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a536a7728ecc20d3, Actual=a576a7728ecc20d3 00:08:20.804 [2024-07-25 00:34:43.269703] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=d677919f250f928, Actual=d277919f250f928 00:08:20.804 [2024-07-25 00:34:43.270839] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.804 [2024-07-25 00:34:43.271933] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.804 [2024-07-25 00:34:43.273016] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:20.804 [2024-07-25 00:34:43.274109] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:20.804 [2024-07-25 00:34:43.275216] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=2c540f6780e86168 00:08:20.804 [2024-07-25 00:34:43.276307] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=b4f6a0d19e3e46b2 00:08:20.804 passed 00:08:20.804 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-25 00:34:43.276704] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:08:20.804 [2024-07-25 00:34:43.276984] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=9c9a, Actual=9cda 00:08:20.804 [2024-07-25 00:34:43.277263] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.804 [2024-07-25 00:34:43.277528] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.804 [2024-07-25 00:34:43.277801] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:08:20.804 [2024-07-25 00:34:43.278079] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:08:20.804 [2024-07-25 00:34:43.278362] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=9d57 00:08:20.804 [2024-07-25 00:34:43.278634] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=9b4c 00:08:20.804 [2024-07-25 00:34:43.278903] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1af753ed, Actual=1ab753ed 00:08:20.804 [2024-07-25 00:34:43.279170] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be4c3d3c, Actual=be0c3d3c 00:08:20.804 [2024-07-25 00:34:43.279437] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.804 [2024-07-25 00:34:43.279728] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.804 [2024-07-25 00:34:43.280005] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000000058 00:08:20.804 [2024-07-25 00:34:43.280271] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000000058 00:08:20.804 [2024-07-25 00:34:43.280535] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=8e72c9af 00:08:20.804 [2024-07-25 00:34:43.280821] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=5174f61e 00:08:20.804 [2024-07-25 00:34:43.281088] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a536a7728ecc20d3, Actual=a576a7728ecc20d3 00:08:20.804 [2024-07-25 00:34:43.281370] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=d677919f250f928, Actual=d277919f250f928 00:08:20.804 [2024-07-25 00:34:43.281638] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.804 [2024-07-25 00:34:43.281904] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.804 [2024-07-25 00:34:43.282174] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:20.804 [2024-07-25 00:34:43.282460] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:20.804 [2024-07-25 00:34:43.282721] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=2c540f6780e86168 00:08:20.804 [2024-07-25 00:34:43.283017] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=b4f6a0d19e3e46b2 00:08:20.804 passed 00:08:20.804 Test: dix_sec_0_md_8_error ...passed 00:08:20.804 Test: dix_sec_512_md_0_error ...[2024-07-25 00:34:43.283089] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 555:spdk_dif_ctx_init: *ERROR*: Zero data block size is not allowed 00:08:20.804 passed 00:08:20.804 Test: dix_sec_512_md_16_error ...[2024-07-25 00:34:43.283150] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:08:20.804 [2024-07-25 00:34:43.283199] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:08:20.804 [2024-07-25 00:34:43.283245] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:08:20.804 passed 00:08:20.804 Test: dix_sec_4096_md_0_8_error ...[2024-07-25 00:34:43.283308] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:20.804 [2024-07-25 00:34:43.283368] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:20.804 [2024-07-25 00:34:43.283407] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:20.804 passed 00:08:20.804 Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-07-25 00:34:43.283457] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:20.804 passed 00:08:20.804 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:20.804 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:08:20.804 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:20.804 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:08:20.804 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:08:20.804 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:20.804 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:08:20.804 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:20.804 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-25 00:34:43.327100] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:08:20.804 [2024-07-25 00:34:43.328211] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=9c9a, Actual=9cda 00:08:20.804 [2024-07-25 00:34:43.329301] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.804 [2024-07-25 00:34:43.330410] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.804 [2024-07-25 00:34:43.331530] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:08:20.804 [2024-07-25 00:34:43.332626] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:08:20.804 [2024-07-25 00:34:43.333719] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=9d57 00:08:20.804 [2024-07-25 00:34:43.334838] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=9b4c 00:08:20.804 [2024-07-25 00:34:43.335952] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1af753ed, Actual=1ab753ed 00:08:20.804 [2024-07-25 00:34:43.337039] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be4c3d3c, Actual=be0c3d3c 00:08:20.804 
[2024-07-25 00:34:43.338131] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.804 [2024-07-25 00:34:43.339228] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.804 [2024-07-25 00:34:43.340317] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000000058 00:08:20.804 [2024-07-25 00:34:43.341398] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000000058 00:08:20.804 [2024-07-25 00:34:43.342508] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=8e72c9af 00:08:20.804 [2024-07-25 00:34:43.343601] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=5174f61e 00:08:20.804 [2024-07-25 00:34:43.344709] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a536a7728ecc20d3, Actual=a576a7728ecc20d3 00:08:20.804 [2024-07-25 00:34:43.345788] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=d677919f250f928, Actual=d277919f250f928 00:08:20.804 [2024-07-25 00:34:43.346891] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.804 [2024-07-25 00:34:43.347973] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.804 [2024-07-25 00:34:43.349055] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:20.804 [2024-07-25 00:34:43.350148] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:20.804 [2024-07-25 00:34:43.351278] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=2c540f6780e86168 00:08:20.804 [2024-07-25 00:34:43.352372] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=b4f6a0d19e3e46b2 00:08:20.804 passed 00:08:20.804 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-25 00:34:43.352712] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:08:20.805 [2024-07-25 00:34:43.352993] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=9c9a, Actual=9cda 00:08:20.805 [2024-07-25 00:34:43.353256] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.805 [2024-07-25 00:34:43.353540] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.805 [2024-07-25 00:34:43.353817] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:08:20.805 [2024-07-25 00:34:43.354082] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:08:20.805 [2024-07-25 00:34:43.354364] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=9d57 00:08:20.805 [2024-07-25 00:34:43.354644] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=9b4c 00:08:20.805 [2024-07-25 00:34:43.354902] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1af753ed, Actual=1ab753ed 00:08:20.805 [2024-07-25 00:34:43.355167] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be4c3d3c, Actual=be0c3d3c 00:08:20.805 [2024-07-25 00:34:43.355436] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.805 [2024-07-25 00:34:43.355707] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.805 [2024-07-25 00:34:43.355980] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000000058 00:08:20.805 [2024-07-25 00:34:43.356254] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000000058 00:08:20.805 [2024-07-25 00:34:43.356513] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=8e72c9af 00:08:20.805 [2024-07-25 00:34:43.356777] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=5174f61e 00:08:20.805 [2024-07-25 00:34:43.357068] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a536a7728ecc20d3, Actual=a576a7728ecc20d3 00:08:20.805 [2024-07-25 00:34:43.357327] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=d677919f250f928, Actual=d277919f250f928 00:08:20.805 [2024-07-25 00:34:43.357593] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.805 [2024-07-25 00:34:43.357854] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:08:20.805 [2024-07-25 00:34:43.358120] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:20.805 [2024-07-25 00:34:43.358403] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:08:20.805 [2024-07-25 00:34:43.358685] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=2c540f6780e86168 00:08:20.805 [2024-07-25 00:34:43.358951] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=b4f6a0d19e3e46b2 00:08:20.805 passed 00:08:20.805 Test: set_md_interleave_iovs_test ...passed 00:08:20.805 Test: set_md_interleave_iovs_split_test ...passed 
00:08:20.805 Test: dif_generate_stream_pi_16_test ...passed 00:08:20.805 Test: dif_generate_stream_test ...passed 00:08:20.805 Test: set_md_interleave_iovs_alignment_test ...[2024-07-25 00:34:43.366668] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1857:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 00:08:20.805 passed 00:08:20.805 Test: dif_generate_split_test ...passed 00:08:20.805 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:08:20.805 Test: dif_verify_split_test ...passed 00:08:20.805 Test: dif_verify_stream_multi_segments_test ...passed 00:08:20.805 Test: update_crc32c_pi_16_test ...passed 00:08:20.805 Test: update_crc32c_test ...passed 00:08:20.805 Test: dif_update_crc32c_split_test ...passed 00:08:20.805 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:08:20.805 Test: get_range_with_md_test ...passed 00:08:20.805 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:08:20.805 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:08:20.805 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:08:20.805 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:08:20.805 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:08:20.805 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:08:20.805 Test: dif_generate_and_verify_unmap_test ...passed 00:08:20.805 Test: dif_pi_format_check_test ...passed 00:08:20.805 Test: dif_type_check_test ...passed 00:08:20.805 00:08:20.805 Run Summary: Type Total Ran Passed Failed Inactive 00:08:20.805 suites 1 1 n/a 0 0 00:08:20.805 tests 86 86 86 0 0 00:08:20.805 asserts 3605 3605 3605 0 n/a 00:08:20.805 00:08:20.805 Elapsed time = 0.348 seconds 00:08:20.805 00:34:43 unittest.unittest_util -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:08:20.805 00:08:20.805 00:08:20.805 CUnit - A unit testing framework for C - Version 2.1-3 00:08:20.805 http://cunit.sourceforge.net/ 00:08:20.805 00:08:20.805 00:08:20.805 Suite: iov 00:08:20.805 Test: test_single_iov ...passed 00:08:20.805 Test: test_simple_iov ...passed 00:08:20.805 Test: test_complex_iov ...passed 00:08:20.805 Test: test_iovs_to_buf ...passed 00:08:20.805 Test: test_buf_to_iovs ...passed 00:08:20.805 Test: test_memset ...passed 00:08:20.805 Test: test_iov_one ...passed 00:08:20.805 Test: test_iov_xfer ...passed 00:08:20.805 00:08:20.805 Run Summary: Type Total Ran Passed Failed Inactive 00:08:20.805 suites 1 1 n/a 0 0 00:08:20.805 tests 8 8 8 0 0 00:08:20.805 asserts 156 156 156 0 n/a 00:08:20.805 00:08:20.805 Elapsed time = 0.000 seconds 00:08:21.062 00:34:43 unittest.unittest_util -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:08:21.062 00:08:21.062 00:08:21.062 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.062 http://cunit.sourceforge.net/ 00:08:21.062 00:08:21.062 00:08:21.062 Suite: math 00:08:21.062 Test: test_serial_number_arithmetic ...passed 00:08:21.062 Suite: erase 00:08:21.062 Test: test_memset_s ...passed 00:08:21.062 00:08:21.062 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.062 suites 2 2 n/a 0 0 00:08:21.062 tests 2 2 2 0 0 00:08:21.062 asserts 18 18 18 0 n/a 00:08:21.062 00:08:21.063 Elapsed time = 0.000 seconds 00:08:21.063 00:34:43 unittest.unittest_util -- unit/unittest.sh@145 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:08:21.063 00:08:21.063 
00:08:21.063 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.063 http://cunit.sourceforge.net/ 00:08:21.063 00:08:21.063 00:08:21.063 Suite: pipe 00:08:21.063 Test: test_create_destroy ...passed 00:08:21.063 Test: test_write_get_buffer ...passed 00:08:21.063 Test: test_write_advance ...passed 00:08:21.063 Test: test_read_get_buffer ...passed 00:08:21.063 Test: test_read_advance ...passed 00:08:21.063 Test: test_data ...passed 00:08:21.063 00:08:21.063 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.063 suites 1 1 n/a 0 0 00:08:21.063 tests 6 6 6 0 0 00:08:21.063 asserts 251 251 251 0 n/a 00:08:21.063 00:08:21.063 Elapsed time = 0.000 seconds 00:08:21.063 00:34:43 unittest.unittest_util -- unit/unittest.sh@146 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:08:21.063 00:08:21.063 00:08:21.063 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.063 http://cunit.sourceforge.net/ 00:08:21.063 00:08:21.063 00:08:21.063 Suite: xor 00:08:21.063 Test: test_xor_gen ...passed 00:08:21.063 00:08:21.063 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.063 suites 1 1 n/a 0 0 00:08:21.063 tests 1 1 1 0 0 00:08:21.063 asserts 17 17 17 0 n/a 00:08:21.063 00:08:21.063 Elapsed time = 0.007 seconds 00:08:21.063 00:08:21.063 real 0m0.866s 00:08:21.063 user 0m0.601s 00:08:21.063 sys 0m0.268s 00:08:21.063 00:34:43 unittest.unittest_util -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.063 00:34:43 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x 00:08:21.063 ************************************ 00:08:21.063 END TEST unittest_util 00:08:21.063 ************************************ 00:08:21.063 00:34:43 unittest -- unit/unittest.sh@284 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:21.063 00:34:43 unittest -- unit/unittest.sh@285 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:08:21.063 00:34:43 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:21.063 00:34:43 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.063 00:34:43 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:21.063 ************************************ 00:08:21.063 START TEST unittest_vhost 00:08:21.063 ************************************ 00:08:21.063 00:34:43 unittest.unittest_vhost -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:08:21.063 00:08:21.063 00:08:21.063 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.063 http://cunit.sourceforge.net/ 00:08:21.063 00:08:21.063 00:08:21.063 Suite: vhost_suite 00:08:21.063 Test: desc_to_iov_test ...[2024-07-25 00:34:43.702485] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 620:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:08:21.063 passed 00:08:21.063 Test: create_controller_test ...[2024-07-25 00:34:43.709598] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:08:21.063 [2024-07-25 00:34:43.709969] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:08:21.063 [2024-07-25 00:34:43.710451] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:08:21.063 [2024-07-25 00:34:43.710758] 
/home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:08:21.063 [2024-07-25 00:34:43.711037] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:08:21.063 [2024-07-25 00:34:43.711770] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1781:vhost_user_dev_init: *ERROR*: Resulting socket path for controller is too long: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 00:08:21.063 [2024-07-25 00:34:43.713791] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 137:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:08:21.063 passed 00:08:21.321 Test: session_find_by_vid_test ...passed 00:08:21.321 Test: remove_controller_test ...[2024-07-25 00:34:43.717592] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1866:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:08:21.321 passed 00:08:21.321 Test: vq_avail_ring_get_test ...passed 00:08:21.321 Test: vq_packed_ring_test ...passed 00:08:21.321 Test: vhost_blk_construct_test ...passed 00:08:21.321 00:08:21.321 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.321 suites 1 1 n/a 0 0 00:08:21.321 tests 7 7 7 0 0 00:08:21.321 asserts 147 147 147 0 n/a 00:08:21.321 00:08:21.321 Elapsed time = 0.019 seconds 00:08:21.321 00:08:21.321 real 0m0.080s 00:08:21.321 user 0m0.035s 00:08:21.321 sys 0m0.043s 00:08:21.321 00:34:43 unittest.unittest_vhost -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.321 00:34:43 unittest.unittest_vhost -- common/autotest_common.sh@10 -- # set +x 00:08:21.321 ************************************ 00:08:21.321 END TEST unittest_vhost 00:08:21.321 ************************************ 00:08:21.321 00:34:43 unittest -- unit/unittest.sh@287 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:08:21.321 00:34:43 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:21.321 00:34:43 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.321 00:34:43 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:21.321 ************************************ 00:08:21.321 START TEST unittest_dma 00:08:21.321 ************************************ 00:08:21.321 00:34:43 unittest.unittest_dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:08:21.321 00:08:21.321 00:08:21.321 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.321 http://cunit.sourceforge.net/ 00:08:21.321 00:08:21.321 00:08:21.321 Suite: dma_suite 00:08:21.321 Test: test_dma ...[2024-07-25 00:34:43.831058] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:08:21.321 passed 00:08:21.321 00:08:21.321 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.321 suites 1 1 n/a 0 0 00:08:21.321 tests 1 1 1 0 0 00:08:21.321 asserts 54 54 54 0 n/a 00:08:21.321 00:08:21.321 
Elapsed time = 0.001 seconds 00:08:21.321 00:08:21.321 real 0m0.041s 00:08:21.321 user 0m0.024s 00:08:21.321 sys 0m0.017s 00:08:21.321 00:34:43 unittest.unittest_dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.321 00:34:43 unittest.unittest_dma -- common/autotest_common.sh@10 -- # set +x 00:08:21.321 ************************************ 00:08:21.321 END TEST unittest_dma 00:08:21.321 ************************************ 00:08:21.321 00:34:43 unittest -- unit/unittest.sh@289 -- # run_test unittest_init unittest_init 00:08:21.321 00:34:43 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:21.321 00:34:43 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.321 00:34:43 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:21.321 ************************************ 00:08:21.321 START TEST unittest_init 00:08:21.321 ************************************ 00:08:21.321 00:34:43 unittest.unittest_init -- common/autotest_common.sh@1123 -- # unittest_init 00:08:21.321 00:34:43 unittest.unittest_init -- unit/unittest.sh@150 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:08:21.321 00:08:21.321 00:08:21.321 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.321 http://cunit.sourceforge.net/ 00:08:21.321 00:08:21.321 00:08:21.321 Suite: subsystem_suite 00:08:21.321 Test: subsystem_sort_test_depends_on_single ...passed 00:08:21.321 Test: subsystem_sort_test_depends_on_multiple ...passed 00:08:21.321 Test: subsystem_sort_test_missing_dependency ...[2024-07-25 00:34:43.941345] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 196:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:08:21.321 passed 00:08:21.321 00:08:21.321 [2024-07-25 00:34:43.941699] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:08:21.321 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.321 suites 1 1 n/a 0 0 00:08:21.321 tests 3 3 3 0 0 00:08:21.321 asserts 20 20 20 0 n/a 00:08:21.321 00:08:21.321 Elapsed time = 0.001 seconds 00:08:21.321 00:08:21.321 real 0m0.043s 00:08:21.321 user 0m0.014s 00:08:21.321 sys 0m0.029s 00:08:21.321 00:34:43 unittest.unittest_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.321 00:34:43 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x 00:08:21.321 ************************************ 00:08:21.321 END TEST unittest_init 00:08:21.321 ************************************ 00:08:21.579 00:34:44 unittest -- unit/unittest.sh@290 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:08:21.579 00:34:44 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:21.579 00:34:44 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.579 00:34:44 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:21.579 ************************************ 00:08:21.579 START TEST unittest_keyring 00:08:21.579 ************************************ 00:08:21.579 00:34:44 unittest.unittest_keyring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:08:21.579 00:08:21.579 00:08:21.579 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.579 http://cunit.sourceforge.net/ 00:08:21.579 00:08:21.579 00:08:21.579 Suite: keyring 00:08:21.579 Test: test_keyring_add_remove ...[2024-07-25 00:34:44.053426] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 
107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:08:21.579 [2024-07-25 00:34:44.053771] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:08:21.579 passed 00:08:21.579 Test: test_keyring_get_put ...[2024-07-25 00:34:44.053866] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:08:21.579 passed 00:08:21.579 00:08:21.579 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.579 suites 1 1 n/a 0 0 00:08:21.579 tests 2 2 2 0 0 00:08:21.579 asserts 44 44 44 0 n/a 00:08:21.579 00:08:21.579 Elapsed time = 0.001 seconds 00:08:21.579 00:08:21.579 real 0m0.042s 00:08:21.579 user 0m0.022s 00:08:21.579 sys 0m0.020s 00:08:21.579 00:34:44 unittest.unittest_keyring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.579 00:34:44 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x 00:08:21.579 ************************************ 00:08:21.579 END TEST unittest_keyring 00:08:21.579 ************************************ 00:08:21.579 00:34:44 unittest -- unit/unittest.sh@292 -- # '[' yes = yes ']' 00:08:21.579 00:34:44 unittest -- unit/unittest.sh@292 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:08:21.579 00:34:44 unittest -- unit/unittest.sh@293 -- # hostname 00:08:21.579 00:34:44 unittest -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:08:21.874 geninfo: WARNING: invalid characters removed from testname! 
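The capture above and the merge/filter/report steps recorded below follow the usual lcov workflow. A minimal sketch of that flow, using shortened placeholder tracefile names in place of the full workspace paths from this run (lcov and genhtml assumed on PATH):
  # merge the baseline capture with the counters gathered during the unit-test run
  lcov -q -a ut_cov_base.info -a ut_cov_test.info -o ut_cov_total.info
  # strip paths that should not count toward coverage, e.g. the test code itself
  lcov -q -r ut_cov_total.info '*/test/*' -o ut_cov_unit.info
  # render the HTML report from the filtered tracefile
  genhtml ut_cov_unit.info --output-directory ut_coverage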
00:08:53.934 00:35:11 unittest -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:08:53.934 00:35:16 unittest -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:57.213 00:35:19 unittest -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:59.738 00:35:21 unittest -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:02.264 00:35:24 unittest -- unit/unittest.sh@298 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:04.789 00:35:27 unittest -- unit/unittest.sh@299 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:07.318 00:35:29 unittest -- unit/unittest.sh@300 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:09.847 00:35:31 unittest -- unit/unittest.sh@301 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:09:09.847 00:35:31 unittest -- unit/unittest.sh@302 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:09:10.105 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 
00:09:10.105 Found 322 entries. 00:09:10.105 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:09:10.105 Writing .css and .png files. 00:09:10.105 Generating output. 00:09:10.105 Processing file include/linux/virtio_ring.h 00:09:10.671 Processing file include/spdk/bdev_module.h 00:09:10.671 Processing file include/spdk/histogram_data.h 00:09:10.671 Processing file include/spdk/thread.h 00:09:10.671 Processing file include/spdk/util.h 00:09:10.671 Processing file include/spdk/nvme_spec.h 00:09:10.671 Processing file include/spdk/mmio.h 00:09:10.671 Processing file include/spdk/endian.h 00:09:10.671 Processing file include/spdk/nvme.h 00:09:10.671 Processing file include/spdk/nvmf_transport.h 00:09:10.671 Processing file include/spdk/base64.h 00:09:10.671 Processing file include/spdk/trace.h 00:09:10.671 Processing file include/spdk_internal/utf.h 00:09:10.671 Processing file include/spdk_internal/virtio.h 00:09:10.671 Processing file include/spdk_internal/sgl.h 00:09:10.671 Processing file include/spdk_internal/nvme_tcp.h 00:09:10.671 Processing file include/spdk_internal/sock.h 00:09:10.671 Processing file include/spdk_internal/rdma_utils.h 00:09:10.671 Processing file lib/accel/accel.c 00:09:10.671 Processing file lib/accel/accel_sw.c 00:09:10.671 Processing file lib/accel/accel_rpc.c 00:09:10.931 Processing file lib/bdev/bdev.c 00:09:10.931 Processing file lib/bdev/bdev_zone.c 00:09:10.931 Processing file lib/bdev/scsi_nvme.c 00:09:10.931 Processing file lib/bdev/bdev_rpc.c 00:09:10.931 Processing file lib/bdev/part.c 00:09:11.188 Processing file lib/blob/blob_bs_dev.c 00:09:11.188 Processing file lib/blob/request.c 00:09:11.188 Processing file lib/blob/blobstore.c 00:09:11.188 Processing file lib/blob/blobstore.h 00:09:11.188 Processing file lib/blob/zeroes.c 00:09:11.445 Processing file lib/blobfs/tree.c 00:09:11.445 Processing file lib/blobfs/blobfs.c 00:09:11.445 Processing file lib/conf/conf.c 00:09:11.445 Processing file lib/dma/dma.c 00:09:12.009 Processing file lib/env_dpdk/pci_virtio.c 00:09:12.009 Processing file lib/env_dpdk/memory.c 00:09:12.009 Processing file lib/env_dpdk/sigbus_handler.c 00:09:12.009 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:09:12.009 Processing file lib/env_dpdk/pci_ioat.c 00:09:12.009 Processing file lib/env_dpdk/pci.c 00:09:12.009 Processing file lib/env_dpdk/threads.c 00:09:12.009 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:09:12.009 Processing file lib/env_dpdk/init.c 00:09:12.009 Processing file lib/env_dpdk/pci_vmd.c 00:09:12.009 Processing file lib/env_dpdk/pci_idxd.c 00:09:12.009 Processing file lib/env_dpdk/pci_event.c 00:09:12.009 Processing file lib/env_dpdk/pci_dpdk.c 00:09:12.009 Processing file lib/env_dpdk/env.c 00:09:12.009 Processing file lib/event/log_rpc.c 00:09:12.009 Processing file lib/event/app_rpc.c 00:09:12.009 Processing file lib/event/app.c 00:09:12.009 Processing file lib/event/scheduler_static.c 00:09:12.009 Processing file lib/event/reactor.c 00:09:12.573 Processing file lib/ftl/ftl_layout.c 00:09:12.573 Processing file lib/ftl/ftl_reloc.c 00:09:12.573 Processing file lib/ftl/ftl_core.h 00:09:12.573 Processing file lib/ftl/ftl_band_ops.c 00:09:12.573 Processing file lib/ftl/ftl_trace.c 00:09:12.573 Processing file lib/ftl/ftl_io.c 00:09:12.573 Processing file lib/ftl/ftl_core.c 00:09:12.573 Processing file lib/ftl/ftl_rq.c 00:09:12.573 Processing file lib/ftl/ftl_nv_cache_io.h 00:09:12.573 Processing file lib/ftl/ftl_band.c 00:09:12.573 Processing file lib/ftl/ftl_l2p.c 00:09:12.573 Processing file 
lib/ftl/ftl_sb.c 00:09:12.573 Processing file lib/ftl/ftl_l2p_flat.c 00:09:12.573 Processing file lib/ftl/ftl_nv_cache.h 00:09:12.573 Processing file lib/ftl/ftl_debug.c 00:09:12.573 Processing file lib/ftl/ftl_writer.c 00:09:12.573 Processing file lib/ftl/ftl_io.h 00:09:12.573 Processing file lib/ftl/ftl_l2p_cache.c 00:09:12.573 Processing file lib/ftl/ftl_init.c 00:09:12.573 Processing file lib/ftl/ftl_debug.h 00:09:12.573 Processing file lib/ftl/ftl_writer.h 00:09:12.573 Processing file lib/ftl/ftl_nv_cache.c 00:09:12.573 Processing file lib/ftl/ftl_band.h 00:09:12.573 Processing file lib/ftl/ftl_p2l.c 00:09:12.573 Processing file lib/ftl/base/ftl_base_dev.c 00:09:12.573 Processing file lib/ftl/base/ftl_base_bdev.c 00:09:12.831 Processing file lib/ftl/mngt/ftl_mngt.c 00:09:12.831 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:09:12.831 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:09:12.831 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:09:12.831 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:09:12.831 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:09:12.831 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:09:12.831 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:09:12.831 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:09:12.831 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:09:12.831 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:09:12.831 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:09:12.831 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:09:12.831 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:09:12.831 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:09:12.831 Processing file lib/ftl/upgrade/ftl_chunk_upgrade.c 00:09:12.831 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:09:12.831 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:09:12.831 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:09:12.831 Processing file lib/ftl/upgrade/ftl_band_upgrade.c 00:09:12.831 Processing file lib/ftl/upgrade/ftl_p2l_upgrade.c 00:09:12.831 Processing file lib/ftl/upgrade/ftl_trim_upgrade.c 00:09:12.831 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:09:13.088 Processing file lib/ftl/utils/ftl_md.c 00:09:13.088 Processing file lib/ftl/utils/ftl_property.h 00:09:13.088 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:09:13.088 Processing file lib/ftl/utils/ftl_mempool.c 00:09:13.088 Processing file lib/ftl/utils/ftl_property.c 00:09:13.088 Processing file lib/ftl/utils/ftl_bitmap.c 00:09:13.088 Processing file lib/ftl/utils/ftl_addr_utils.h 00:09:13.088 Processing file lib/ftl/utils/ftl_df.h 00:09:13.088 Processing file lib/ftl/utils/ftl_conf.c 00:09:13.347 Processing file lib/idxd/idxd_internal.h 00:09:13.347 Processing file lib/idxd/idxd_user.c 00:09:13.347 Processing file lib/idxd/idxd.c 00:09:13.347 Processing file lib/init/subsystem_rpc.c 00:09:13.347 Processing file lib/init/json_config.c 00:09:13.347 Processing file lib/init/rpc.c 00:09:13.347 Processing file lib/init/subsystem.c 00:09:13.347 Processing file lib/ioat/ioat.c 00:09:13.347 Processing file lib/ioat/ioat_internal.h 00:09:13.915 Processing file lib/iscsi/iscsi_rpc.c 00:09:13.915 Processing file lib/iscsi/tgt_node.c 00:09:13.915 Processing file lib/iscsi/md5.c 00:09:13.915 Processing file lib/iscsi/iscsi.c 00:09:13.915 Processing file lib/iscsi/iscsi_subsystem.c 00:09:13.915 Processing file lib/iscsi/init_grp.c 00:09:13.915 Processing file lib/iscsi/iscsi.h 00:09:13.915 Processing file lib/iscsi/portal_grp.c 00:09:13.915 Processing file lib/iscsi/task.c 00:09:13.915 Processing file 
lib/iscsi/conn.c 00:09:13.915 Processing file lib/iscsi/task.h 00:09:13.915 Processing file lib/iscsi/param.c 00:09:13.915 Processing file lib/json/json_util.c 00:09:13.915 Processing file lib/json/json_write.c 00:09:13.915 Processing file lib/json/json_parse.c 00:09:13.915 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:09:13.915 Processing file lib/jsonrpc/jsonrpc_server.c 00:09:13.915 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:09:13.915 Processing file lib/jsonrpc/jsonrpc_client.c 00:09:14.197 Processing file lib/keyring/keyring.c 00:09:14.197 Processing file lib/keyring/keyring_rpc.c 00:09:14.197 Processing file lib/log/log_deprecated.c 00:09:14.197 Processing file lib/log/log_flags.c 00:09:14.197 Processing file lib/log/log.c 00:09:14.197 Processing file lib/lvol/lvol.c 00:09:14.470 Processing file lib/nbd/nbd.c 00:09:14.470 Processing file lib/nbd/nbd_rpc.c 00:09:14.470 Processing file lib/notify/notify_rpc.c 00:09:14.470 Processing file lib/notify/notify.c 00:09:15.038 Processing file lib/nvme/nvme_fabric.c 00:09:15.038 Processing file lib/nvme/nvme.c 00:09:15.038 Processing file lib/nvme/nvme_pcie_internal.h 00:09:15.038 Processing file lib/nvme/nvme_ns_cmd.c 00:09:15.038 Processing file lib/nvme/nvme_cuse.c 00:09:15.038 Processing file lib/nvme/nvme_discovery.c 00:09:15.038 Processing file lib/nvme/nvme_poll_group.c 00:09:15.038 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:09:15.038 Processing file lib/nvme/nvme_internal.h 00:09:15.038 Processing file lib/nvme/nvme_quirks.c 00:09:15.038 Processing file lib/nvme/nvme_opal.c 00:09:15.038 Processing file lib/nvme/nvme_io_msg.c 00:09:15.038 Processing file lib/nvme/nvme_ctrlr.c 00:09:15.038 Processing file lib/nvme/nvme_tcp.c 00:09:15.038 Processing file lib/nvme/nvme_auth.c 00:09:15.038 Processing file lib/nvme/nvme_pcie_common.c 00:09:15.038 Processing file lib/nvme/nvme_zns.c 00:09:15.038 Processing file lib/nvme/nvme_pcie.c 00:09:15.038 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:09:15.038 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:09:15.038 Processing file lib/nvme/nvme_transport.c 00:09:15.038 Processing file lib/nvme/nvme_ns.c 00:09:15.038 Processing file lib/nvme/nvme_rdma.c 00:09:15.038 Processing file lib/nvme/nvme_qpair.c 00:09:15.607 Processing file lib/nvmf/nvmf.c 00:09:15.607 Processing file lib/nvmf/ctrlr.c 00:09:15.607 Processing file lib/nvmf/transport.c 00:09:15.607 Processing file lib/nvmf/nvmf_rpc.c 00:09:15.607 Processing file lib/nvmf/ctrlr_discovery.c 00:09:15.607 Processing file lib/nvmf/rdma.c 00:09:15.607 Processing file lib/nvmf/auth.c 00:09:15.607 Processing file lib/nvmf/tcp.c 00:09:15.607 Processing file lib/nvmf/ctrlr_bdev.c 00:09:15.607 Processing file lib/nvmf/nvmf_internal.h 00:09:15.607 Processing file lib/nvmf/subsystem.c 00:09:15.607 Processing file lib/rdma_provider/rdma_provider_verbs.c 00:09:15.607 Processing file lib/rdma_provider/common.c 00:09:15.607 Processing file lib/rdma_utils/rdma_utils.c 00:09:15.866 Processing file lib/rpc/rpc.c 00:09:15.866 Processing file lib/scsi/scsi_pr.c 00:09:15.866 Processing file lib/scsi/scsi_bdev.c 00:09:15.866 Processing file lib/scsi/lun.c 00:09:15.866 Processing file lib/scsi/task.c 00:09:15.866 Processing file lib/scsi/dev.c 00:09:15.866 Processing file lib/scsi/port.c 00:09:15.866 Processing file lib/scsi/scsi_rpc.c 00:09:15.866 Processing file lib/scsi/scsi.c 00:09:16.124 Processing file lib/sock/sock.c 00:09:16.124 Processing file lib/sock/sock_rpc.c 00:09:16.124 Processing file lib/thread/thread.c 00:09:16.124 Processing file 
lib/thread/iobuf.c 00:09:16.383 Processing file lib/trace/trace.c 00:09:16.383 Processing file lib/trace/trace_flags.c 00:09:16.383 Processing file lib/trace/trace_rpc.c 00:09:16.383 Processing file lib/trace_parser/trace.cpp 00:09:16.383 Processing file lib/ut/ut.c 00:09:16.383 Processing file lib/ut_mock/mock.c 00:09:16.951 Processing file lib/util/bit_array.c 00:09:16.951 Processing file lib/util/fd_group.c 00:09:16.951 Processing file lib/util/fd.c 00:09:16.951 Processing file lib/util/iov.c 00:09:16.951 Processing file lib/util/string.c 00:09:16.951 Processing file lib/util/base64.c 00:09:16.951 Processing file lib/util/strerror_tls.c 00:09:16.951 Processing file lib/util/crc64.c 00:09:16.951 Processing file lib/util/zipf.c 00:09:16.951 Processing file lib/util/net.c 00:09:16.951 Processing file lib/util/crc32_ieee.c 00:09:16.951 Processing file lib/util/crc32.c 00:09:16.951 Processing file lib/util/pipe.c 00:09:16.951 Processing file lib/util/math.c 00:09:16.951 Processing file lib/util/dif.c 00:09:16.951 Processing file lib/util/crc32c.c 00:09:16.951 Processing file lib/util/hexlify.c 00:09:16.951 Processing file lib/util/uuid.c 00:09:16.951 Processing file lib/util/xor.c 00:09:16.951 Processing file lib/util/cpuset.c 00:09:16.951 Processing file lib/util/file.c 00:09:16.951 Processing file lib/util/crc16.c 00:09:16.951 Processing file lib/vfio_user/host/vfio_user_pci.c 00:09:16.951 Processing file lib/vfio_user/host/vfio_user.c 00:09:16.951 Processing file lib/vhost/vhost_scsi.c 00:09:16.951 Processing file lib/vhost/rte_vhost_user.c 00:09:16.951 Processing file lib/vhost/vhost_internal.h 00:09:16.951 Processing file lib/vhost/vhost.c 00:09:16.951 Processing file lib/vhost/vhost_blk.c 00:09:16.951 Processing file lib/vhost/vhost_rpc.c 00:09:17.209 Processing file lib/virtio/virtio_pci.c 00:09:17.209 Processing file lib/virtio/virtio.c 00:09:17.209 Processing file lib/virtio/virtio_vhost_user.c 00:09:17.209 Processing file lib/virtio/virtio_vfio_user.c 00:09:17.209 Processing file lib/vmd/led.c 00:09:17.209 Processing file lib/vmd/vmd.c 00:09:17.467 Processing file module/accel/dsa/accel_dsa_rpc.c 00:09:17.467 Processing file module/accel/dsa/accel_dsa.c 00:09:17.467 Processing file module/accel/error/accel_error.c 00:09:17.467 Processing file module/accel/error/accel_error_rpc.c 00:09:17.467 Processing file module/accel/iaa/accel_iaa.c 00:09:17.467 Processing file module/accel/iaa/accel_iaa_rpc.c 00:09:17.467 Processing file module/accel/ioat/accel_ioat.c 00:09:17.467 Processing file module/accel/ioat/accel_ioat_rpc.c 00:09:17.725 Processing file module/bdev/aio/bdev_aio_rpc.c 00:09:17.725 Processing file module/bdev/aio/bdev_aio.c 00:09:17.725 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:09:17.725 Processing file module/bdev/delay/vbdev_delay.c 00:09:17.725 Processing file module/bdev/error/vbdev_error_rpc.c 00:09:17.725 Processing file module/bdev/error/vbdev_error.c 00:09:17.983 Processing file module/bdev/ftl/bdev_ftl.c 00:09:17.983 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:09:17.983 Processing file module/bdev/gpt/gpt.h 00:09:17.983 Processing file module/bdev/gpt/vbdev_gpt.c 00:09:17.983 Processing file module/bdev/gpt/gpt.c 00:09:17.983 Processing file module/bdev/iscsi/bdev_iscsi.c 00:09:17.983 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:09:18.241 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:09:18.241 Processing file module/bdev/lvol/vbdev_lvol.c 00:09:18.241 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:09:18.241 Processing 
file module/bdev/malloc/bdev_malloc.c 00:09:18.241 Processing file module/bdev/null/bdev_null.c 00:09:18.241 Processing file module/bdev/null/bdev_null_rpc.c 00:09:18.499 Processing file module/bdev/nvme/bdev_mdns_client.c 00:09:18.499 Processing file module/bdev/nvme/bdev_nvme.c 00:09:18.499 Processing file module/bdev/nvme/nvme_rpc.c 00:09:18.499 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:09:18.499 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:09:18.499 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:09:18.499 Processing file module/bdev/nvme/vbdev_opal.c 00:09:18.757 Processing file module/bdev/passthru/vbdev_passthru.c 00:09:18.757 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:09:18.757 Processing file module/bdev/raid/raid5f.c 00:09:18.757 Processing file module/bdev/raid/raid1.c 00:09:18.757 Processing file module/bdev/raid/concat.c 00:09:18.757 Processing file module/bdev/raid/bdev_raid_sb.c 00:09:18.757 Processing file module/bdev/raid/bdev_raid.h 00:09:18.757 Processing file module/bdev/raid/bdev_raid_rpc.c 00:09:18.757 Processing file module/bdev/raid/bdev_raid.c 00:09:18.757 Processing file module/bdev/raid/raid0.c 00:09:19.015 Processing file module/bdev/split/vbdev_split_rpc.c 00:09:19.015 Processing file module/bdev/split/vbdev_split.c 00:09:19.015 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:09:19.015 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:09:19.015 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:09:19.015 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:09:19.015 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:09:19.272 Processing file module/blob/bdev/blob_bdev.c 00:09:19.272 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:09:19.272 Processing file module/blobfs/bdev/blobfs_bdev.c 00:09:19.272 Processing file module/env_dpdk/env_dpdk_rpc.c 00:09:19.272 Processing file module/event/subsystems/accel/accel.c 00:09:19.530 Processing file module/event/subsystems/bdev/bdev.c 00:09:19.530 Processing file module/event/subsystems/iobuf/iobuf.c 00:09:19.530 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:09:19.530 Processing file module/event/subsystems/iscsi/iscsi.c 00:09:19.530 Processing file module/event/subsystems/keyring/keyring.c 00:09:19.788 Processing file module/event/subsystems/nbd/nbd.c 00:09:19.788 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:09:19.788 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:09:19.788 Processing file module/event/subsystems/scheduler/scheduler.c 00:09:19.788 Processing file module/event/subsystems/scsi/scsi.c 00:09:20.045 Processing file module/event/subsystems/sock/sock.c 00:09:20.045 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:09:20.045 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:09:20.045 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:09:20.045 Processing file module/event/subsystems/vmd/vmd.c 00:09:20.303 Processing file module/keyring/file/keyring_rpc.c 00:09:20.303 Processing file module/keyring/file/keyring.c 00:09:20.303 Processing file module/keyring/linux/keyring_rpc.c 00:09:20.303 Processing file module/keyring/linux/keyring.c 00:09:20.303 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:09:20.303 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:09:20.561 Processing file module/scheduler/gscheduler/gscheduler.c 00:09:20.561 Processing file module/sock/posix/posix.c 00:09:20.561 Writing directory view 
page. 00:09:20.561 Overall coverage rate: 00:09:20.561 lines......: 38.7% (41085 of 106132 lines) 00:09:20.561 functions..: 42.4% (3741 of 8831 functions) 00:09:20.561 00:09:20.561 00:09:20.561 ===================== 00:09:20.561 All unit tests passed 00:09:20.561 ===================== 00:09:20.561 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:09:20.561 00:35:43 unittest -- unit/unittest.sh@305 -- # set +x 00:09:20.561 00:09:20.561 00:09:20.561 00:09:20.561 real 3m52.161s 00:09:20.561 user 3m19.981s 00:09:20.561 sys 0m22.718s 00:09:20.561 00:35:43 unittest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:20.561 00:35:43 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:20.561 ************************************ 00:09:20.561 END TEST unittest 00:09:20.561 ************************************ 00:09:20.561 00:35:43 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:09:20.561 00:35:43 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:09:20.561 00:35:43 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:09:20.561 00:35:43 -- spdk/autotest.sh@162 -- # timing_enter lib 00:09:20.561 00:35:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:20.561 00:35:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.561 00:35:43 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:09:20.561 00:35:43 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:20.561 00:35:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:20.562 00:35:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.562 00:35:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.562 ************************************ 00:09:20.562 START TEST env 00:09:20.562 ************************************ 00:09:20.562 00:35:43 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:20.821 * Looking for test storage... 
00:09:20.821 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:09:20.821 00:35:43 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:20.821 00:35:43 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:20.821 00:35:43 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.821 00:35:43 env -- common/autotest_common.sh@10 -- # set +x 00:09:20.821 ************************************ 00:09:20.821 START TEST env_memory 00:09:20.821 ************************************ 00:09:20.821 00:35:43 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:20.821 00:09:20.821 00:09:20.821 CUnit - A unit testing framework for C - Version 2.1-3 00:09:20.821 http://cunit.sourceforge.net/ 00:09:20.821 00:09:20.821 00:09:20.821 Suite: memory 00:09:20.821 Test: alloc and free memory map ...[2024-07-25 00:35:43.387780] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:20.821 passed 00:09:20.821 Test: mem map translation ...[2024-07-25 00:35:43.443737] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:20.821 [2024-07-25 00:35:43.443880] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:20.821 [2024-07-25 00:35:43.444058] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:20.821 [2024-07-25 00:35:43.444197] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:21.080 passed 00:09:21.080 Test: mem map registration ...[2024-07-25 00:35:43.534981] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:09:21.080 [2024-07-25 00:35:43.535114] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:09:21.080 passed 00:09:21.080 Test: mem map adjacent registrations ...passed 00:09:21.080 00:09:21.080 Run Summary: Type Total Ran Passed Failed Inactive 00:09:21.080 suites 1 1 n/a 0 0 00:09:21.080 tests 4 4 4 0 0 00:09:21.080 asserts 152 152 152 0 n/a 00:09:21.080 00:09:21.080 Elapsed time = 0.317 seconds 00:09:21.080 00:09:21.080 real 0m0.359s 00:09:21.080 user 0m0.338s 00:09:21.080 sys 0m0.021s 00:09:21.080 00:35:43 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:21.080 00:35:43 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:09:21.080 ************************************ 00:09:21.080 END TEST env_memory 00:09:21.080 ************************************ 00:09:21.080 00:35:43 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:21.080 00:35:43 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:21.080 00:35:43 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.080 00:35:43 env -- common/autotest_common.sh@10 -- # set +x 00:09:21.339 ************************************ 00:09:21.339 START TEST env_vtophys 00:09:21.339 ************************************ 00:09:21.339 00:35:43 
env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:21.339 EAL: lib.eal log level changed from notice to debug 00:09:21.339 EAL: Detected lcore 0 as core 0 on socket 0 00:09:21.339 EAL: Detected lcore 1 as core 0 on socket 0 00:09:21.339 EAL: Detected lcore 2 as core 0 on socket 0 00:09:21.339 EAL: Detected lcore 3 as core 0 on socket 0 00:09:21.339 EAL: Detected lcore 4 as core 0 on socket 0 00:09:21.339 EAL: Detected lcore 5 as core 0 on socket 0 00:09:21.339 EAL: Detected lcore 6 as core 0 on socket 0 00:09:21.339 EAL: Detected lcore 7 as core 0 on socket 0 00:09:21.339 EAL: Detected lcore 8 as core 0 on socket 0 00:09:21.339 EAL: Detected lcore 9 as core 0 on socket 0 00:09:21.339 EAL: Maximum logical cores by configuration: 128 00:09:21.339 EAL: Detected CPU lcores: 10 00:09:21.339 EAL: Detected NUMA nodes: 1 00:09:21.339 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:09:21.339 EAL: Checking presence of .so 'librte_eal.so.24' 00:09:21.339 EAL: Checking presence of .so 'librte_eal.so' 00:09:21.339 EAL: Detected static linkage of DPDK 00:09:21.339 EAL: No shared files mode enabled, IPC will be disabled 00:09:21.339 EAL: Selected IOVA mode 'PA' 00:09:21.339 EAL: Probing VFIO support... 00:09:21.339 EAL: IOMMU type 1 (Type 1) is supported 00:09:21.339 EAL: IOMMU type 7 (sPAPR) is not supported 00:09:21.339 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:09:21.339 EAL: VFIO support initialized 00:09:21.339 EAL: Ask a virtual area of 0x2e000 bytes 00:09:21.339 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:21.339 EAL: Setting up physically contiguous memory... 00:09:21.339 EAL: Setting maximum number of open files to 1048576 00:09:21.339 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:21.339 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:21.339 EAL: Ask a virtual area of 0x61000 bytes 00:09:21.339 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:21.339 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:21.339 EAL: Ask a virtual area of 0x400000000 bytes 00:09:21.339 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:21.339 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:21.339 EAL: Ask a virtual area of 0x61000 bytes 00:09:21.339 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:21.339 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:21.339 EAL: Ask a virtual area of 0x400000000 bytes 00:09:21.339 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:21.339 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:21.339 EAL: Ask a virtual area of 0x61000 bytes 00:09:21.339 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:21.339 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:21.339 EAL: Ask a virtual area of 0x400000000 bytes 00:09:21.339 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:21.339 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:21.339 EAL: Ask a virtual area of 0x61000 bytes 00:09:21.339 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:21.339 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:21.339 EAL: Ask a virtual area of 0x400000000 bytes 00:09:21.339 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:21.339 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 
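The memseg reservations above assume a pool of 2 MB hugepages has already been set up on the host. As a hedged aside (standard Linux procfs/sysfs paths, not output from this run), that pool can be inspected on a similar machine with:
  grep -i huge /proc/meminfo
  cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages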
00:09:21.339 EAL: Hugepages will be freed exactly as allocated. 00:09:21.339 EAL: No shared files mode enabled, IPC is disabled 00:09:21.339 EAL: No shared files mode enabled, IPC is disabled 00:09:21.339 EAL: TSC frequency is ~2100000 KHz 00:09:21.339 EAL: Main lcore 0 is ready (tid=7f7c53cd6a80;cpuset=[0]) 00:09:21.339 EAL: Trying to obtain current memory policy. 00:09:21.339 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:21.339 EAL: Restoring previous memory policy: 0 00:09:21.339 EAL: request: mp_malloc_sync 00:09:21.339 EAL: No shared files mode enabled, IPC is disabled 00:09:21.339 EAL: Heap on socket 0 was expanded by 2MB 00:09:21.339 EAL: No shared files mode enabled, IPC is disabled 00:09:21.339 EAL: Mem event callback 'spdk:(nil)' registered 00:09:21.339 00:09:21.339 00:09:21.339 CUnit - A unit testing framework for C - Version 2.1-3 00:09:21.339 http://cunit.sourceforge.net/ 00:09:21.339 00:09:21.339 00:09:21.339 Suite: components_suite 00:09:21.907 Test: vtophys_malloc_test ...passed 00:09:21.907 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:09:21.907 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:21.907 EAL: Restoring previous memory policy: 0 00:09:21.907 EAL: Calling mem event callback 'spdk:(nil)' 00:09:21.907 EAL: request: mp_malloc_sync 00:09:21.907 EAL: No shared files mode enabled, IPC is disabled 00:09:21.907 EAL: Heap on socket 0 was expanded by 4MB 00:09:21.907 EAL: Calling mem event callback 'spdk:(nil)' 00:09:21.907 EAL: request: mp_malloc_sync 00:09:21.907 EAL: No shared files mode enabled, IPC is disabled 00:09:21.907 EAL: Heap on socket 0 was shrunk by 4MB 00:09:21.907 EAL: Trying to obtain current memory policy. 00:09:21.907 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:21.907 EAL: Restoring previous memory policy: 0 00:09:21.907 EAL: Calling mem event callback 'spdk:(nil)' 00:09:21.907 EAL: request: mp_malloc_sync 00:09:21.907 EAL: No shared files mode enabled, IPC is disabled 00:09:21.907 EAL: Heap on socket 0 was expanded by 6MB 00:09:21.907 EAL: Calling mem event callback 'spdk:(nil)' 00:09:21.907 EAL: request: mp_malloc_sync 00:09:21.907 EAL: No shared files mode enabled, IPC is disabled 00:09:21.907 EAL: Heap on socket 0 was shrunk by 6MB 00:09:21.907 EAL: Trying to obtain current memory policy. 00:09:21.907 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:21.907 EAL: Restoring previous memory policy: 0 00:09:21.907 EAL: Calling mem event callback 'spdk:(nil)' 00:09:21.907 EAL: request: mp_malloc_sync 00:09:21.907 EAL: No shared files mode enabled, IPC is disabled 00:09:21.907 EAL: Heap on socket 0 was expanded by 10MB 00:09:21.907 EAL: Calling mem event callback 'spdk:(nil)' 00:09:21.907 EAL: request: mp_malloc_sync 00:09:21.907 EAL: No shared files mode enabled, IPC is disabled 00:09:21.907 EAL: Heap on socket 0 was shrunk by 10MB 00:09:21.907 EAL: Trying to obtain current memory policy. 00:09:21.907 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:21.907 EAL: Restoring previous memory policy: 0 00:09:21.907 EAL: Calling mem event callback 'spdk:(nil)' 00:09:21.907 EAL: request: mp_malloc_sync 00:09:21.907 EAL: No shared files mode enabled, IPC is disabled 00:09:21.907 EAL: Heap on socket 0 was expanded by 18MB 00:09:22.166 EAL: Calling mem event callback 'spdk:(nil)' 00:09:22.166 EAL: request: mp_malloc_sync 00:09:22.166 EAL: No shared files mode enabled, IPC is disabled 00:09:22.166 EAL: Heap on socket 0 was shrunk by 18MB 00:09:22.166 EAL: Trying to obtain current memory policy. 
00:09:22.166 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:22.166 EAL: Restoring previous memory policy: 0 00:09:22.166 EAL: Calling mem event callback 'spdk:(nil)' 00:09:22.166 EAL: request: mp_malloc_sync 00:09:22.166 EAL: No shared files mode enabled, IPC is disabled 00:09:22.166 EAL: Heap on socket 0 was expanded by 34MB 00:09:22.166 EAL: Calling mem event callback 'spdk:(nil)' 00:09:22.166 EAL: request: mp_malloc_sync 00:09:22.166 EAL: No shared files mode enabled, IPC is disabled 00:09:22.166 EAL: Heap on socket 0 was shrunk by 34MB 00:09:22.166 EAL: Trying to obtain current memory policy. 00:09:22.166 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:22.166 EAL: Restoring previous memory policy: 0 00:09:22.166 EAL: Calling mem event callback 'spdk:(nil)' 00:09:22.166 EAL: request: mp_malloc_sync 00:09:22.166 EAL: No shared files mode enabled, IPC is disabled 00:09:22.166 EAL: Heap on socket 0 was expanded by 66MB 00:09:22.425 EAL: Calling mem event callback 'spdk:(nil)' 00:09:22.425 EAL: request: mp_malloc_sync 00:09:22.425 EAL: No shared files mode enabled, IPC is disabled 00:09:22.425 EAL: Heap on socket 0 was shrunk by 66MB 00:09:22.425 EAL: Trying to obtain current memory policy. 00:09:22.425 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:22.425 EAL: Restoring previous memory policy: 0 00:09:22.425 EAL: Calling mem event callback 'spdk:(nil)' 00:09:22.425 EAL: request: mp_malloc_sync 00:09:22.425 EAL: No shared files mode enabled, IPC is disabled 00:09:22.425 EAL: Heap on socket 0 was expanded by 130MB 00:09:22.684 EAL: Calling mem event callback 'spdk:(nil)' 00:09:22.684 EAL: request: mp_malloc_sync 00:09:22.684 EAL: No shared files mode enabled, IPC is disabled 00:09:22.684 EAL: Heap on socket 0 was shrunk by 130MB 00:09:22.943 EAL: Trying to obtain current memory policy. 00:09:22.943 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:22.943 EAL: Restoring previous memory policy: 0 00:09:22.943 EAL: Calling mem event callback 'spdk:(nil)' 00:09:22.943 EAL: request: mp_malloc_sync 00:09:22.943 EAL: No shared files mode enabled, IPC is disabled 00:09:22.943 EAL: Heap on socket 0 was expanded by 258MB 00:09:23.511 EAL: Calling mem event callback 'spdk:(nil)' 00:09:23.511 EAL: request: mp_malloc_sync 00:09:23.511 EAL: No shared files mode enabled, IPC is disabled 00:09:23.511 EAL: Heap on socket 0 was shrunk by 258MB 00:09:24.078 EAL: Trying to obtain current memory policy. 00:09:24.078 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:24.078 EAL: Restoring previous memory policy: 0 00:09:24.078 EAL: Calling mem event callback 'spdk:(nil)' 00:09:24.078 EAL: request: mp_malloc_sync 00:09:24.078 EAL: No shared files mode enabled, IPC is disabled 00:09:24.078 EAL: Heap on socket 0 was expanded by 514MB 00:09:25.014 EAL: Calling mem event callback 'spdk:(nil)' 00:09:25.014 EAL: request: mp_malloc_sync 00:09:25.014 EAL: No shared files mode enabled, IPC is disabled 00:09:25.014 EAL: Heap on socket 0 was shrunk by 514MB 00:09:26.390 EAL: Trying to obtain current memory policy. 
00:09:26.390 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:26.390 EAL: Restoring previous memory policy: 0 00:09:26.390 EAL: Calling mem event callback 'spdk:(nil)' 00:09:26.390 EAL: request: mp_malloc_sync 00:09:26.390 EAL: No shared files mode enabled, IPC is disabled 00:09:26.390 EAL: Heap on socket 0 was expanded by 1026MB 00:09:28.293 EAL: Calling mem event callback 'spdk:(nil)' 00:09:28.293 EAL: request: mp_malloc_sync 00:09:28.293 EAL: No shared files mode enabled, IPC is disabled 00:09:28.293 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:30.196 passed 00:09:30.196 00:09:30.196 Run Summary: Type Total Ran Passed Failed Inactive 00:09:30.196 suites 1 1 n/a 0 0 00:09:30.196 tests 2 2 2 0 0 00:09:30.196 asserts 6356 6356 6356 0 n/a 00:09:30.196 00:09:30.196 Elapsed time = 8.732 seconds 00:09:30.196 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.196 EAL: request: mp_malloc_sync 00:09:30.196 EAL: No shared files mode enabled, IPC is disabled 00:09:30.196 EAL: Heap on socket 0 was shrunk by 2MB 00:09:30.196 EAL: No shared files mode enabled, IPC is disabled 00:09:30.196 EAL: No shared files mode enabled, IPC is disabled 00:09:30.196 EAL: No shared files mode enabled, IPC is disabled 00:09:30.196 00:09:30.196 real 0m9.077s 00:09:30.196 user 0m8.077s 00:09:30.196 sys 0m0.853s 00:09:30.196 00:35:52 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:30.196 00:35:52 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:09:30.196 ************************************ 00:09:30.196 END TEST env_vtophys 00:09:30.196 ************************************ 00:09:30.455 00:35:52 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:30.455 00:35:52 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:30.455 00:35:52 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.455 00:35:52 env -- common/autotest_common.sh@10 -- # set +x 00:09:30.455 ************************************ 00:09:30.455 START TEST env_pci 00:09:30.455 ************************************ 00:09:30.455 00:35:52 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:30.455 00:09:30.455 00:09:30.455 CUnit - A unit testing framework for C - Version 2.1-3 00:09:30.455 http://cunit.sourceforge.net/ 00:09:30.455 00:09:30.455 00:09:30.455 Suite: pci 00:09:30.455 Test: pci_hook ...[2024-07-25 00:35:52.922974] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 112010 has claimed it 00:09:30.455 EAL: Cannot find device (10000:00:01.0) 00:09:30.455 EAL: Failed to attach device on primary process 00:09:30.455 passed 00:09:30.455 00:09:30.455 Run Summary: Type Total Ran Passed Failed Inactive 00:09:30.455 suites 1 1 n/a 0 0 00:09:30.455 tests 1 1 1 0 0 00:09:30.455 asserts 25 25 25 0 n/a 00:09:30.455 00:09:30.455 Elapsed time = 0.004 seconds 00:09:30.455 00:09:30.455 real 0m0.097s 00:09:30.455 user 0m0.050s 00:09:30.455 sys 0m0.047s 00:09:30.455 00:35:52 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:30.455 00:35:52 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:09:30.455 ************************************ 00:09:30.455 END TEST env_pci 00:09:30.455 ************************************ 00:09:30.455 00:35:53 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:30.455 00:35:53 env -- env/env.sh@15 -- # uname 00:09:30.455 00:35:53 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:30.455 00:35:53 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:09:30.455 00:35:53 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:30.455 00:35:53 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:30.455 00:35:53 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.455 00:35:53 env -- common/autotest_common.sh@10 -- # set +x 00:09:30.455 ************************************ 00:09:30.455 START TEST env_dpdk_post_init 00:09:30.455 ************************************ 00:09:30.455 00:35:53 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:30.714 EAL: Detected CPU lcores: 10 00:09:30.714 EAL: Detected NUMA nodes: 1 00:09:30.714 EAL: Detected static linkage of DPDK 00:09:30.714 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:30.714 EAL: Selected IOVA mode 'PA' 00:09:30.714 EAL: VFIO support initialized 00:09:30.714 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:30.714 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:09:30.714 Starting DPDK initialization... 00:09:30.714 Starting SPDK post initialization... 00:09:30.714 SPDK NVMe probe 00:09:30.714 Attaching to 0000:00:10.0 00:09:30.714 Attached to 0000:00:10.0 00:09:30.714 Cleaning up... 00:09:30.973 00:09:30.973 real 0m0.325s 00:09:30.973 user 0m0.102s 00:09:30.973 sys 0m0.124s 00:09:30.973 00:35:53 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:30.973 00:35:53 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:09:30.973 ************************************ 00:09:30.973 END TEST env_dpdk_post_init 00:09:30.973 ************************************ 00:09:30.973 00:35:53 env -- env/env.sh@26 -- # uname 00:09:30.973 00:35:53 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:30.973 00:35:53 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:30.973 00:35:53 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:30.973 00:35:53 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.973 00:35:53 env -- common/autotest_common.sh@10 -- # set +x 00:09:30.973 ************************************ 00:09:30.973 START TEST env_mem_callbacks 00:09:30.973 ************************************ 00:09:30.973 00:35:53 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:30.973 EAL: Detected CPU lcores: 10 00:09:30.973 EAL: Detected NUMA nodes: 1 00:09:30.973 EAL: Detected static linkage of DPDK 00:09:30.973 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:30.973 EAL: Selected IOVA mode 'PA' 00:09:30.973 EAL: VFIO support initialized 00:09:31.232 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:31.232 00:09:31.232 00:09:31.232 CUnit - A unit testing framework for C - Version 2.1-3 00:09:31.232 http://cunit.sourceforge.net/ 00:09:31.232 00:09:31.232 00:09:31.232 Suite: memory 00:09:31.232 Test: test ... 
00:09:31.232 register 0x200000200000 2097152 00:09:31.232 malloc 3145728 00:09:31.232 register 0x200000400000 4194304 00:09:31.232 buf 0x2000004fffc0 len 3145728 PASSED 00:09:31.232 malloc 64 00:09:31.232 buf 0x2000004ffec0 len 64 PASSED 00:09:31.232 malloc 4194304 00:09:31.232 register 0x200000800000 6291456 00:09:31.232 buf 0x2000009fffc0 len 4194304 PASSED 00:09:31.232 free 0x2000004fffc0 3145728 00:09:31.232 free 0x2000004ffec0 64 00:09:31.232 unregister 0x200000400000 4194304 PASSED 00:09:31.232 free 0x2000009fffc0 4194304 00:09:31.232 unregister 0x200000800000 6291456 PASSED 00:09:31.232 malloc 8388608 00:09:31.232 register 0x200000400000 10485760 00:09:31.232 buf 0x2000005fffc0 len 8388608 PASSED 00:09:31.232 free 0x2000005fffc0 8388608 00:09:31.232 unregister 0x200000400000 10485760 PASSED 00:09:31.232 passed 00:09:31.232 00:09:31.232 Run Summary: Type Total Ran Passed Failed Inactive 00:09:31.232 suites 1 1 n/a 0 0 00:09:31.232 tests 1 1 1 0 0 00:09:31.232 asserts 15 15 15 0 n/a 00:09:31.232 00:09:31.232 Elapsed time = 0.102 seconds 00:09:31.232 00:09:31.232 real 0m0.361s 00:09:31.232 user 0m0.167s 00:09:31.232 sys 0m0.093s 00:09:31.232 00:35:53 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:31.232 00:35:53 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:09:31.232 ************************************ 00:09:31.232 END TEST env_mem_callbacks 00:09:31.232 ************************************ 00:09:31.232 00:09:31.232 real 0m10.651s 00:09:31.232 user 0m8.951s 00:09:31.232 sys 0m1.368s 00:09:31.232 00:35:53 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:31.232 00:35:53 env -- common/autotest_common.sh@10 -- # set +x 00:09:31.232 ************************************ 00:09:31.232 END TEST env 00:09:31.232 ************************************ 00:09:31.491 00:35:53 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:31.491 00:35:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:31.491 00:35:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:31.491 00:35:53 -- common/autotest_common.sh@10 -- # set +x 00:09:31.491 ************************************ 00:09:31.491 START TEST rpc 00:09:31.491 ************************************ 00:09:31.491 00:35:53 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:31.491 * Looking for test storage... 00:09:31.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:31.491 00:35:54 rpc -- rpc/rpc.sh@65 -- # spdk_pid=112142 00:09:31.491 00:35:54 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:31.491 00:35:54 rpc -- rpc/rpc.sh@67 -- # waitforlisten 112142 00:09:31.491 00:35:54 rpc -- common/autotest_common.sh@829 -- # '[' -z 112142 ']' 00:09:31.491 00:35:54 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:31.491 00:35:54 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.491 00:35:54 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:31.491 00:35:54 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:31.491 00:35:54 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:31.491 00:35:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.491 [2024-07-25 00:35:54.142179] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:09:31.491 [2024-07-25 00:35:54.142610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112142 ] 00:09:31.750 [2024-07-25 00:35:54.324782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.009 [2024-07-25 00:35:54.578478] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:32.009 [2024-07-25 00:35:54.578958] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 112142' to capture a snapshot of events at runtime. 00:09:32.009 [2024-07-25 00:35:54.579176] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.009 [2024-07-25 00:35:54.579354] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.009 [2024-07-25 00:35:54.579503] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid112142 for offline analysis/debug. 00:09:32.009 [2024-07-25 00:35:54.579741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.945 00:35:55 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:32.945 00:35:55 rpc -- common/autotest_common.sh@862 -- # return 0 00:09:32.945 00:35:55 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:32.945 00:35:55 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:32.945 00:35:55 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:32.945 00:35:55 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:32.945 00:35:55 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:32.945 00:35:55 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:32.945 00:35:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.945 ************************************ 00:09:32.945 START TEST rpc_integrity 00:09:32.945 ************************************ 00:09:32.945 00:35:55 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:09:32.945 00:35:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:32.945 00:35:55 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.945 00:35:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:32.945 00:35:55 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.945 00:35:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:32.945 00:35:55 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:32.945 00:35:55 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:32.945 00:35:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:32.945 00:35:55 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.945 
00:35:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:32.945 00:35:55 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.945 00:35:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:32.945 00:35:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:32.945 00:35:55 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.945 00:35:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:32.945 00:35:55 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.945 00:35:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:32.945 { 00:09:32.945 "name": "Malloc0", 00:09:32.945 "aliases": [ 00:09:32.945 "2df2e609-4e41-4c46-9f07-d0f8c01df43a" 00:09:32.945 ], 00:09:32.945 "product_name": "Malloc disk", 00:09:32.945 "block_size": 512, 00:09:32.945 "num_blocks": 16384, 00:09:32.945 "uuid": "2df2e609-4e41-4c46-9f07-d0f8c01df43a", 00:09:32.945 "assigned_rate_limits": { 00:09:32.945 "rw_ios_per_sec": 0, 00:09:32.945 "rw_mbytes_per_sec": 0, 00:09:32.945 "r_mbytes_per_sec": 0, 00:09:32.945 "w_mbytes_per_sec": 0 00:09:32.945 }, 00:09:32.945 "claimed": false, 00:09:32.945 "zoned": false, 00:09:32.945 "supported_io_types": { 00:09:32.945 "read": true, 00:09:32.945 "write": true, 00:09:32.945 "unmap": true, 00:09:32.945 "flush": true, 00:09:32.945 "reset": true, 00:09:32.945 "nvme_admin": false, 00:09:32.945 "nvme_io": false, 00:09:32.945 "nvme_io_md": false, 00:09:32.945 "write_zeroes": true, 00:09:32.945 "zcopy": true, 00:09:32.945 "get_zone_info": false, 00:09:32.945 "zone_management": false, 00:09:32.945 "zone_append": false, 00:09:32.945 "compare": false, 00:09:32.945 "compare_and_write": false, 00:09:32.945 "abort": true, 00:09:32.945 "seek_hole": false, 00:09:32.945 "seek_data": false, 00:09:32.945 "copy": true, 00:09:32.945 "nvme_iov_md": false 00:09:32.945 }, 00:09:32.945 "memory_domains": [ 00:09:32.945 { 00:09:32.945 "dma_device_id": "system", 00:09:32.945 "dma_device_type": 1 00:09:32.945 }, 00:09:32.945 { 00:09:32.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.945 "dma_device_type": 2 00:09:32.945 } 00:09:32.945 ], 00:09:32.945 "driver_specific": {} 00:09:32.945 } 00:09:32.945 ]' 00:09:32.945 00:35:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:33.204 00:35:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:33.204 00:35:55 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:33.204 00:35:55 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.204 00:35:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:33.204 [2024-07-25 00:35:55.628431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:33.204 [2024-07-25 00:35:55.628525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.204 [2024-07-25 00:35:55.628600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:33.204 [2024-07-25 00:35:55.628636] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.204 [2024-07-25 00:35:55.631345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.204 [2024-07-25 00:35:55.631408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:33.204 Passthru0 00:09:33.204 00:35:55 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:09:33.204 00:35:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:33.204 00:35:55 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.204 00:35:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:33.204 00:35:55 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.204 00:35:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:33.204 { 00:09:33.204 "name": "Malloc0", 00:09:33.204 "aliases": [ 00:09:33.204 "2df2e609-4e41-4c46-9f07-d0f8c01df43a" 00:09:33.204 ], 00:09:33.204 "product_name": "Malloc disk", 00:09:33.204 "block_size": 512, 00:09:33.204 "num_blocks": 16384, 00:09:33.204 "uuid": "2df2e609-4e41-4c46-9f07-d0f8c01df43a", 00:09:33.204 "assigned_rate_limits": { 00:09:33.204 "rw_ios_per_sec": 0, 00:09:33.204 "rw_mbytes_per_sec": 0, 00:09:33.204 "r_mbytes_per_sec": 0, 00:09:33.204 "w_mbytes_per_sec": 0 00:09:33.204 }, 00:09:33.204 "claimed": true, 00:09:33.204 "claim_type": "exclusive_write", 00:09:33.204 "zoned": false, 00:09:33.204 "supported_io_types": { 00:09:33.204 "read": true, 00:09:33.204 "write": true, 00:09:33.204 "unmap": true, 00:09:33.204 "flush": true, 00:09:33.205 "reset": true, 00:09:33.205 "nvme_admin": false, 00:09:33.205 "nvme_io": false, 00:09:33.205 "nvme_io_md": false, 00:09:33.205 "write_zeroes": true, 00:09:33.205 "zcopy": true, 00:09:33.205 "get_zone_info": false, 00:09:33.205 "zone_management": false, 00:09:33.205 "zone_append": false, 00:09:33.205 "compare": false, 00:09:33.205 "compare_and_write": false, 00:09:33.205 "abort": true, 00:09:33.205 "seek_hole": false, 00:09:33.205 "seek_data": false, 00:09:33.205 "copy": true, 00:09:33.205 "nvme_iov_md": false 00:09:33.205 }, 00:09:33.205 "memory_domains": [ 00:09:33.205 { 00:09:33.205 "dma_device_id": "system", 00:09:33.205 "dma_device_type": 1 00:09:33.205 }, 00:09:33.205 { 00:09:33.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.205 "dma_device_type": 2 00:09:33.205 } 00:09:33.205 ], 00:09:33.205 "driver_specific": {} 00:09:33.205 }, 00:09:33.205 { 00:09:33.205 "name": "Passthru0", 00:09:33.205 "aliases": [ 00:09:33.205 "a7ee5cf3-c461-5258-af61-a6cae1549889" 00:09:33.205 ], 00:09:33.205 "product_name": "passthru", 00:09:33.205 "block_size": 512, 00:09:33.205 "num_blocks": 16384, 00:09:33.205 "uuid": "a7ee5cf3-c461-5258-af61-a6cae1549889", 00:09:33.205 "assigned_rate_limits": { 00:09:33.205 "rw_ios_per_sec": 0, 00:09:33.205 "rw_mbytes_per_sec": 0, 00:09:33.205 "r_mbytes_per_sec": 0, 00:09:33.205 "w_mbytes_per_sec": 0 00:09:33.205 }, 00:09:33.205 "claimed": false, 00:09:33.205 "zoned": false, 00:09:33.205 "supported_io_types": { 00:09:33.205 "read": true, 00:09:33.205 "write": true, 00:09:33.205 "unmap": true, 00:09:33.205 "flush": true, 00:09:33.205 "reset": true, 00:09:33.205 "nvme_admin": false, 00:09:33.205 "nvme_io": false, 00:09:33.205 "nvme_io_md": false, 00:09:33.205 "write_zeroes": true, 00:09:33.205 "zcopy": true, 00:09:33.205 "get_zone_info": false, 00:09:33.205 "zone_management": false, 00:09:33.205 "zone_append": false, 00:09:33.205 "compare": false, 00:09:33.205 "compare_and_write": false, 00:09:33.205 "abort": true, 00:09:33.205 "seek_hole": false, 00:09:33.205 "seek_data": false, 00:09:33.205 "copy": true, 00:09:33.205 "nvme_iov_md": false 00:09:33.205 }, 00:09:33.205 "memory_domains": [ 00:09:33.205 { 00:09:33.205 "dma_device_id": "system", 00:09:33.205 "dma_device_type": 1 00:09:33.205 }, 00:09:33.205 { 00:09:33.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.205 "dma_device_type": 
2 00:09:33.205 } 00:09:33.205 ], 00:09:33.205 "driver_specific": { 00:09:33.205 "passthru": { 00:09:33.205 "name": "Passthru0", 00:09:33.205 "base_bdev_name": "Malloc0" 00:09:33.205 } 00:09:33.205 } 00:09:33.205 } 00:09:33.205 ]' 00:09:33.205 00:35:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:33.205 00:35:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:33.205 00:35:55 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:33.205 00:35:55 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.205 00:35:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:33.205 00:35:55 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.205 00:35:55 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:33.205 00:35:55 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.205 00:35:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:33.205 00:35:55 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.205 00:35:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:33.205 00:35:55 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.205 00:35:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:33.205 00:35:55 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.205 00:35:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:33.205 00:35:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:33.205 00:35:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:33.205 00:09:33.205 real 0m0.300s 00:09:33.205 user 0m0.172s 00:09:33.205 sys 0m0.047s 00:09:33.205 00:35:55 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:33.205 00:35:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:33.205 ************************************ 00:09:33.205 END TEST rpc_integrity 00:09:33.205 ************************************ 00:09:33.205 00:35:55 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:33.205 00:35:55 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:33.205 00:35:55 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.205 00:35:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.205 ************************************ 00:09:33.205 START TEST rpc_plugins 00:09:33.205 ************************************ 00:09:33.205 00:35:55 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:09:33.205 00:35:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:33.205 00:35:55 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.205 00:35:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:33.464 00:35:55 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.464 00:35:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:33.464 00:35:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:33.464 00:35:55 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.464 00:35:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:33.464 00:35:55 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.464 00:35:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:33.464 { 00:09:33.464 "name": "Malloc1", 00:09:33.464 
"aliases": [ 00:09:33.464 "52fcd0d8-9714-4698-a087-efc2150f321a" 00:09:33.464 ], 00:09:33.464 "product_name": "Malloc disk", 00:09:33.464 "block_size": 4096, 00:09:33.464 "num_blocks": 256, 00:09:33.464 "uuid": "52fcd0d8-9714-4698-a087-efc2150f321a", 00:09:33.464 "assigned_rate_limits": { 00:09:33.464 "rw_ios_per_sec": 0, 00:09:33.464 "rw_mbytes_per_sec": 0, 00:09:33.464 "r_mbytes_per_sec": 0, 00:09:33.464 "w_mbytes_per_sec": 0 00:09:33.464 }, 00:09:33.464 "claimed": false, 00:09:33.464 "zoned": false, 00:09:33.464 "supported_io_types": { 00:09:33.464 "read": true, 00:09:33.464 "write": true, 00:09:33.464 "unmap": true, 00:09:33.464 "flush": true, 00:09:33.464 "reset": true, 00:09:33.464 "nvme_admin": false, 00:09:33.464 "nvme_io": false, 00:09:33.464 "nvme_io_md": false, 00:09:33.464 "write_zeroes": true, 00:09:33.464 "zcopy": true, 00:09:33.464 "get_zone_info": false, 00:09:33.464 "zone_management": false, 00:09:33.464 "zone_append": false, 00:09:33.464 "compare": false, 00:09:33.464 "compare_and_write": false, 00:09:33.464 "abort": true, 00:09:33.464 "seek_hole": false, 00:09:33.464 "seek_data": false, 00:09:33.464 "copy": true, 00:09:33.464 "nvme_iov_md": false 00:09:33.464 }, 00:09:33.464 "memory_domains": [ 00:09:33.464 { 00:09:33.464 "dma_device_id": "system", 00:09:33.464 "dma_device_type": 1 00:09:33.464 }, 00:09:33.464 { 00:09:33.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.464 "dma_device_type": 2 00:09:33.464 } 00:09:33.464 ], 00:09:33.464 "driver_specific": {} 00:09:33.464 } 00:09:33.464 ]' 00:09:33.464 00:35:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:09:33.464 00:35:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:33.464 00:35:55 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:33.464 00:35:55 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.464 00:35:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:33.464 00:35:55 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.464 00:35:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:33.464 00:35:55 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.464 00:35:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:33.464 00:35:55 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.464 00:35:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:33.464 00:35:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:09:33.464 00:35:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:33.464 00:09:33.464 real 0m0.157s 00:09:33.464 user 0m0.088s 00:09:33.464 sys 0m0.031s 00:09:33.464 00:35:56 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:33.464 00:35:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:33.464 ************************************ 00:09:33.464 END TEST rpc_plugins 00:09:33.464 ************************************ 00:09:33.464 00:35:56 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:33.464 00:35:56 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:33.464 00:35:56 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.464 00:35:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.464 ************************************ 00:09:33.464 START TEST rpc_trace_cmd_test 00:09:33.464 ************************************ 00:09:33.464 00:35:56 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:09:33.464 00:35:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:33.464 00:35:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:33.464 00:35:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.464 00:35:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.464 00:35:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.464 00:35:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:33.464 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid112142", 00:09:33.464 "tpoint_group_mask": "0x8", 00:09:33.464 "iscsi_conn": { 00:09:33.464 "mask": "0x2", 00:09:33.464 "tpoint_mask": "0x0" 00:09:33.464 }, 00:09:33.464 "scsi": { 00:09:33.464 "mask": "0x4", 00:09:33.464 "tpoint_mask": "0x0" 00:09:33.464 }, 00:09:33.464 "bdev": { 00:09:33.464 "mask": "0x8", 00:09:33.464 "tpoint_mask": "0xffffffffffffffff" 00:09:33.464 }, 00:09:33.464 "nvmf_rdma": { 00:09:33.464 "mask": "0x10", 00:09:33.464 "tpoint_mask": "0x0" 00:09:33.464 }, 00:09:33.464 "nvmf_tcp": { 00:09:33.464 "mask": "0x20", 00:09:33.464 "tpoint_mask": "0x0" 00:09:33.464 }, 00:09:33.464 "ftl": { 00:09:33.464 "mask": "0x40", 00:09:33.464 "tpoint_mask": "0x0" 00:09:33.464 }, 00:09:33.464 "blobfs": { 00:09:33.465 "mask": "0x80", 00:09:33.465 "tpoint_mask": "0x0" 00:09:33.465 }, 00:09:33.465 "dsa": { 00:09:33.465 "mask": "0x200", 00:09:33.465 "tpoint_mask": "0x0" 00:09:33.465 }, 00:09:33.465 "thread": { 00:09:33.465 "mask": "0x400", 00:09:33.465 "tpoint_mask": "0x0" 00:09:33.465 }, 00:09:33.465 "nvme_pcie": { 00:09:33.465 "mask": "0x800", 00:09:33.465 "tpoint_mask": "0x0" 00:09:33.465 }, 00:09:33.465 "iaa": { 00:09:33.465 "mask": "0x1000", 00:09:33.465 "tpoint_mask": "0x0" 00:09:33.465 }, 00:09:33.465 "nvme_tcp": { 00:09:33.465 "mask": "0x2000", 00:09:33.465 "tpoint_mask": "0x0" 00:09:33.465 }, 00:09:33.465 "bdev_nvme": { 00:09:33.465 "mask": "0x4000", 00:09:33.465 "tpoint_mask": "0x0" 00:09:33.465 }, 00:09:33.465 "sock": { 00:09:33.465 "mask": "0x8000", 00:09:33.465 "tpoint_mask": "0x0" 00:09:33.465 } 00:09:33.465 }' 00:09:33.465 00:35:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:33.723 00:35:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:09:33.723 00:35:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:33.723 00:35:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:33.723 00:35:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:33.723 00:35:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:33.723 00:35:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:33.723 00:35:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:33.723 00:35:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:33.723 00:35:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:33.723 00:09:33.723 real 0m0.251s 00:09:33.723 user 0m0.220s 00:09:33.723 sys 0m0.024s 00:09:33.723 00:35:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:33.723 00:35:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.723 ************************************ 00:09:33.723 END TEST rpc_trace_cmd_test 00:09:33.723 ************************************ 00:09:33.983 00:35:56 rpc -- rpc/rpc.sh@76 -- # [[ 0 
-eq 1 ]] 00:09:33.983 00:35:56 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:33.983 00:35:56 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:33.983 00:35:56 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:33.983 00:35:56 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.983 00:35:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.983 ************************************ 00:09:33.983 START TEST rpc_daemon_integrity 00:09:33.983 ************************************ 00:09:33.983 00:35:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:09:33.983 00:35:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:33.983 00:35:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.983 00:35:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:33.983 00:35:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.983 00:35:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:33.983 00:35:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:33.983 00:35:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:33.983 00:35:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:33.983 00:35:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.983 00:35:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:33.983 00:35:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.983 00:35:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:33.983 00:35:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:33.983 00:35:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.983 00:35:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:33.983 00:35:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.983 00:35:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:33.983 { 00:09:33.983 "name": "Malloc2", 00:09:33.983 "aliases": [ 00:09:33.983 "5af93504-93e3-4077-bf24-055e7390e7a1" 00:09:33.983 ], 00:09:33.983 "product_name": "Malloc disk", 00:09:33.983 "block_size": 512, 00:09:33.983 "num_blocks": 16384, 00:09:33.983 "uuid": "5af93504-93e3-4077-bf24-055e7390e7a1", 00:09:33.983 "assigned_rate_limits": { 00:09:33.983 "rw_ios_per_sec": 0, 00:09:33.983 "rw_mbytes_per_sec": 0, 00:09:33.983 "r_mbytes_per_sec": 0, 00:09:33.983 "w_mbytes_per_sec": 0 00:09:33.983 }, 00:09:33.983 "claimed": false, 00:09:33.983 "zoned": false, 00:09:33.983 "supported_io_types": { 00:09:33.983 "read": true, 00:09:33.983 "write": true, 00:09:33.983 "unmap": true, 00:09:33.983 "flush": true, 00:09:33.983 "reset": true, 00:09:33.983 "nvme_admin": false, 00:09:33.983 "nvme_io": false, 00:09:33.983 "nvme_io_md": false, 00:09:33.983 "write_zeroes": true, 00:09:33.983 "zcopy": true, 00:09:33.983 "get_zone_info": false, 00:09:33.983 "zone_management": false, 00:09:33.983 "zone_append": false, 00:09:33.983 "compare": false, 00:09:33.983 "compare_and_write": false, 00:09:33.983 "abort": true, 00:09:33.983 "seek_hole": false, 00:09:33.983 "seek_data": false, 00:09:33.983 "copy": true, 00:09:33.983 "nvme_iov_md": false 00:09:33.983 }, 00:09:33.983 "memory_domains": [ 00:09:33.983 { 00:09:33.983 "dma_device_id": "system", 
00:09:33.983 "dma_device_type": 1 00:09:33.983 }, 00:09:33.983 { 00:09:33.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.983 "dma_device_type": 2 00:09:33.983 } 00:09:33.983 ], 00:09:33.983 "driver_specific": {} 00:09:33.983 } 00:09:33.983 ]' 00:09:33.983 00:35:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:33.983 00:35:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:33.983 00:35:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:33.983 00:35:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.983 00:35:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:33.983 [2024-07-25 00:35:56.536358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:33.983 [2024-07-25 00:35:56.536566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.983 [2024-07-25 00:35:56.536663] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:33.983 [2024-07-25 00:35:56.536754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.983 [2024-07-25 00:35:56.539260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.983 [2024-07-25 00:35:56.539427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:33.983 Passthru0 00:09:33.983 00:35:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.983 00:35:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:33.983 00:35:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.983 00:35:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:33.983 00:35:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.983 00:35:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:33.983 { 00:09:33.983 "name": "Malloc2", 00:09:33.983 "aliases": [ 00:09:33.983 "5af93504-93e3-4077-bf24-055e7390e7a1" 00:09:33.983 ], 00:09:33.983 "product_name": "Malloc disk", 00:09:33.983 "block_size": 512, 00:09:33.983 "num_blocks": 16384, 00:09:33.983 "uuid": "5af93504-93e3-4077-bf24-055e7390e7a1", 00:09:33.983 "assigned_rate_limits": { 00:09:33.983 "rw_ios_per_sec": 0, 00:09:33.983 "rw_mbytes_per_sec": 0, 00:09:33.983 "r_mbytes_per_sec": 0, 00:09:33.983 "w_mbytes_per_sec": 0 00:09:33.983 }, 00:09:33.983 "claimed": true, 00:09:33.983 "claim_type": "exclusive_write", 00:09:33.983 "zoned": false, 00:09:33.983 "supported_io_types": { 00:09:33.983 "read": true, 00:09:33.983 "write": true, 00:09:33.983 "unmap": true, 00:09:33.983 "flush": true, 00:09:33.983 "reset": true, 00:09:33.983 "nvme_admin": false, 00:09:33.983 "nvme_io": false, 00:09:33.983 "nvme_io_md": false, 00:09:33.983 "write_zeroes": true, 00:09:33.983 "zcopy": true, 00:09:33.983 "get_zone_info": false, 00:09:33.983 "zone_management": false, 00:09:33.983 "zone_append": false, 00:09:33.983 "compare": false, 00:09:33.983 "compare_and_write": false, 00:09:33.983 "abort": true, 00:09:33.983 "seek_hole": false, 00:09:33.983 "seek_data": false, 00:09:33.983 "copy": true, 00:09:33.983 "nvme_iov_md": false 00:09:33.983 }, 00:09:33.983 "memory_domains": [ 00:09:33.983 { 00:09:33.983 "dma_device_id": "system", 00:09:33.983 "dma_device_type": 1 00:09:33.983 }, 00:09:33.983 { 00:09:33.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:33.983 "dma_device_type": 2 00:09:33.983 } 00:09:33.983 ], 00:09:33.983 "driver_specific": {} 00:09:33.983 }, 00:09:33.983 { 00:09:33.983 "name": "Passthru0", 00:09:33.983 "aliases": [ 00:09:33.984 "f3ef3649-028f-5c81-9bda-5cf9e0ccb7e0" 00:09:33.984 ], 00:09:33.984 "product_name": "passthru", 00:09:33.984 "block_size": 512, 00:09:33.984 "num_blocks": 16384, 00:09:33.984 "uuid": "f3ef3649-028f-5c81-9bda-5cf9e0ccb7e0", 00:09:33.984 "assigned_rate_limits": { 00:09:33.984 "rw_ios_per_sec": 0, 00:09:33.984 "rw_mbytes_per_sec": 0, 00:09:33.984 "r_mbytes_per_sec": 0, 00:09:33.984 "w_mbytes_per_sec": 0 00:09:33.984 }, 00:09:33.984 "claimed": false, 00:09:33.984 "zoned": false, 00:09:33.984 "supported_io_types": { 00:09:33.984 "read": true, 00:09:33.984 "write": true, 00:09:33.984 "unmap": true, 00:09:33.984 "flush": true, 00:09:33.984 "reset": true, 00:09:33.984 "nvme_admin": false, 00:09:33.984 "nvme_io": false, 00:09:33.984 "nvme_io_md": false, 00:09:33.984 "write_zeroes": true, 00:09:33.984 "zcopy": true, 00:09:33.984 "get_zone_info": false, 00:09:33.984 "zone_management": false, 00:09:33.984 "zone_append": false, 00:09:33.984 "compare": false, 00:09:33.984 "compare_and_write": false, 00:09:33.984 "abort": true, 00:09:33.984 "seek_hole": false, 00:09:33.984 "seek_data": false, 00:09:33.984 "copy": true, 00:09:33.984 "nvme_iov_md": false 00:09:33.984 }, 00:09:33.984 "memory_domains": [ 00:09:33.984 { 00:09:33.984 "dma_device_id": "system", 00:09:33.984 "dma_device_type": 1 00:09:33.984 }, 00:09:33.984 { 00:09:33.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.984 "dma_device_type": 2 00:09:33.984 } 00:09:33.984 ], 00:09:33.984 "driver_specific": { 00:09:33.984 "passthru": { 00:09:33.984 "name": "Passthru0", 00:09:33.984 "base_bdev_name": "Malloc2" 00:09:33.984 } 00:09:33.984 } 00:09:33.984 } 00:09:33.984 ]' 00:09:33.984 00:35:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:33.984 00:35:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:33.984 00:35:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:33.984 00:35:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.984 00:35:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:33.984 00:35:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.984 00:35:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:33.984 00:35:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.984 00:35:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:34.242 00:35:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.242 00:35:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:34.242 00:35:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.242 00:35:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:34.242 00:35:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.242 00:35:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:34.242 00:35:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:34.242 00:35:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:34.242 00:09:34.242 real 0m0.306s 00:09:34.242 user 0m0.159s 00:09:34.242 sys 0m0.059s 00:09:34.242 00:35:56 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:34.242 00:35:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:34.242 ************************************ 00:09:34.242 END TEST rpc_daemon_integrity 00:09:34.242 ************************************ 00:09:34.242 00:35:56 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:34.242 00:35:56 rpc -- rpc/rpc.sh@84 -- # killprocess 112142 00:09:34.242 00:35:56 rpc -- common/autotest_common.sh@948 -- # '[' -z 112142 ']' 00:09:34.242 00:35:56 rpc -- common/autotest_common.sh@952 -- # kill -0 112142 00:09:34.242 00:35:56 rpc -- common/autotest_common.sh@953 -- # uname 00:09:34.242 00:35:56 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:34.242 00:35:56 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112142 00:09:34.242 00:35:56 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:34.242 00:35:56 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:34.242 00:35:56 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112142' 00:09:34.242 killing process with pid 112142 00:09:34.242 00:35:56 rpc -- common/autotest_common.sh@967 -- # kill 112142 00:09:34.242 00:35:56 rpc -- common/autotest_common.sh@972 -- # wait 112142 00:09:36.774 ************************************ 00:09:36.774 END TEST rpc 00:09:36.774 ************************************ 00:09:36.774 00:09:36.774 real 0m5.330s 00:09:36.774 user 0m5.952s 00:09:36.774 sys 0m0.858s 00:09:36.774 00:35:59 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:36.774 00:35:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.774 00:35:59 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:36.774 00:35:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:36.774 00:35:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:36.774 00:35:59 -- common/autotest_common.sh@10 -- # set +x 00:09:36.774 ************************************ 00:09:36.774 START TEST skip_rpc 00:09:36.774 ************************************ 00:09:36.774 00:35:59 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:36.774 * Looking for test storage... 
00:09:36.774 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:36.774 00:35:59 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:36.774 00:35:59 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:36.774 00:35:59 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:36.774 00:35:59 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:36.774 00:35:59 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:36.774 00:35:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.034 ************************************ 00:09:37.034 START TEST skip_rpc 00:09:37.034 ************************************ 00:09:37.034 00:35:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:09:37.034 00:35:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=112396 00:09:37.034 00:35:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:37.034 00:35:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:37.034 00:35:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:37.034 [2024-07-25 00:35:59.521935] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:09:37.034 [2024-07-25 00:35:59.522894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112396 ] 00:09:37.293 [2024-07-25 00:35:59.704462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.293 [2024-07-25 00:35:59.906572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.568 00:36:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:42.568 00:36:04 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:42.568 00:36:04 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:42.568 00:36:04 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:09:42.568 00:36:04 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:42.568 00:36:04 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:09:42.568 00:36:04 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:42.568 00:36:04 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:09:42.568 00:36:04 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.568 00:36:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.568 00:36:04 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:42.568 00:36:04 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:42.568 00:36:04 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:42.568 00:36:04 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:42.568 00:36:04 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:42.568 00:36:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:42.568 00:36:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 112396 
00:09:42.568 00:36:04 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 112396 ']' 00:09:42.568 00:36:04 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 112396 00:09:42.568 00:36:04 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:09:42.568 00:36:04 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:42.568 00:36:04 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112396 00:09:42.568 00:36:04 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:42.568 00:36:04 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:42.568 00:36:04 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112396' 00:09:42.568 killing process with pid 112396 00:09:42.568 00:36:04 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 112396 00:09:42.568 00:36:04 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 112396 00:09:44.473 ************************************ 00:09:44.473 END TEST skip_rpc 00:09:44.473 ************************************ 00:09:44.473 00:09:44.473 real 0m7.538s 00:09:44.473 user 0m7.065s 00:09:44.473 sys 0m0.397s 00:09:44.473 00:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:44.473 00:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.473 00:36:07 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:44.473 00:36:07 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:44.473 00:36:07 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:44.473 00:36:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.473 ************************************ 00:09:44.473 START TEST skip_rpc_with_json 00:09:44.473 ************************************ 00:09:44.473 00:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:09:44.473 00:36:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:44.473 00:36:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=112522 00:09:44.473 00:36:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:44.473 00:36:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 112522 00:09:44.473 00:36:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:44.473 00:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 112522 ']' 00:09:44.473 00:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.473 00:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:44.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.473 00:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.473 00:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:44.473 00:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:44.732 [2024-07-25 00:36:07.128378] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:09:44.732 [2024-07-25 00:36:07.129415] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112522 ] 00:09:44.732 [2024-07-25 00:36:07.308266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.991 [2024-07-25 00:36:07.512363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.929 00:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:45.929 00:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:09:45.929 00:36:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:45.929 00:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.929 00:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:45.929 [2024-07-25 00:36:08.290158] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:45.929 request: 00:09:45.929 { 00:09:45.929 "trtype": "tcp", 00:09:45.929 "method": "nvmf_get_transports", 00:09:45.929 "req_id": 1 00:09:45.929 } 00:09:45.929 Got JSON-RPC error response 00:09:45.929 response: 00:09:45.929 { 00:09:45.929 "code": -19, 00:09:45.929 "message": "No such device" 00:09:45.929 } 00:09:45.929 00:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:45.929 00:36:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:45.929 00:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.929 00:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:45.929 [2024-07-25 00:36:08.302305] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:45.929 00:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.929 00:36:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:45.929 00:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.929 00:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:45.929 00:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.929 00:36:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:45.929 { 00:09:45.929 "subsystems": [ 00:09:45.929 { 00:09:45.929 "subsystem": "scheduler", 00:09:45.929 "config": [ 00:09:45.929 { 00:09:45.929 "method": "framework_set_scheduler", 00:09:45.929 "params": { 00:09:45.929 "name": "static" 00:09:45.929 } 00:09:45.929 } 00:09:45.929 ] 00:09:45.929 }, 00:09:45.929 { 00:09:45.929 "subsystem": "vmd", 00:09:45.929 "config": [] 00:09:45.929 }, 00:09:45.929 { 00:09:45.929 "subsystem": "sock", 00:09:45.929 "config": [ 00:09:45.929 { 00:09:45.929 "method": "sock_set_default_impl", 00:09:45.929 "params": { 00:09:45.929 "impl_name": "posix" 00:09:45.929 } 00:09:45.929 }, 00:09:45.929 { 00:09:45.929 "method": "sock_impl_set_options", 00:09:45.929 "params": { 00:09:45.929 "impl_name": "ssl", 00:09:45.929 "recv_buf_size": 4096, 00:09:45.929 "send_buf_size": 4096, 00:09:45.929 "enable_recv_pipe": true, 00:09:45.929 "enable_quickack": false, 00:09:45.929 "enable_placement_id": 0, 
00:09:45.929 "enable_zerocopy_send_server": true, 00:09:45.929 "enable_zerocopy_send_client": false, 00:09:45.929 "zerocopy_threshold": 0, 00:09:45.929 "tls_version": 0, 00:09:45.929 "enable_ktls": false 00:09:45.929 } 00:09:45.929 }, 00:09:45.929 { 00:09:45.929 "method": "sock_impl_set_options", 00:09:45.929 "params": { 00:09:45.929 "impl_name": "posix", 00:09:45.929 "recv_buf_size": 2097152, 00:09:45.929 "send_buf_size": 2097152, 00:09:45.929 "enable_recv_pipe": true, 00:09:45.929 "enable_quickack": false, 00:09:45.929 "enable_placement_id": 0, 00:09:45.929 "enable_zerocopy_send_server": true, 00:09:45.929 "enable_zerocopy_send_client": false, 00:09:45.929 "zerocopy_threshold": 0, 00:09:45.929 "tls_version": 0, 00:09:45.929 "enable_ktls": false 00:09:45.929 } 00:09:45.929 } 00:09:45.929 ] 00:09:45.929 }, 00:09:45.929 { 00:09:45.929 "subsystem": "iobuf", 00:09:45.929 "config": [ 00:09:45.929 { 00:09:45.929 "method": "iobuf_set_options", 00:09:45.929 "params": { 00:09:45.929 "small_pool_count": 8192, 00:09:45.929 "large_pool_count": 1024, 00:09:45.929 "small_bufsize": 8192, 00:09:45.929 "large_bufsize": 135168 00:09:45.929 } 00:09:45.929 } 00:09:45.929 ] 00:09:45.929 }, 00:09:45.929 { 00:09:45.929 "subsystem": "keyring", 00:09:45.929 "config": [] 00:09:45.929 }, 00:09:45.929 { 00:09:45.929 "subsystem": "accel", 00:09:45.929 "config": [ 00:09:45.929 { 00:09:45.929 "method": "accel_set_options", 00:09:45.929 "params": { 00:09:45.929 "small_cache_size": 128, 00:09:45.929 "large_cache_size": 16, 00:09:45.929 "task_count": 2048, 00:09:45.929 "sequence_count": 2048, 00:09:45.929 "buf_count": 2048 00:09:45.929 } 00:09:45.929 } 00:09:45.929 ] 00:09:45.929 }, 00:09:45.929 { 00:09:45.929 "subsystem": "bdev", 00:09:45.929 "config": [ 00:09:45.929 { 00:09:45.929 "method": "bdev_set_options", 00:09:45.929 "params": { 00:09:45.929 "bdev_io_pool_size": 65535, 00:09:45.929 "bdev_io_cache_size": 256, 00:09:45.929 "bdev_auto_examine": true, 00:09:45.929 "iobuf_small_cache_size": 128, 00:09:45.929 "iobuf_large_cache_size": 16 00:09:45.929 } 00:09:45.929 }, 00:09:45.929 { 00:09:45.929 "method": "bdev_raid_set_options", 00:09:45.929 "params": { 00:09:45.929 "process_window_size_kb": 1024, 00:09:45.929 "process_max_bandwidth_mb_sec": 0 00:09:45.929 } 00:09:45.929 }, 00:09:45.929 { 00:09:45.929 "method": "bdev_nvme_set_options", 00:09:45.929 "params": { 00:09:45.929 "action_on_timeout": "none", 00:09:45.929 "timeout_us": 0, 00:09:45.929 "timeout_admin_us": 0, 00:09:45.929 "keep_alive_timeout_ms": 10000, 00:09:45.929 "arbitration_burst": 0, 00:09:45.930 "low_priority_weight": 0, 00:09:45.930 "medium_priority_weight": 0, 00:09:45.930 "high_priority_weight": 0, 00:09:45.930 "nvme_adminq_poll_period_us": 10000, 00:09:45.930 "nvme_ioq_poll_period_us": 0, 00:09:45.930 "io_queue_requests": 0, 00:09:45.930 "delay_cmd_submit": true, 00:09:45.930 "transport_retry_count": 4, 00:09:45.930 "bdev_retry_count": 3, 00:09:45.930 "transport_ack_timeout": 0, 00:09:45.930 "ctrlr_loss_timeout_sec": 0, 00:09:45.930 "reconnect_delay_sec": 0, 00:09:45.930 "fast_io_fail_timeout_sec": 0, 00:09:45.930 "disable_auto_failback": false, 00:09:45.930 "generate_uuids": false, 00:09:45.930 "transport_tos": 0, 00:09:45.930 "nvme_error_stat": false, 00:09:45.930 "rdma_srq_size": 0, 00:09:45.930 "io_path_stat": false, 00:09:45.930 "allow_accel_sequence": false, 00:09:45.930 "rdma_max_cq_size": 0, 00:09:45.930 "rdma_cm_event_timeout_ms": 0, 00:09:45.930 "dhchap_digests": [ 00:09:45.930 "sha256", 00:09:45.930 "sha384", 00:09:45.930 "sha512" 
00:09:45.930 ], 00:09:45.930 "dhchap_dhgroups": [ 00:09:45.930 "null", 00:09:45.930 "ffdhe2048", 00:09:45.930 "ffdhe3072", 00:09:45.930 "ffdhe4096", 00:09:45.930 "ffdhe6144", 00:09:45.930 "ffdhe8192" 00:09:45.930 ] 00:09:45.930 } 00:09:45.930 }, 00:09:45.930 { 00:09:45.930 "method": "bdev_nvme_set_hotplug", 00:09:45.930 "params": { 00:09:45.930 "period_us": 100000, 00:09:45.930 "enable": false 00:09:45.930 } 00:09:45.930 }, 00:09:45.930 { 00:09:45.930 "method": "bdev_iscsi_set_options", 00:09:45.930 "params": { 00:09:45.930 "timeout_sec": 30 00:09:45.930 } 00:09:45.930 }, 00:09:45.930 { 00:09:45.930 "method": "bdev_wait_for_examine" 00:09:45.930 } 00:09:45.930 ] 00:09:45.930 }, 00:09:45.930 { 00:09:45.930 "subsystem": "nvmf", 00:09:45.930 "config": [ 00:09:45.930 { 00:09:45.930 "method": "nvmf_set_config", 00:09:45.930 "params": { 00:09:45.930 "discovery_filter": "match_any", 00:09:45.930 "admin_cmd_passthru": { 00:09:45.930 "identify_ctrlr": false 00:09:45.930 } 00:09:45.930 } 00:09:45.930 }, 00:09:45.930 { 00:09:45.930 "method": "nvmf_set_max_subsystems", 00:09:45.930 "params": { 00:09:45.930 "max_subsystems": 1024 00:09:45.930 } 00:09:45.930 }, 00:09:45.930 { 00:09:45.930 "method": "nvmf_set_crdt", 00:09:45.930 "params": { 00:09:45.930 "crdt1": 0, 00:09:45.930 "crdt2": 0, 00:09:45.930 "crdt3": 0 00:09:45.930 } 00:09:45.930 }, 00:09:45.930 { 00:09:45.930 "method": "nvmf_create_transport", 00:09:45.930 "params": { 00:09:45.930 "trtype": "TCP", 00:09:45.930 "max_queue_depth": 128, 00:09:45.930 "max_io_qpairs_per_ctrlr": 127, 00:09:45.930 "in_capsule_data_size": 4096, 00:09:45.930 "max_io_size": 131072, 00:09:45.930 "io_unit_size": 131072, 00:09:45.930 "max_aq_depth": 128, 00:09:45.930 "num_shared_buffers": 511, 00:09:45.930 "buf_cache_size": 4294967295, 00:09:45.930 "dif_insert_or_strip": false, 00:09:45.930 "zcopy": false, 00:09:45.930 "c2h_success": true, 00:09:45.930 "sock_priority": 0, 00:09:45.930 "abort_timeout_sec": 1, 00:09:45.930 "ack_timeout": 0, 00:09:45.930 "data_wr_pool_size": 0 00:09:45.930 } 00:09:45.930 } 00:09:45.930 ] 00:09:45.930 }, 00:09:45.930 { 00:09:45.930 "subsystem": "nbd", 00:09:45.930 "config": [] 00:09:45.930 }, 00:09:45.930 { 00:09:45.930 "subsystem": "vhost_blk", 00:09:45.930 "config": [] 00:09:45.930 }, 00:09:45.930 { 00:09:45.930 "subsystem": "scsi", 00:09:45.930 "config": null 00:09:45.930 }, 00:09:45.930 { 00:09:45.930 "subsystem": "iscsi", 00:09:45.930 "config": [ 00:09:45.930 { 00:09:45.930 "method": "iscsi_set_options", 00:09:45.930 "params": { 00:09:45.930 "node_base": "iqn.2016-06.io.spdk", 00:09:45.930 "max_sessions": 128, 00:09:45.930 "max_connections_per_session": 2, 00:09:45.930 "max_queue_depth": 64, 00:09:45.930 "default_time2wait": 2, 00:09:45.930 "default_time2retain": 20, 00:09:45.930 "first_burst_length": 8192, 00:09:45.930 "immediate_data": true, 00:09:45.930 "allow_duplicated_isid": false, 00:09:45.930 "error_recovery_level": 0, 00:09:45.930 "nop_timeout": 60, 00:09:45.930 "nop_in_interval": 30, 00:09:45.930 "disable_chap": false, 00:09:45.930 "require_chap": false, 00:09:45.930 "mutual_chap": false, 00:09:45.930 "chap_group": 0, 00:09:45.930 "max_large_datain_per_connection": 64, 00:09:45.930 "max_r2t_per_connection": 4, 00:09:45.930 "pdu_pool_size": 36864, 00:09:45.930 "immediate_data_pool_size": 16384, 00:09:45.930 "data_out_pool_size": 2048 00:09:45.930 } 00:09:45.930 } 00:09:45.930 ] 00:09:45.930 }, 00:09:45.930 { 00:09:45.930 "subsystem": "vhost_scsi", 00:09:45.930 "config": [] 00:09:45.930 } 00:09:45.930 ] 00:09:45.930 } 
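The JSON dump above is what skip_rpc_with_json writes with rpc_cmd save_config and then cats from /home/vagrant/spdk_repo/spdk/test/rpc/config.json. As a minimal sketch of that save/replay round-trip (assuming the default /var/tmp/spdk.sock RPC socket and the repo layout used in this run; the /tmp output path below is hypothetical, not the path the test uses):

    SPDK=/home/vagrant/spdk_repo/spdk

    # Dump the running target's subsystem configuration over the RPC socket.
    $SPDK/scripts/rpc.py save_config > /tmp/config.json

    # Relaunch the target non-interactively and replay that config at startup,
    # mirroring the test's "--no-rpc-server -m 0x1 --json <file>" invocation.
    $SPDK/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json

The test then greps the resulting log for 'TCP Transport Init' to confirm the saved nvmf_create_transport call was actually replayed, as seen further below.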
00:09:45.930 00:36:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:45.930 00:36:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 112522 00:09:45.930 00:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 112522 ']' 00:09:45.930 00:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 112522 00:09:45.930 00:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:09:45.930 00:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:45.930 00:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112522 00:09:45.930 00:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:45.930 00:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:45.930 00:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112522' 00:09:45.930 killing process with pid 112522 00:09:45.930 00:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 112522 00:09:45.930 00:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 112522 00:09:48.463 00:36:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:48.463 00:36:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=112586 00:09:48.463 00:36:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:53.731 00:36:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 112586 00:09:53.731 00:36:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 112586 ']' 00:09:53.731 00:36:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 112586 00:09:53.731 00:36:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:09:53.731 00:36:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:53.731 00:36:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112586 00:09:53.731 killing process with pid 112586 00:09:53.731 00:36:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:53.731 00:36:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:53.731 00:36:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112586' 00:09:53.731 00:36:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 112586 00:09:53.731 00:36:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 112586 00:09:56.261 00:36:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:56.261 00:36:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:56.261 ************************************ 00:09:56.261 END TEST skip_rpc_with_json 00:09:56.261 ************************************ 00:09:56.261 00:09:56.261 real 0m11.427s 00:09:56.261 user 0m10.884s 00:09:56.261 sys 0m0.877s 00:09:56.261 00:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:09:56.261 00:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:56.261 00:36:18 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:56.261 00:36:18 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:56.261 00:36:18 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.261 00:36:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.261 ************************************ 00:09:56.261 START TEST skip_rpc_with_delay 00:09:56.261 ************************************ 00:09:56.261 00:36:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:09:56.261 00:36:18 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:56.261 00:36:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:09:56.261 00:36:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:56.261 00:36:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:56.261 00:36:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:56.261 00:36:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:56.261 00:36:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:56.261 00:36:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:56.261 00:36:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:56.261 00:36:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:56.261 00:36:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:56.261 00:36:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:56.261 [2024-07-25 00:36:18.621823] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:09:56.261 [2024-07-25 00:36:18.622266] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:09:56.261 00:36:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:09:56.261 00:36:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:56.261 00:36:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:56.261 00:36:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:56.261 00:09:56.261 real 0m0.168s 00:09:56.261 user 0m0.073s 00:09:56.261 sys 0m0.092s 00:09:56.261 00:36:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:56.261 00:36:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:56.261 ************************************ 00:09:56.261 END TEST skip_rpc_with_delay 00:09:56.261 ************************************ 00:09:56.261 00:36:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:56.261 00:36:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:56.261 00:36:18 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:56.261 00:36:18 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:56.261 00:36:18 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.261 00:36:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.261 ************************************ 00:09:56.261 START TEST exit_on_failed_rpc_init 00:09:56.261 ************************************ 00:09:56.262 00:36:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:09:56.262 00:36:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=112727 00:09:56.262 00:36:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:56.262 00:36:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 112727 00:09:56.262 00:36:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 112727 ']' 00:09:56.262 00:36:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.262 00:36:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:56.262 00:36:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.262 00:36:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:56.262 00:36:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:56.262 [2024-07-25 00:36:18.859418] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:09:56.262 [2024-07-25 00:36:18.859857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112727 ] 00:09:56.520 [2024-07-25 00:36:19.040700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.779 [2024-07-25 00:36:19.249056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.713 00:36:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:57.713 00:36:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:09:57.713 00:36:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:57.713 00:36:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:57.713 00:36:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:09:57.713 00:36:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:57.713 00:36:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:57.713 00:36:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:57.713 00:36:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:57.713 00:36:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:57.713 00:36:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:57.713 00:36:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:57.713 00:36:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:57.713 00:36:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:57.713 00:36:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:57.713 [2024-07-25 00:36:20.121942] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:09:57.713 [2024-07-25 00:36:20.122474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112757 ] 00:09:57.713 [2024-07-25 00:36:20.296876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.971 [2024-07-25 00:36:20.543933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.971 [2024-07-25 00:36:20.544161] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:09:57.971 [2024-07-25 00:36:20.544309] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:57.971 [2024-07-25 00:36:20.544413] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:58.536 00:36:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:09:58.536 00:36:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:58.536 00:36:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:09:58.536 00:36:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:09:58.536 00:36:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:09:58.536 00:36:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:58.536 00:36:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:58.536 00:36:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 112727 00:09:58.536 00:36:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 112727 ']' 00:09:58.536 00:36:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 112727 00:09:58.536 00:36:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:09:58.536 00:36:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:58.536 00:36:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112727 00:09:58.536 00:36:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:58.536 00:36:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:58.536 00:36:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112727' 00:09:58.536 killing process with pid 112727 00:09:58.536 00:36:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 112727 00:09:58.536 00:36:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 112727 00:10:01.080 ************************************ 00:10:01.080 END TEST exit_on_failed_rpc_init 00:10:01.080 ************************************ 00:10:01.080 00:10:01.080 real 0m4.719s 00:10:01.080 user 0m5.317s 00:10:01.080 sys 0m0.602s 00:10:01.080 00:36:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:01.080 00:36:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:01.080 00:36:23 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:01.080 ************************************ 00:10:01.080 END TEST skip_rpc 00:10:01.080 ************************************ 00:10:01.080 00:10:01.080 real 0m24.230s 00:10:01.080 user 0m23.512s 00:10:01.080 sys 0m2.175s 00:10:01.080 00:36:23 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:01.080 00:36:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.081 00:36:23 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:01.081 00:36:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:01.081 00:36:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:01.081 00:36:23 -- common/autotest_common.sh@10 -- # set +x 
00:10:01.081 ************************************ 00:10:01.081 START TEST rpc_client 00:10:01.081 ************************************ 00:10:01.081 00:36:23 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:01.081 * Looking for test storage... 00:10:01.081 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:10:01.081 00:36:23 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:10:01.338 OK 00:10:01.338 00:36:23 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:10:01.338 ************************************ 00:10:01.338 END TEST rpc_client 00:10:01.338 ************************************ 00:10:01.338 00:10:01.338 real 0m0.196s 00:10:01.338 user 0m0.084s 00:10:01.338 sys 0m0.129s 00:10:01.338 00:36:23 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:01.338 00:36:23 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:10:01.338 00:36:23 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:01.338 00:36:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:01.338 00:36:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:01.338 00:36:23 -- common/autotest_common.sh@10 -- # set +x 00:10:01.338 ************************************ 00:10:01.338 START TEST json_config 00:10:01.338 ************************************ 00:10:01.339 00:36:23 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:01.339 00:36:23 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:01.339 00:36:23 json_config -- nvmf/common.sh@7 -- # uname -s 00:10:01.339 00:36:23 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:01.339 00:36:23 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:01.339 00:36:23 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:01.339 00:36:23 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:01.339 00:36:23 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:01.339 00:36:23 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:01.339 00:36:23 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:01.339 00:36:23 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:01.339 00:36:23 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:01.339 00:36:23 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:01.339 00:36:23 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6a3c5ad6-686a-46d8-8211-d99d92781193 00:10:01.339 00:36:23 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=6a3c5ad6-686a-46d8-8211-d99d92781193 00:10:01.339 00:36:23 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:01.339 00:36:23 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:01.339 00:36:23 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:01.339 00:36:23 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:01.339 00:36:23 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:01.339 00:36:23 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:01.339 00:36:23 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:01.339 00:36:23 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:01.339 00:36:23 json_config -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:01.339 00:36:23 json_config -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:01.339 00:36:23 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:01.339 00:36:23 json_config -- paths/export.sh@5 -- # export PATH 00:10:01.339 00:36:23 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:01.339 00:36:23 json_config -- nvmf/common.sh@47 -- # : 0 00:10:01.339 00:36:23 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:01.339 00:36:23 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:01.339 00:36:23 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:01.339 00:36:23 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:01.339 00:36:23 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:01.339 00:36:23 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:01.339 00:36:23 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:01.339 00:36:23 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:01.339 00:36:23 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:01.339 00:36:23 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:10:01.339 00:36:23 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:10:01.339 00:36:23 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:10:01.339 00:36:23 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:10:01.339 00:36:23 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:10:01.339 00:36:23 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:10:01.339 00:36:23 
json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:10:01.339 00:36:23 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:10:01.339 00:36:23 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:10:01.339 00:36:23 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:10:01.339 00:36:23 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:10:01.339 00:36:23 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:10:01.339 00:36:23 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:10:01.339 00:36:23 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:01.339 INFO: JSON configuration test init 00:10:01.339 00:36:23 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:10:01.339 00:36:23 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:10:01.339 00:36:23 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:10:01.339 00:36:23 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:01.339 00:36:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:01.339 00:36:23 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:10:01.339 00:36:23 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:01.339 00:36:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:01.339 00:36:23 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:10:01.339 00:36:23 json_config -- json_config/common.sh@9 -- # local app=target 00:10:01.339 00:36:23 json_config -- json_config/common.sh@10 -- # shift 00:10:01.339 00:36:23 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:01.339 00:36:23 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:01.339 00:36:23 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:10:01.339 00:36:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:01.339 00:36:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:01.339 00:36:23 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=112920 00:10:01.339 00:36:23 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:01.339 Waiting for target to run... 00:10:01.339 00:36:23 json_config -- json_config/common.sh@25 -- # waitforlisten 112920 /var/tmp/spdk_tgt.sock 00:10:01.339 00:36:23 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:10:01.339 00:36:23 json_config -- common/autotest_common.sh@829 -- # '[' -z 112920 ']' 00:10:01.339 00:36:23 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:01.339 00:36:23 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:01.339 00:36:23 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:10:01.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:01.339 00:36:23 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:01.339 00:36:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:01.597 [2024-07-25 00:36:24.075656] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:10:01.597 [2024-07-25 00:36:24.075893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112920 ] 00:10:02.164 [2024-07-25 00:36:24.517744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.164 [2024-07-25 00:36:24.756506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.423 00:36:25 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:02.423 00:36:25 json_config -- common/autotest_common.sh@862 -- # return 0 00:10:02.423 00:10:02.423 00:36:25 json_config -- json_config/common.sh@26 -- # echo '' 00:10:02.423 00:36:25 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:10:02.423 00:36:25 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:10:02.423 00:36:25 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:02.423 00:36:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:02.423 00:36:25 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:10:02.423 00:36:25 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:10:02.423 00:36:25 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:02.423 00:36:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:02.681 00:36:25 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:10:02.681 00:36:25 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:10:02.681 00:36:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:10:03.615 00:36:26 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:10:03.615 00:36:26 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:10:03.615 00:36:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:03.615 00:36:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:03.615 00:36:26 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:10:03.615 00:36:26 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:10:03.615 00:36:26 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:10:03.615 00:36:26 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:10:03.615 00:36:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:10:03.615 00:36:26 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:10:03.615 00:36:26 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:10:03.615 00:36:26 json_config -- json_config/json_config.sh@48 -- # local get_types 00:10:03.615 00:36:26 json_config -- 
json_config/json_config.sh@50 -- # local type_diff 00:10:03.615 00:36:26 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:10:03.615 00:36:26 json_config -- json_config/json_config.sh@51 -- # sort 00:10:03.615 00:36:26 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:10:03.615 00:36:26 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:10:03.615 00:36:26 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:10:03.615 00:36:26 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:10:03.615 00:36:26 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:10:03.615 00:36:26 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:03.615 00:36:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:03.873 00:36:26 json_config -- json_config/json_config.sh@59 -- # return 0 00:10:03.873 00:36:26 json_config -- json_config/json_config.sh@282 -- # [[ 1 -eq 1 ]] 00:10:03.873 00:36:26 json_config -- json_config/json_config.sh@283 -- # create_bdev_subsystem_config 00:10:03.873 00:36:26 json_config -- json_config/json_config.sh@109 -- # timing_enter create_bdev_subsystem_config 00:10:03.873 00:36:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:03.873 00:36:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:03.873 00:36:26 json_config -- json_config/json_config.sh@111 -- # expected_notifications=() 00:10:03.873 00:36:26 json_config -- json_config/json_config.sh@111 -- # local expected_notifications 00:10:03.873 00:36:26 json_config -- json_config/json_config.sh@115 -- # expected_notifications+=($(get_notifications)) 00:10:03.873 00:36:26 json_config -- json_config/json_config.sh@115 -- # get_notifications 00:10:03.873 00:36:26 json_config -- json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id 00:10:03.873 00:36:26 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:03.873 00:36:26 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:03.874 00:36:26 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0 00:10:03.874 00:36:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:10:03.874 00:36:26 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:10:03.874 00:36:26 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1 00:10:03.874 00:36:26 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:03.874 00:36:26 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:03.874 00:36:26 json_config -- json_config/json_config.sh@117 -- # [[ 1 -eq 1 ]] 00:10:03.874 00:36:26 json_config -- json_config/json_config.sh@118 -- # local lvol_store_base_bdev=Nvme0n1 00:10:03.874 00:36:26 json_config -- json_config/json_config.sh@120 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:10:03.874 00:36:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:10:04.134 Nvme0n1p0 Nvme0n1p1 00:10:04.134 00:36:26 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_split_create Malloc0 3 00:10:04.134 00:36:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_split_create Malloc0 3 00:10:04.405 [2024-07-25 00:36:27.026564] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:04.405 [2024-07-25 00:36:27.026685] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:04.405 00:10:04.405 00:36:27 json_config -- json_config/json_config.sh@122 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:10:04.405 00:36:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:10:04.664 Malloc3 00:10:04.664 00:36:27 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:10:04.664 00:36:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:10:04.922 [2024-07-25 00:36:27.471980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:04.922 [2024-07-25 00:36:27.472078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.923 [2024-07-25 00:36:27.472121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:04.923 [2024-07-25 00:36:27.472144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.923 [2024-07-25 00:36:27.474692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.923 [2024-07-25 00:36:27.474766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:10:04.923 PTBdevFromMalloc3 00:10:04.923 00:36:27 json_config -- json_config/json_config.sh@125 -- # tgt_rpc bdev_null_create Null0 32 512 00:10:04.923 00:36:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:10:05.181 Null0 00:10:05.181 00:36:27 json_config -- json_config/json_config.sh@127 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:10:05.181 00:36:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:10:05.439 Malloc0 00:10:05.439 00:36:28 json_config -- json_config/json_config.sh@128 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:10:05.439 00:36:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:10:05.697 Malloc1 00:10:05.697 00:36:28 json_config -- json_config/json_config.sh@141 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:10:05.697 00:36:28 json_config -- json_config/json_config.sh@144 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:10:06.263 102400+0 records in 00:10:06.263 102400+0 records out 00:10:06.263 104857600 bytes (105 MB, 100 MiB) copied, 0.372802 s, 281 MB/s 00:10:06.263 00:36:28 json_config -- json_config/json_config.sh@145 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:10:06.263 00:36:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_aio_create /sample_aio aio_disk 1024 00:10:06.263 aio_disk 00:10:06.263 00:36:28 json_config -- json_config/json_config.sh@146 -- # expected_notifications+=(bdev_register:aio_disk) 00:10:06.263 00:36:28 json_config -- json_config/json_config.sh@151 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:10:06.263 00:36:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:10:06.521 57333d73-9072-4c52-a59d-8453812e5cfa 00:10:06.521 00:36:29 json_config -- json_config/json_config.sh@158 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:10:06.521 00:36:29 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:10:06.521 00:36:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:10:06.778 00:36:29 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:10:06.779 00:36:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:10:07.036 00:36:29 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:10:07.036 00:36:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:10:07.602 00:36:29 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:10:07.602 00:36:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:10:07.602 00:36:30 json_config -- json_config/json_config.sh@161 -- # [[ 0 -eq 1 ]] 00:10:07.602 00:36:30 json_config -- json_config/json_config.sh@176 -- # [[ 0 -eq 1 ]] 00:10:07.602 00:36:30 json_config -- json_config/json_config.sh@182 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:533bebb8-2172-45dc-bbcd-85dec931ff43 bdev_register:981ee53f-cb29-4a03-97e5-fe8d7e9eb951 bdev_register:0d200b32-e0cc-4a10-b697-53e319fec9db bdev_register:03dbbd61-c3d2-4300-84ef-94635fa06e71 00:10:07.602 00:36:30 json_config -- json_config/json_config.sh@71 -- # local events_to_check 00:10:07.602 00:36:30 json_config -- json_config/json_config.sh@72 -- # local recorded_events 00:10:07.602 00:36:30 json_config -- json_config/json_config.sh@75 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:10:07.602 00:36:30 json_config -- json_config/json_config.sh@75 -- # sort 00:10:07.602 00:36:30 json_config -- json_config/json_config.sh@75 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 
bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:533bebb8-2172-45dc-bbcd-85dec931ff43 bdev_register:981ee53f-cb29-4a03-97e5-fe8d7e9eb951 bdev_register:0d200b32-e0cc-4a10-b697-53e319fec9db bdev_register:03dbbd61-c3d2-4300-84ef-94635fa06e71 00:10:07.602 00:36:30 json_config -- json_config/json_config.sh@76 -- # recorded_events=($(get_notifications | sort)) 00:10:07.602 00:36:30 json_config -- json_config/json_config.sh@76 -- # get_notifications 00:10:07.603 00:36:30 json_config -- json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id 00:10:07.603 00:36:30 json_config -- json_config/json_config.sh@76 -- # sort 00:10:07.603 00:36:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:07.603 00:36:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:07.603 00:36:30 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0 00:10:07.603 00:36:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:10:07.603 00:36:30 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p1 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p0 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc3 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:PTBdevFromMalloc3 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Null0 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p2 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:07.861 
00:36:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p1 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p0 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc1 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:aio_disk 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:533bebb8-2172-45dc-bbcd-85dec931ff43 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:981ee53f-cb29-4a03-97e5-fe8d7e9eb951 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:0d200b32-e0cc-4a10-b697-53e319fec9db 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:03dbbd61-c3d2-4300-84ef-94635fa06e71 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@78 -- # [[ bdev_register:03dbbd61-c3d2-4300-84ef-94635fa06e71 bdev_register:0d200b32-e0cc-4a10-b697-53e319fec9db bdev_register:533bebb8-2172-45dc-bbcd-85dec931ff43 bdev_register:981ee53f-cb29-4a03-97e5-fe8d7e9eb951 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\0\3\d\b\b\d\6\1\-\c\3\d\2\-\4\3\0\0\-\8\4\e\f\-\9\4\6\3\5\f\a\0\6\e\7\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\0\d\2\0\0\b\3\2\-\e\0\c\c\-\4\a\1\0\-\b\6\9\7\-\5\3\e\3\1\9\f\e\c\9\d\b\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\5\3\3\b\e\b\b\8\-\2\1\7\2\-\4\5\d\c\-\b\b\c\d\-\8\5\d\e\c\9\3\1\f\f\4\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\9\8\1\e\e\5\3\f\-\c\b\2\9\-\4\a\0\3\-\9\7\e\5\-\f\e\8\d\7\e\9\e\b\9\5\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ 
\b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k ]] 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@90 -- # cat 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@90 -- # printf ' %s\n' bdev_register:03dbbd61-c3d2-4300-84ef-94635fa06e71 bdev_register:0d200b32-e0cc-4a10-b697-53e319fec9db bdev_register:533bebb8-2172-45dc-bbcd-85dec931ff43 bdev_register:981ee53f-cb29-4a03-97e5-fe8d7e9eb951 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk 00:10:07.861 Expected events matched: 00:10:07.861 bdev_register:03dbbd61-c3d2-4300-84ef-94635fa06e71 00:10:07.861 bdev_register:0d200b32-e0cc-4a10-b697-53e319fec9db 00:10:07.861 bdev_register:533bebb8-2172-45dc-bbcd-85dec931ff43 00:10:07.861 bdev_register:981ee53f-cb29-4a03-97e5-fe8d7e9eb951 00:10:07.861 bdev_register:Malloc0 00:10:07.861 bdev_register:Malloc0p0 00:10:07.861 bdev_register:Malloc0p1 00:10:07.861 bdev_register:Malloc0p2 00:10:07.861 bdev_register:Malloc1 00:10:07.861 bdev_register:Malloc3 00:10:07.861 bdev_register:Null0 00:10:07.861 bdev_register:Nvme0n1 00:10:07.861 bdev_register:Nvme0n1p0 00:10:07.861 bdev_register:Nvme0n1p1 00:10:07.861 bdev_register:PTBdevFromMalloc3 00:10:07.861 bdev_register:aio_disk 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@184 -- # timing_exit create_bdev_subsystem_config 00:10:07.861 00:36:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:07.861 00:36:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@294 -- # [[ 0 -eq 1 ]] 00:10:07.861 00:36:30 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:10:07.861 00:36:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:07.862 00:36:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:08.119 00:36:30 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:10:08.119 00:36:30 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:08.120 00:36:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:08.120 MallocBdevForConfigChangeCheck 00:10:08.120 00:36:30 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:10:08.120 00:36:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:08.120 00:36:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:08.120 00:36:30 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:10:08.120 00:36:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
save_config 00:10:08.730 INFO: shutting down applications... 00:10:08.731 00:36:31 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:10:08.731 00:36:31 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:10:08.731 00:36:31 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:10:08.731 00:36:31 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:10:08.731 00:36:31 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:10:08.731 [2024-07-25 00:36:31.251864] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:10:08.988 Calling clear_vhost_scsi_subsystem 00:10:08.988 Calling clear_iscsi_subsystem 00:10:08.988 Calling clear_vhost_blk_subsystem 00:10:08.988 Calling clear_nbd_subsystem 00:10:08.988 Calling clear_nvmf_subsystem 00:10:08.988 Calling clear_bdev_subsystem 00:10:08.988 00:36:31 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:10:08.988 00:36:31 json_config -- json_config/json_config.sh@347 -- # count=100 00:10:08.988 00:36:31 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:10:08.988 00:36:31 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:08.988 00:36:31 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:10:08.988 00:36:31 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:10:09.245 00:36:31 json_config -- json_config/json_config.sh@349 -- # break 00:10:09.245 00:36:31 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:10:09.245 00:36:31 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:10:09.245 00:36:31 json_config -- json_config/common.sh@31 -- # local app=target 00:10:09.245 00:36:31 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:09.245 00:36:31 json_config -- json_config/common.sh@35 -- # [[ -n 112920 ]] 00:10:09.245 00:36:31 json_config -- json_config/common.sh@38 -- # kill -SIGINT 112920 00:10:09.245 00:36:31 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:09.245 00:36:31 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:09.245 00:36:31 json_config -- json_config/common.sh@41 -- # kill -0 112920 00:10:09.245 00:36:31 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:10:09.811 00:36:32 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:10:09.811 00:36:32 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:09.811 00:36:32 json_config -- json_config/common.sh@41 -- # kill -0 112920 00:10:09.811 00:36:32 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:10:10.378 00:36:32 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:10:10.378 00:36:32 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:10.378 00:36:32 json_config -- json_config/common.sh@41 -- # kill -0 112920 00:10:10.378 00:36:32 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:10:10.949 00:36:33 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:10:10.949 00:36:33 json_config -- json_config/common.sh@40 -- # (( i 
< 30 )) 00:10:10.949 00:36:33 json_config -- json_config/common.sh@41 -- # kill -0 112920 00:10:10.949 00:36:33 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:10.949 00:36:33 json_config -- json_config/common.sh@43 -- # break 00:10:10.949 00:36:33 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:10.949 SPDK target shutdown done 00:10:10.949 00:36:33 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:10.949 INFO: relaunching applications... 00:10:10.949 00:36:33 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:10:10.949 00:36:33 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:10.949 00:36:33 json_config -- json_config/common.sh@9 -- # local app=target 00:10:10.949 00:36:33 json_config -- json_config/common.sh@10 -- # shift 00:10:10.949 00:36:33 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:10.949 00:36:33 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:10.949 00:36:33 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:10:10.949 00:36:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:10.949 00:36:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:10.949 00:36:33 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=113197 00:10:10.949 00:36:33 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:10.949 Waiting for target to run... 00:10:10.949 00:36:33 json_config -- json_config/common.sh@25 -- # waitforlisten 113197 /var/tmp/spdk_tgt.sock 00:10:10.949 00:36:33 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:10.949 00:36:33 json_config -- common/autotest_common.sh@829 -- # '[' -z 113197 ']' 00:10:10.949 00:36:33 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:10.949 00:36:33 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:10.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:10.949 00:36:33 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:10.949 00:36:33 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:10.949 00:36:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:10.949 [2024-07-25 00:36:33.442435] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
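The relaunch above is the heart of the json_config test: the configuration built up over RPC was saved with save_config, the old target (pid 112920) was killed, and a fresh spdk_tgt (pid 113197) is now started with --json pointing at the saved file so the whole bdev topology is reconstructed at boot. A minimal sketch of that save-and-replay cycle, assuming a target already running on /var/tmp/spdk_tgt.sock (the output path is illustrative):

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json   # persist the live configuration
  # stop the old target, then replay the saved config at startup:
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json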
00:10:10.949 [2024-07-25 00:36:33.442704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113197 ] 00:10:11.514 [2024-07-25 00:36:34.073977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.771 [2024-07-25 00:36:34.288722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.704 [2024-07-25 00:36:35.042915] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:10:12.704 [2024-07-25 00:36:35.043039] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:10:12.704 [2024-07-25 00:36:35.050858] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:12.704 [2024-07-25 00:36:35.050906] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:12.704 [2024-07-25 00:36:35.058878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:12.704 [2024-07-25 00:36:35.058926] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:12.704 [2024-07-25 00:36:35.058956] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:12.704 [2024-07-25 00:36:35.156480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:12.704 [2024-07-25 00:36:35.156601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.704 [2024-07-25 00:36:35.156633] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:12.704 [2024-07-25 00:36:35.156663] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.704 [2024-07-25 00:36:35.157166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.704 [2024-07-25 00:36:35.157220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:10:12.704 00:36:35 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:12.704 00:36:35 json_config -- common/autotest_common.sh@862 -- # return 0 00:10:12.704 00:10:12.704 00:36:35 json_config -- json_config/common.sh@26 -- # echo '' 00:10:12.704 00:36:35 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:10:12.704 INFO: Checking if target configuration is the same... 00:10:12.704 00:36:35 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:10:12.704 00:36:35 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:12.704 00:36:35 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:10:12.704 00:36:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:12.704 + '[' 2 -ne 2 ']' 00:10:12.704 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:10:12.704 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:10:12.704 + rootdir=/home/vagrant/spdk_repo/spdk 00:10:12.704 +++ basename /dev/fd/62 00:10:12.704 ++ mktemp /tmp/62.XXX 00:10:12.704 + tmp_file_1=/tmp/62.3pR 00:10:12.704 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:12.704 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:12.704 + tmp_file_2=/tmp/spdk_tgt_config.json.XpN 00:10:12.704 + ret=0 00:10:12.704 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:13.271 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:13.271 + diff -u /tmp/62.3pR /tmp/spdk_tgt_config.json.XpN 00:10:13.271 INFO: JSON config files are the same 00:10:13.271 + echo 'INFO: JSON config files are the same' 00:10:13.271 + rm /tmp/62.3pR /tmp/spdk_tgt_config.json.XpN 00:10:13.271 + exit 0 00:10:13.271 00:36:35 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:10:13.271 00:36:35 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:10:13.271 INFO: changing configuration and checking if this can be detected... 00:10:13.271 00:36:35 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:13.271 00:36:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:13.529 00:36:36 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:10:13.529 00:36:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:13.529 00:36:36 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:13.529 + '[' 2 -ne 2 ']' 00:10:13.530 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:10:13.530 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:10:13.530 + rootdir=/home/vagrant/spdk_repo/spdk 00:10:13.530 +++ basename /dev/fd/62 00:10:13.530 ++ mktemp /tmp/62.XXX 00:10:13.530 + tmp_file_1=/tmp/62.sYd 00:10:13.530 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:13.530 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:13.530 + tmp_file_2=/tmp/spdk_tgt_config.json.RXE 00:10:13.530 + ret=0 00:10:13.530 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:14.097 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:14.097 + diff -u /tmp/62.sYd /tmp/spdk_tgt_config.json.RXE 00:10:14.097 + ret=1 00:10:14.097 + echo '=== Start of file: /tmp/62.sYd ===' 00:10:14.097 + cat /tmp/62.sYd 00:10:14.097 + echo '=== End of file: /tmp/62.sYd ===' 00:10:14.097 + echo '' 00:10:14.097 + echo '=== Start of file: /tmp/spdk_tgt_config.json.RXE ===' 00:10:14.097 + cat /tmp/spdk_tgt_config.json.RXE 00:10:14.097 + echo '=== End of file: /tmp/spdk_tgt_config.json.RXE ===' 00:10:14.097 + echo '' 00:10:14.097 + rm /tmp/62.sYd /tmp/spdk_tgt_config.json.RXE 00:10:14.097 + exit 1 00:10:14.097 INFO: configuration change detected. 00:10:14.097 00:36:36 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 
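The change-detection step just shown works purely on saved JSON: the configuration is dumped before and after deleting the MallocBdevForConfigChangeCheck bdev, both dumps are normalized by config_filter.py -method sort, and a plain diff decides whether anything changed. A minimal sketch of the same comparison by hand, assuming config_filter.py reads the configuration on stdin the way json_diff.sh feeds it (the temporary file names are illustrative):

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/before.json
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/after.json
  diff <(test/json_config/config_filter.py -method sort < /tmp/before.json) \
       <(test/json_config/config_filter.py -method sort < /tmp/after.json) \
    || echo 'configuration change detected'                                    # a non-empty diff means the change was picked up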
00:10:14.097 00:36:36 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:10:14.097 00:36:36 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:10:14.097 00:36:36 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:14.097 00:36:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:14.097 00:36:36 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:10:14.097 00:36:36 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:10:14.097 00:36:36 json_config -- json_config/json_config.sh@321 -- # [[ -n 113197 ]] 00:10:14.097 00:36:36 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:10:14.097 00:36:36 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:10:14.097 00:36:36 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:14.097 00:36:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:14.097 00:36:36 json_config -- json_config/json_config.sh@190 -- # [[ 1 -eq 1 ]] 00:10:14.097 00:36:36 json_config -- json_config/json_config.sh@191 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:10:14.097 00:36:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:10:14.355 00:36:36 json_config -- json_config/json_config.sh@192 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:10:14.355 00:36:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:10:14.613 00:36:37 json_config -- json_config/json_config.sh@193 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:10:14.613 00:36:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:10:14.871 00:36:37 json_config -- json_config/json_config.sh@194 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:10:14.871 00:36:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:10:15.129 00:36:37 json_config -- json_config/json_config.sh@197 -- # uname -s 00:10:15.129 00:36:37 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:10:15.129 00:36:37 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:10:15.129 00:36:37 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:10:15.129 00:36:37 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:10:15.129 00:36:37 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:15.129 00:36:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:15.129 00:36:37 json_config -- json_config/json_config.sh@327 -- # killprocess 113197 00:10:15.129 00:36:37 json_config -- common/autotest_common.sh@948 -- # '[' -z 113197 ']' 00:10:15.129 00:36:37 json_config -- common/autotest_common.sh@952 -- # kill -0 113197 00:10:15.129 00:36:37 json_config -- common/autotest_common.sh@953 -- # uname 00:10:15.129 00:36:37 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:15.129 00:36:37 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113197 00:10:15.129 00:36:37 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:15.129 00:36:37 json_config 
-- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:15.129 00:36:37 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113197' 00:10:15.129 killing process with pid 113197 00:10:15.129 00:36:37 json_config -- common/autotest_common.sh@967 -- # kill 113197 00:10:15.129 00:36:37 json_config -- common/autotest_common.sh@972 -- # wait 113197 00:10:16.503 00:36:38 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:16.503 00:36:38 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:10:16.503 00:36:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:16.503 00:36:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:16.503 00:36:39 json_config -- json_config/json_config.sh@332 -- # return 0 00:10:16.503 INFO: Success 00:10:16.503 00:36:39 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:10:16.503 00:10:16.503 real 0m15.144s 00:10:16.503 user 0m20.598s 00:10:16.503 sys 0m3.263s 00:10:16.503 00:36:39 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:16.503 00:36:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:16.503 ************************************ 00:10:16.503 END TEST json_config 00:10:16.503 ************************************ 00:10:16.503 00:36:39 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:16.503 00:36:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:16.503 00:36:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:16.503 00:36:39 -- common/autotest_common.sh@10 -- # set +x 00:10:16.503 ************************************ 00:10:16.503 START TEST json_config_extra_key 00:10:16.503 ************************************ 00:10:16.503 00:36:39 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:16.503 00:36:39 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:16.503 00:36:39 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:10:16.503 00:36:39 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:16.504 00:36:39 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:16.504 00:36:39 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:16.504 00:36:39 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:16.504 00:36:39 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:16.504 00:36:39 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:16.504 00:36:39 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:16.504 00:36:39 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:16.504 00:36:39 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:16.504 00:36:39 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:16.504 00:36:39 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4ea83924-a876-48b4-8d2e-fcb064a42e7e 00:10:16.504 00:36:39 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=4ea83924-a876-48b4-8d2e-fcb064a42e7e 00:10:16.504 
00:36:39 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:16.504 00:36:39 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:16.504 00:36:39 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:16.504 00:36:39 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:16.762 00:36:39 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:16.762 00:36:39 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:16.762 00:36:39 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:16.762 00:36:39 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:16.762 00:36:39 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:16.762 00:36:39 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:16.762 00:36:39 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:16.762 00:36:39 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:10:16.763 00:36:39 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:16.763 00:36:39 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:10:16.763 00:36:39 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:16.763 00:36:39 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:16.763 00:36:39 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:16.763 00:36:39 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:16.763 00:36:39 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:16.763 00:36:39 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:16.763 00:36:39 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:16.763 00:36:39 json_config_extra_key -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:10:16.763 00:36:39 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:16.763 00:36:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:10:16.763 00:36:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:10:16.763 00:36:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:10:16.763 00:36:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:10:16.763 00:36:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:10:16.763 00:36:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:10:16.763 00:36:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:10:16.763 00:36:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:10:16.763 INFO: launching applications... 00:10:16.763 00:36:39 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:16.763 00:36:39 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:10:16.763 00:36:39 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:16.763 00:36:39 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:10:16.763 00:36:39 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:10:16.763 00:36:39 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:16.763 00:36:39 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:16.763 00:36:39 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:10:16.763 00:36:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:16.763 00:36:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:16.763 00:36:39 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=113383 00:10:16.763 Waiting for target to run... 00:10:16.763 00:36:39 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:16.763 00:36:39 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:16.763 00:36:39 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 113383 /var/tmp/spdk_tgt.sock 00:10:16.763 00:36:39 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 113383 ']' 00:10:16.763 00:36:39 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:16.763 00:36:39 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:16.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:10:16.763 00:36:39 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:16.763 00:36:39 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:16.763 00:36:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:16.763 [2024-07-25 00:36:39.242829] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:10:16.763 [2024-07-25 00:36:39.243013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113383 ] 00:10:17.021 [2024-07-25 00:36:39.648367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.280 [2024-07-25 00:36:39.907172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.285 00:36:40 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:18.285 00:10:18.285 00:36:40 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:10:18.285 00:36:40 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:10:18.285 INFO: shutting down applications... 00:10:18.285 00:36:40 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:10:18.285 00:36:40 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:10:18.285 00:36:40 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:10:18.285 00:36:40 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:18.285 00:36:40 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 113383 ]] 00:10:18.285 00:36:40 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 113383 00:10:18.285 00:36:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:18.285 00:36:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:18.285 00:36:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 113383 00:10:18.285 00:36:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:18.543 00:36:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:18.543 00:36:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:18.543 00:36:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 113383 00:10:18.543 00:36:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:19.110 00:36:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:19.110 00:36:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:19.110 00:36:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 113383 00:10:19.110 00:36:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:19.676 00:36:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:19.676 00:36:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:19.676 00:36:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 113383 00:10:19.676 00:36:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:20.242 00:36:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:20.243 00:36:42 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:10:20.243 00:36:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 113383 00:10:20.243 00:36:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:20.810 00:36:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:20.810 00:36:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:20.810 00:36:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 113383 00:10:20.810 00:36:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:21.068 00:36:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:21.068 00:36:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:21.068 00:36:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 113383 00:10:21.068 00:36:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:21.631 00:36:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:21.631 00:36:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:21.631 00:36:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 113383 00:10:21.631 00:36:44 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:21.631 00:36:44 json_config_extra_key -- json_config/common.sh@43 -- # break 00:10:21.631 00:36:44 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:21.631 SPDK target shutdown done 00:10:21.631 00:36:44 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:21.631 Success 00:10:21.631 00:36:44 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:10:21.631 00:10:21.631 real 0m5.101s 00:10:21.631 user 0m4.561s 00:10:21.631 sys 0m0.536s 00:10:21.631 00:36:44 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:21.631 00:36:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:21.631 ************************************ 00:10:21.631 END TEST json_config_extra_key 00:10:21.631 ************************************ 00:10:21.631 00:36:44 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:21.631 00:36:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:21.631 00:36:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:21.631 00:36:44 -- common/autotest_common.sh@10 -- # set +x 00:10:21.631 ************************************ 00:10:21.631 START TEST alias_rpc 00:10:21.631 ************************************ 00:10:21.631 00:36:44 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:21.889 * Looking for test storage... 00:10:21.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:10:21.889 00:36:44 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:21.889 00:36:44 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=113506 00:10:21.889 00:36:44 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 113506 00:10:21.889 00:36:44 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 113506 ']' 00:10:21.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:21.889 00:36:44 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.889 00:36:44 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:21.889 00:36:44 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:21.889 00:36:44 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.889 00:36:44 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:21.889 00:36:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.889 [2024-07-25 00:36:44.442469] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:10:21.889 [2024-07-25 00:36:44.442691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113506 ] 00:10:22.147 [2024-07-25 00:36:44.620691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.405 [2024-07-25 00:36:44.883389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.341 00:36:45 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:23.341 00:36:45 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:23.341 00:36:45 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:10:23.341 00:36:45 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 113506 00:10:23.341 00:36:45 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 113506 ']' 00:10:23.341 00:36:45 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 113506 00:10:23.341 00:36:45 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:10:23.341 00:36:45 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:23.341 00:36:45 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113506 00:10:23.341 00:36:45 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:23.341 killing process with pid 113506 00:10:23.341 00:36:45 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:23.341 00:36:45 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113506' 00:10:23.341 00:36:45 alias_rpc -- common/autotest_common.sh@967 -- # kill 113506 00:10:23.341 00:36:45 alias_rpc -- common/autotest_common.sh@972 -- # wait 113506 00:10:25.889 00:10:25.889 real 0m4.278s 00:10:25.889 user 0m4.398s 00:10:25.889 sys 0m0.510s 00:10:25.889 ************************************ 00:10:25.889 END TEST alias_rpc 00:10:25.889 ************************************ 00:10:25.889 00:36:48 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:25.889 00:36:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.148 00:36:48 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:10:26.148 00:36:48 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:26.148 00:36:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:26.148 00:36:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:26.148 00:36:48 -- common/autotest_common.sh@10 -- # set +x 00:10:26.148 ************************************ 00:10:26.148 START TEST spdkcli_tcp 00:10:26.148 ************************************ 00:10:26.148 00:36:48 
spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:26.148 * Looking for test storage... 00:10:26.148 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:10:26.148 00:36:48 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:10:26.148 00:36:48 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:10:26.148 00:36:48 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:10:26.148 00:36:48 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:26.148 00:36:48 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:26.148 00:36:48 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:26.148 00:36:48 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:26.148 00:36:48 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:26.148 00:36:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:26.148 00:36:48 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=113628 00:10:26.148 00:36:48 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 113628 00:10:26.148 00:36:48 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 113628 ']' 00:10:26.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.148 00:36:48 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.148 00:36:48 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:26.148 00:36:48 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.148 00:36:48 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:26.148 00:36:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:26.148 00:36:48 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:26.406 [2024-07-25 00:36:48.821391] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:10:26.406 [2024-07-25 00:36:48.821663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113628 ] 00:10:26.406 [2024-07-25 00:36:49.009940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:26.666 [2024-07-25 00:36:49.269304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.666 [2024-07-25 00:36:49.269305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.603 00:36:50 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:27.603 00:36:50 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:10:27.603 00:36:50 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:27.603 00:36:50 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=113652 00:10:27.603 00:36:50 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:27.862 [ 00:10:27.862 "spdk_get_version", 00:10:27.862 "rpc_get_methods", 00:10:27.862 "keyring_get_keys", 00:10:27.862 "trace_get_info", 00:10:27.862 "trace_get_tpoint_group_mask", 00:10:27.862 "trace_disable_tpoint_group", 00:10:27.862 "trace_enable_tpoint_group", 00:10:27.862 "trace_clear_tpoint_mask", 00:10:27.862 "trace_set_tpoint_mask", 00:10:27.862 "framework_get_pci_devices", 00:10:27.862 "framework_get_config", 00:10:27.862 "framework_get_subsystems", 00:10:27.862 "iobuf_get_stats", 00:10:27.862 "iobuf_set_options", 00:10:27.862 "sock_get_default_impl", 00:10:27.862 "sock_set_default_impl", 00:10:27.862 "sock_impl_set_options", 00:10:27.862 "sock_impl_get_options", 00:10:27.862 "vmd_rescan", 00:10:27.862 "vmd_remove_device", 00:10:27.862 "vmd_enable", 00:10:27.862 "accel_get_stats", 00:10:27.862 "accel_set_options", 00:10:27.862 "accel_set_driver", 00:10:27.862 "accel_crypto_key_destroy", 00:10:27.862 "accel_crypto_keys_get", 00:10:27.862 "accel_crypto_key_create", 00:10:27.862 "accel_assign_opc", 00:10:27.862 "accel_get_module_info", 00:10:27.862 "accel_get_opc_assignments", 00:10:27.862 "notify_get_notifications", 00:10:27.862 "notify_get_types", 00:10:27.862 "bdev_get_histogram", 00:10:27.862 "bdev_enable_histogram", 00:10:27.862 "bdev_set_qos_limit", 00:10:27.862 "bdev_set_qd_sampling_period", 00:10:27.862 "bdev_get_bdevs", 00:10:27.862 "bdev_reset_iostat", 00:10:27.862 "bdev_get_iostat", 00:10:27.862 "bdev_examine", 00:10:27.862 "bdev_wait_for_examine", 00:10:27.862 "bdev_set_options", 00:10:27.862 "scsi_get_devices", 00:10:27.862 "thread_set_cpumask", 00:10:27.862 "framework_get_governor", 00:10:27.862 "framework_get_scheduler", 00:10:27.862 "framework_set_scheduler", 00:10:27.862 "framework_get_reactors", 00:10:27.862 "thread_get_io_channels", 00:10:27.862 "thread_get_pollers", 00:10:27.862 "thread_get_stats", 00:10:27.862 "framework_monitor_context_switch", 00:10:27.862 "spdk_kill_instance", 00:10:27.862 "log_enable_timestamps", 00:10:27.862 "log_get_flags", 00:10:27.862 "log_clear_flag", 00:10:27.862 "log_set_flag", 00:10:27.862 "log_get_level", 00:10:27.862 "log_set_level", 00:10:27.862 "log_get_print_level", 00:10:27.862 "log_set_print_level", 00:10:27.862 "framework_enable_cpumask_locks", 00:10:27.862 "framework_disable_cpumask_locks", 00:10:27.862 "framework_wait_init", 00:10:27.862 "framework_start_init", 00:10:27.862 
"virtio_blk_create_transport", 00:10:27.862 "virtio_blk_get_transports", 00:10:27.862 "vhost_controller_set_coalescing", 00:10:27.862 "vhost_get_controllers", 00:10:27.862 "vhost_delete_controller", 00:10:27.862 "vhost_create_blk_controller", 00:10:27.862 "vhost_scsi_controller_remove_target", 00:10:27.862 "vhost_scsi_controller_add_target", 00:10:27.862 "vhost_start_scsi_controller", 00:10:27.862 "vhost_create_scsi_controller", 00:10:27.862 "nbd_get_disks", 00:10:27.862 "nbd_stop_disk", 00:10:27.862 "nbd_start_disk", 00:10:27.862 "env_dpdk_get_mem_stats", 00:10:27.863 "nvmf_update_mdns_prr", 00:10:27.863 "nvmf_stop_mdns_prr", 00:10:27.863 "nvmf_publish_mdns_prr", 00:10:27.863 "nvmf_subsystem_get_listeners", 00:10:27.863 "nvmf_subsystem_get_qpairs", 00:10:27.863 "nvmf_subsystem_get_controllers", 00:10:27.863 "nvmf_get_stats", 00:10:27.863 "nvmf_get_transports", 00:10:27.863 "nvmf_create_transport", 00:10:27.863 "nvmf_get_targets", 00:10:27.863 "nvmf_delete_target", 00:10:27.863 "nvmf_create_target", 00:10:27.863 "nvmf_subsystem_allow_any_host", 00:10:27.863 "nvmf_subsystem_remove_host", 00:10:27.863 "nvmf_subsystem_add_host", 00:10:27.863 "nvmf_ns_remove_host", 00:10:27.863 "nvmf_ns_add_host", 00:10:27.863 "nvmf_subsystem_remove_ns", 00:10:27.863 "nvmf_subsystem_add_ns", 00:10:27.863 "nvmf_subsystem_listener_set_ana_state", 00:10:27.863 "nvmf_discovery_get_referrals", 00:10:27.863 "nvmf_discovery_remove_referral", 00:10:27.863 "nvmf_discovery_add_referral", 00:10:27.863 "nvmf_subsystem_remove_listener", 00:10:27.863 "nvmf_subsystem_add_listener", 00:10:27.863 "nvmf_delete_subsystem", 00:10:27.863 "nvmf_create_subsystem", 00:10:27.863 "nvmf_get_subsystems", 00:10:27.863 "nvmf_set_crdt", 00:10:27.863 "nvmf_set_config", 00:10:27.863 "nvmf_set_max_subsystems", 00:10:27.863 "iscsi_get_histogram", 00:10:27.863 "iscsi_enable_histogram", 00:10:27.863 "iscsi_set_options", 00:10:27.863 "iscsi_get_auth_groups", 00:10:27.863 "iscsi_auth_group_remove_secret", 00:10:27.863 "iscsi_auth_group_add_secret", 00:10:27.863 "iscsi_delete_auth_group", 00:10:27.863 "iscsi_create_auth_group", 00:10:27.863 "iscsi_set_discovery_auth", 00:10:27.863 "iscsi_get_options", 00:10:27.863 "iscsi_target_node_request_logout", 00:10:27.863 "iscsi_target_node_set_redirect", 00:10:27.863 "iscsi_target_node_set_auth", 00:10:27.863 "iscsi_target_node_add_lun", 00:10:27.863 "iscsi_get_stats", 00:10:27.863 "iscsi_get_connections", 00:10:27.863 "iscsi_portal_group_set_auth", 00:10:27.863 "iscsi_start_portal_group", 00:10:27.863 "iscsi_delete_portal_group", 00:10:27.863 "iscsi_create_portal_group", 00:10:27.863 "iscsi_get_portal_groups", 00:10:27.863 "iscsi_delete_target_node", 00:10:27.863 "iscsi_target_node_remove_pg_ig_maps", 00:10:27.863 "iscsi_target_node_add_pg_ig_maps", 00:10:27.863 "iscsi_create_target_node", 00:10:27.863 "iscsi_get_target_nodes", 00:10:27.863 "iscsi_delete_initiator_group", 00:10:27.863 "iscsi_initiator_group_remove_initiators", 00:10:27.863 "iscsi_initiator_group_add_initiators", 00:10:27.863 "iscsi_create_initiator_group", 00:10:27.863 "iscsi_get_initiator_groups", 00:10:27.863 "keyring_linux_set_options", 00:10:27.863 "keyring_file_remove_key", 00:10:27.863 "keyring_file_add_key", 00:10:27.863 "iaa_scan_accel_module", 00:10:27.863 "dsa_scan_accel_module", 00:10:27.863 "ioat_scan_accel_module", 00:10:27.863 "accel_error_inject_error", 00:10:27.863 "bdev_iscsi_delete", 00:10:27.863 "bdev_iscsi_create", 00:10:27.863 "bdev_iscsi_set_options", 00:10:27.863 "bdev_virtio_attach_controller", 00:10:27.863 
"bdev_virtio_scsi_get_devices", 00:10:27.863 "bdev_virtio_detach_controller", 00:10:27.863 "bdev_virtio_blk_set_hotplug", 00:10:27.863 "bdev_ftl_set_property", 00:10:27.863 "bdev_ftl_get_properties", 00:10:27.863 "bdev_ftl_get_stats", 00:10:27.863 "bdev_ftl_unmap", 00:10:27.863 "bdev_ftl_unload", 00:10:27.863 "bdev_ftl_delete", 00:10:27.863 "bdev_ftl_load", 00:10:27.863 "bdev_ftl_create", 00:10:27.863 "bdev_aio_delete", 00:10:27.863 "bdev_aio_rescan", 00:10:27.863 "bdev_aio_create", 00:10:27.863 "blobfs_create", 00:10:27.863 "blobfs_detect", 00:10:27.863 "blobfs_set_cache_size", 00:10:27.863 "bdev_zone_block_delete", 00:10:27.863 "bdev_zone_block_create", 00:10:27.863 "bdev_delay_delete", 00:10:27.863 "bdev_delay_create", 00:10:27.863 "bdev_delay_update_latency", 00:10:27.863 "bdev_split_delete", 00:10:27.863 "bdev_split_create", 00:10:27.863 "bdev_error_inject_error", 00:10:27.863 "bdev_error_delete", 00:10:27.863 "bdev_error_create", 00:10:27.863 "bdev_raid_set_options", 00:10:27.863 "bdev_raid_remove_base_bdev", 00:10:27.863 "bdev_raid_add_base_bdev", 00:10:27.863 "bdev_raid_delete", 00:10:27.863 "bdev_raid_create", 00:10:27.863 "bdev_raid_get_bdevs", 00:10:27.863 "bdev_lvol_set_parent_bdev", 00:10:27.863 "bdev_lvol_set_parent", 00:10:27.863 "bdev_lvol_check_shallow_copy", 00:10:27.863 "bdev_lvol_start_shallow_copy", 00:10:27.863 "bdev_lvol_grow_lvstore", 00:10:27.863 "bdev_lvol_get_lvols", 00:10:27.863 "bdev_lvol_get_lvstores", 00:10:27.863 "bdev_lvol_delete", 00:10:27.863 "bdev_lvol_set_read_only", 00:10:27.863 "bdev_lvol_resize", 00:10:27.863 "bdev_lvol_decouple_parent", 00:10:27.863 "bdev_lvol_inflate", 00:10:27.863 "bdev_lvol_rename", 00:10:27.863 "bdev_lvol_clone_bdev", 00:10:27.863 "bdev_lvol_clone", 00:10:27.863 "bdev_lvol_snapshot", 00:10:27.863 "bdev_lvol_create", 00:10:27.863 "bdev_lvol_delete_lvstore", 00:10:27.863 "bdev_lvol_rename_lvstore", 00:10:27.863 "bdev_lvol_create_lvstore", 00:10:27.863 "bdev_passthru_delete", 00:10:27.863 "bdev_passthru_create", 00:10:27.863 "bdev_nvme_cuse_unregister", 00:10:27.863 "bdev_nvme_cuse_register", 00:10:27.863 "bdev_opal_new_user", 00:10:27.863 "bdev_opal_set_lock_state", 00:10:27.863 "bdev_opal_delete", 00:10:27.863 "bdev_opal_get_info", 00:10:27.863 "bdev_opal_create", 00:10:27.863 "bdev_nvme_opal_revert", 00:10:27.863 "bdev_nvme_opal_init", 00:10:27.863 "bdev_nvme_send_cmd", 00:10:27.863 "bdev_nvme_get_path_iostat", 00:10:27.863 "bdev_nvme_get_mdns_discovery_info", 00:10:27.863 "bdev_nvme_stop_mdns_discovery", 00:10:27.863 "bdev_nvme_start_mdns_discovery", 00:10:27.863 "bdev_nvme_set_multipath_policy", 00:10:27.863 "bdev_nvme_set_preferred_path", 00:10:27.863 "bdev_nvme_get_io_paths", 00:10:27.863 "bdev_nvme_remove_error_injection", 00:10:27.863 "bdev_nvme_add_error_injection", 00:10:27.863 "bdev_nvme_get_discovery_info", 00:10:27.863 "bdev_nvme_stop_discovery", 00:10:27.863 "bdev_nvme_start_discovery", 00:10:27.863 "bdev_nvme_get_controller_health_info", 00:10:27.863 "bdev_nvme_disable_controller", 00:10:27.863 "bdev_nvme_enable_controller", 00:10:27.863 "bdev_nvme_reset_controller", 00:10:27.863 "bdev_nvme_get_transport_statistics", 00:10:27.863 "bdev_nvme_apply_firmware", 00:10:27.863 "bdev_nvme_detach_controller", 00:10:27.863 "bdev_nvme_get_controllers", 00:10:27.863 "bdev_nvme_attach_controller", 00:10:27.863 "bdev_nvme_set_hotplug", 00:10:27.863 "bdev_nvme_set_options", 00:10:27.863 "bdev_null_resize", 00:10:27.863 "bdev_null_delete", 00:10:27.863 "bdev_null_create", 00:10:27.863 "bdev_malloc_delete", 00:10:27.863 
"bdev_malloc_create" 00:10:27.863 ] 00:10:27.863 00:36:50 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:10:27.863 00:36:50 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:27.863 00:36:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:28.122 00:36:50 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:28.122 00:36:50 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 113628 00:10:28.122 00:36:50 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 113628 ']' 00:10:28.122 00:36:50 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 113628 00:10:28.122 00:36:50 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:10:28.122 00:36:50 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:28.122 00:36:50 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113628 00:10:28.122 00:36:50 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:28.122 killing process with pid 113628 00:10:28.122 00:36:50 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:28.122 00:36:50 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113628' 00:10:28.122 00:36:50 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 113628 00:10:28.122 00:36:50 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 113628 00:10:31.409 00:10:31.409 real 0m4.750s 00:10:31.409 user 0m8.186s 00:10:31.409 sys 0m0.833s 00:10:31.409 ************************************ 00:10:31.409 END TEST spdkcli_tcp 00:10:31.409 ************************************ 00:10:31.409 00:36:53 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:31.409 00:36:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:31.409 00:36:53 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:31.409 00:36:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:31.409 00:36:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:31.409 00:36:53 -- common/autotest_common.sh@10 -- # set +x 00:10:31.409 ************************************ 00:10:31.409 START TEST dpdk_mem_utility 00:10:31.409 ************************************ 00:10:31.409 00:36:53 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:31.409 * Looking for test storage... 00:10:31.409 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:10:31.409 00:36:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:31.409 00:36:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=113752 00:10:31.409 00:36:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 113752 00:10:31.409 00:36:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:31.409 00:36:53 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 113752 ']' 00:10:31.409 00:36:53 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.409 00:36:53 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:31.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:31.409 00:36:53 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.409 00:36:53 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:31.409 00:36:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:31.409 [2024-07-25 00:36:53.625511] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:10:31.409 [2024-07-25 00:36:53.626511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113752 ] 00:10:31.409 [2024-07-25 00:36:53.809858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.668 [2024-07-25 00:36:54.070868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.607 00:36:55 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:32.607 00:36:55 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:10:32.607 00:36:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:10:32.607 00:36:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:10:32.607 00:36:55 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.607 00:36:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:32.607 { 00:10:32.607 "filename": "/tmp/spdk_mem_dump.txt" 00:10:32.607 } 00:10:32.607 00:36:55 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.607 00:36:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:32.607 DPDK memory size 820.000000 MiB in 1 heap(s) 00:10:32.607 1 heaps totaling size 820.000000 MiB 00:10:32.607 size: 820.000000 MiB heap id: 0 00:10:32.607 end heaps---------- 00:10:32.607 8 mempools totaling size 598.116089 MiB 00:10:32.607 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:10:32.607 size: 158.602051 MiB name: PDU_data_out_Pool 00:10:32.607 size: 84.521057 MiB name: bdev_io_113752 00:10:32.607 size: 51.011292 MiB name: evtpool_113752 00:10:32.607 size: 50.003479 MiB name: msgpool_113752 00:10:32.607 size: 21.763794 MiB name: PDU_Pool 00:10:32.607 size: 19.513306 MiB name: SCSI_TASK_Pool 00:10:32.607 size: 0.026123 MiB name: Session_Pool 00:10:32.607 end mempools------- 00:10:32.607 6 memzones totaling size 4.142822 MiB 00:10:32.607 size: 1.000366 MiB name: RG_ring_0_113752 00:10:32.607 size: 1.000366 MiB name: RG_ring_1_113752 00:10:32.607 size: 1.000366 MiB name: RG_ring_4_113752 00:10:32.607 size: 1.000366 MiB name: RG_ring_5_113752 00:10:32.607 size: 0.125366 MiB name: RG_ring_2_113752 00:10:32.607 size: 0.015991 MiB name: RG_ring_3_113752 00:10:32.607 end memzones------- 00:10:32.607 00:36:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:10:32.607 heap id: 0 total size: 820.000000 MiB number of busy elements: 221 number of free elements: 18 00:10:32.607 list of free elements. 
size: 18.470947 MiB 00:10:32.607 element at address: 0x200000400000 with size: 1.999451 MiB 00:10:32.607 element at address: 0x200000800000 with size: 1.996887 MiB 00:10:32.607 element at address: 0x200007000000 with size: 1.995972 MiB 00:10:32.607 element at address: 0x20000b200000 with size: 1.995972 MiB 00:10:32.607 element at address: 0x200019100040 with size: 0.999939 MiB 00:10:32.607 element at address: 0x200019500040 with size: 0.999939 MiB 00:10:32.607 element at address: 0x200019600000 with size: 0.999329 MiB 00:10:32.607 element at address: 0x200003e00000 with size: 0.996094 MiB 00:10:32.607 element at address: 0x200032200000 with size: 0.994324 MiB 00:10:32.607 element at address: 0x200018e00000 with size: 0.959656 MiB 00:10:32.607 element at address: 0x200019900040 with size: 0.937256 MiB 00:10:32.607 element at address: 0x200000200000 with size: 0.834106 MiB 00:10:32.607 element at address: 0x20001b000000 with size: 0.562439 MiB 00:10:32.607 element at address: 0x200019200000 with size: 0.489197 MiB 00:10:32.607 element at address: 0x200019a00000 with size: 0.485413 MiB 00:10:32.607 element at address: 0x200013800000 with size: 0.469116 MiB 00:10:32.607 element at address: 0x200028400000 with size: 0.399719 MiB 00:10:32.607 element at address: 0x200003a00000 with size: 0.356140 MiB 00:10:32.607 list of standard malloc elements. size: 199.264648 MiB 00:10:32.607 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:10:32.607 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:10:32.607 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:10:32.607 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:10:32.607 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:10:32.607 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:10:32.607 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:10:32.607 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:10:32.607 element at address: 0x20000b1ff380 with size: 0.000366 MiB 00:10:32.607 element at address: 0x20000b1ff040 with size: 0.000305 MiB 00:10:32.607 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:10:32.607 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d6880 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d6980 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d6a80 with size: 0.000244 MiB 
00:10:32.608 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:10:32.608 element at address: 0x200003aff980 with size: 0.000244 MiB 00:10:32.608 element at address: 0x200003affa80 with size: 0.000244 MiB 00:10:32.608 element at address: 0x200003eff000 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20000b1ff180 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20000b1ff280 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:10:32.608 element at address: 0x200013878180 with size: 0.000244 MiB 00:10:32.608 element at address: 0x200013878280 with size: 0.000244 MiB 00:10:32.608 element at address: 0x200013878380 with size: 0.000244 MiB 00:10:32.608 element at address: 0x200013878480 with size: 0.000244 MiB 00:10:32.608 element at 
address: 0x200013878580 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:10:32.608 element at address: 0x200019abc680 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b08ffc0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0900c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0901c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0902c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0924c0 
with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:10:32.608 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:10:32.609 element at address: 0x200028466540 with size: 0.000244 MiB 00:10:32.609 element at address: 0x200028466640 with size: 0.000244 MiB 
00:10:32.609 element at address: 0x20002846d300 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846d580 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846d680 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846d780 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846d880 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846d980 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846da80 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846db80 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846de80 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846df80 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846e080 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846e180 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846e280 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846e380 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846e480 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846e580 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846e680 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846e780 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846e880 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846e980 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846f080 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846f180 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846f280 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846f380 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846f480 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846f580 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846f680 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846f780 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846f880 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846f980 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:10:32.609 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:10:32.609 list of memzone associated elements. 
size: 602.264404 MiB 00:10:32.609 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:10:32.609 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:10:32.609 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:10:32.609 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:10:32.609 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:10:32.609 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_113752_0 00:10:32.609 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:10:32.609 associated memzone info: size: 48.002930 MiB name: MP_evtpool_113752_0 00:10:32.609 element at address: 0x200003fff340 with size: 48.003113 MiB 00:10:32.609 associated memzone info: size: 48.002930 MiB name: MP_msgpool_113752_0 00:10:32.609 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:10:32.609 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:10:32.609 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:10:32.609 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:10:32.609 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:10:32.609 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_113752 00:10:32.609 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:10:32.609 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_113752 00:10:32.609 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:10:32.609 associated memzone info: size: 1.007996 MiB name: MP_evtpool_113752 00:10:32.609 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:10:32.609 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:10:32.609 element at address: 0x200019abc780 with size: 1.008179 MiB 00:10:32.609 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:10:32.609 element at address: 0x200018efde00 with size: 1.008179 MiB 00:10:32.609 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:10:32.609 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:10:32.609 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:10:32.609 element at address: 0x200003eff100 with size: 1.000549 MiB 00:10:32.609 associated memzone info: size: 1.000366 MiB name: RG_ring_0_113752 00:10:32.609 element at address: 0x200003affb80 with size: 1.000549 MiB 00:10:32.609 associated memzone info: size: 1.000366 MiB name: RG_ring_1_113752 00:10:32.609 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:10:32.609 associated memzone info: size: 1.000366 MiB name: RG_ring_4_113752 00:10:32.609 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:10:32.609 associated memzone info: size: 1.000366 MiB name: RG_ring_5_113752 00:10:32.609 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:10:32.609 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_113752 00:10:32.609 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:10:32.609 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:10:32.609 element at address: 0x200013878680 with size: 0.500549 MiB 00:10:32.609 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:10:32.609 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:10:32.609 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:10:32.609 element at address: 0x200003adf740 with size: 0.125549 MiB 00:10:32.609 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_113752 00:10:32.609 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:10:32.609 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:10:32.609 element at address: 0x200028466740 with size: 0.023804 MiB 00:10:32.609 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:10:32.609 element at address: 0x200003adb500 with size: 0.016174 MiB 00:10:32.609 associated memzone info: size: 0.015991 MiB name: RG_ring_3_113752 00:10:32.609 element at address: 0x20002846c8c0 with size: 0.002502 MiB 00:10:32.609 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:10:32.609 element at address: 0x2000002d6b80 with size: 0.000366 MiB 00:10:32.609 associated memzone info: size: 0.000183 MiB name: MP_msgpool_113752 00:10:32.609 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:10:32.609 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_113752 00:10:32.609 element at address: 0x20002846d400 with size: 0.000366 MiB 00:10:32.609 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:10:32.609 00:36:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:10:32.609 00:36:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 113752 00:10:32.609 00:36:55 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 113752 ']' 00:10:32.609 00:36:55 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 113752 00:10:32.609 00:36:55 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:10:32.609 00:36:55 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:32.609 00:36:55 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113752 00:10:32.609 00:36:55 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:32.609 killing process with pid 113752 00:10:32.609 00:36:55 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:32.609 00:36:55 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113752' 00:10:32.609 00:36:55 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 113752 00:10:32.609 00:36:55 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 113752 00:10:35.898 00:10:35.898 real 0m4.549s 00:10:35.899 user 0m4.358s 00:10:35.899 sys 0m0.773s 00:10:35.899 00:36:57 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:35.899 00:36:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:35.899 ************************************ 00:10:35.899 END TEST dpdk_mem_utility 00:10:35.899 ************************************ 00:10:35.899 00:36:58 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:35.899 00:36:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:35.899 00:36:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.899 00:36:58 -- common/autotest_common.sh@10 -- # set +x 00:10:35.899 ************************************ 00:10:35.899 START TEST event 00:10:35.899 ************************************ 00:10:35.899 00:36:58 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:35.899 * Looking for test storage... 
00:10:35.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:35.899 00:36:58 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:35.899 00:36:58 event -- bdev/nbd_common.sh@6 -- # set -e 00:10:35.899 00:36:58 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:35.899 00:36:58 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:10:35.899 00:36:58 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.899 00:36:58 event -- common/autotest_common.sh@10 -- # set +x 00:10:35.899 ************************************ 00:10:35.899 START TEST event_perf 00:10:35.899 ************************************ 00:10:35.899 00:36:58 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:35.899 Running I/O for 1 seconds...[2024-07-25 00:36:58.189376] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:10:35.899 [2024-07-25 00:36:58.189623] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113882 ] 00:10:35.899 [2024-07-25 00:36:58.389394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:36.158 [2024-07-25 00:36:58.658872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.158 [2024-07-25 00:36:58.658993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:36.158 [2024-07-25 00:36:58.659154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.158 Running I/O for 1 seconds...[2024-07-25 00:36:58.659159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.535 00:10:37.535 lcore 0: 84732 00:10:37.535 lcore 1: 84721 00:10:37.535 lcore 2: 84725 00:10:37.535 lcore 3: 84728 00:10:37.535 done. 00:10:37.535 00:10:37.535 real 0m2.028s 00:10:37.535 user 0m4.751s 00:10:37.535 sys 0m0.176s 00:10:37.535 00:37:00 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:37.535 00:37:00 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:10:37.535 ************************************ 00:10:37.535 END TEST event_perf 00:10:37.535 ************************************ 00:10:37.795 00:37:00 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:37.795 00:37:00 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:37.795 00:37:00 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:37.795 00:37:00 event -- common/autotest_common.sh@10 -- # set +x 00:10:37.795 ************************************ 00:10:37.795 START TEST event_reactor 00:10:37.795 ************************************ 00:10:37.795 00:37:00 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:37.795 [2024-07-25 00:37:00.276672] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:10:37.795 [2024-07-25 00:37:00.276836] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113934 ] 00:10:37.795 [2024-07-25 00:37:00.441036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.362 [2024-07-25 00:37:00.723695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.756 test_start 00:10:39.756 oneshot 00:10:39.756 tick 100 00:10:39.756 tick 100 00:10:39.756 tick 250 00:10:39.756 tick 100 00:10:39.756 tick 100 00:10:39.756 tick 100 00:10:39.756 tick 250 00:10:39.756 tick 500 00:10:39.756 tick 100 00:10:39.756 tick 100 00:10:39.756 tick 250 00:10:39.756 tick 100 00:10:39.756 tick 100 00:10:39.756 test_end 00:10:39.756 00:10:39.756 real 0m2.012s 00:10:39.756 user 0m1.787s 00:10:39.756 sys 0m0.124s 00:10:39.756 ************************************ 00:10:39.756 END TEST event_reactor 00:10:39.756 ************************************ 00:10:39.756 00:37:02 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:39.756 00:37:02 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:10:39.756 00:37:02 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:39.756 00:37:02 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:39.756 00:37:02 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:39.756 00:37:02 event -- common/autotest_common.sh@10 -- # set +x 00:10:39.756 ************************************ 00:10:39.756 START TEST event_reactor_perf 00:10:39.756 ************************************ 00:10:39.756 00:37:02 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:39.756 [2024-07-25 00:37:02.365432] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:10:39.756 [2024-07-25 00:37:02.365672] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113980 ] 00:10:40.014 [2024-07-25 00:37:02.545487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.273 [2024-07-25 00:37:02.790926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.650 test_start 00:10:41.650 test_end 00:10:41.650 Performance: 407642 events per second 00:10:41.908 00:10:41.908 real 0m1.998s 00:10:41.908 user 0m1.737s 00:10:41.908 sys 0m0.161s 00:10:41.909 00:37:04 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:41.909 00:37:04 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:10:41.909 ************************************ 00:10:41.909 END TEST event_reactor_perf 00:10:41.909 ************************************ 00:10:41.909 00:37:04 event -- event/event.sh@49 -- # uname -s 00:10:41.909 00:37:04 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:41.909 00:37:04 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:41.909 00:37:04 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:41.909 00:37:04 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:41.909 00:37:04 event -- common/autotest_common.sh@10 -- # set +x 00:10:41.909 ************************************ 00:10:41.909 START TEST event_scheduler 00:10:41.909 ************************************ 00:10:41.909 00:37:04 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:41.909 * Looking for test storage... 00:10:41.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:10:41.909 00:37:04 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:41.909 00:37:04 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=114058 00:10:41.909 00:37:04 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:41.909 00:37:04 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 114058 00:10:41.909 00:37:04 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 114058 ']' 00:10:41.909 00:37:04 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.909 00:37:04 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:41.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.909 00:37:04 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.909 00:37:04 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:41.909 00:37:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:41.909 00:37:04 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:42.167 [2024-07-25 00:37:04.564643] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:10:42.167 [2024-07-25 00:37:04.564815] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114058 ] 00:10:42.167 [2024-07-25 00:37:04.751097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:42.425 [2024-07-25 00:37:05.041815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.425 [2024-07-25 00:37:05.042156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:42.425 [2024-07-25 00:37:05.042007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.425 [2024-07-25 00:37:05.042158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:42.990 00:37:05 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:42.990 00:37:05 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:10:42.990 00:37:05 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:42.990 00:37:05 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.990 00:37:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:42.990 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:42.990 POWER: Cannot set governor of lcore 0 to userspace 00:10:42.990 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:42.990 POWER: Cannot set governor of lcore 0 to performance 00:10:42.990 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:42.990 POWER: Cannot set governor of lcore 0 to userspace 00:10:42.990 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:42.990 POWER: Cannot set governor of lcore 0 to userspace 00:10:42.990 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:10:42.990 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:10:42.990 POWER: Unable to set Power Management Environment for lcore 0 00:10:42.990 [2024-07-25 00:37:05.524502] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:10:42.990 [2024-07-25 00:37:05.524586] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:10:42.990 [2024-07-25 00:37:05.524623] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:10:42.990 [2024-07-25 00:37:05.524656] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:10:42.990 [2024-07-25 00:37:05.524690] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:10:42.990 [2024-07-25 00:37:05.524716] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:10:42.990 00:37:05 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.990 00:37:05 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:42.990 00:37:05 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.990 00:37:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:43.248 [2024-07-25 00:37:05.880664] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:10:43.248 00:37:05 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.248 00:37:05 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:43.248 00:37:05 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:43.248 00:37:05 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:43.248 00:37:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:43.248 ************************************ 00:10:43.248 START TEST scheduler_create_thread 00:10:43.248 ************************************ 00:10:43.248 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:10:43.248 00:37:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:43.248 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.248 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.507 2 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.507 3 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.507 4 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.507 5 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.507 6 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.507 7 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.507 8 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.507 9 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.507 10 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.507 00:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:44.442 00:37:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.442 00:37:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:44.442 00:37:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:44.442 00:37:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.442 00:37:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:45.816 00:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.816 00:10:45.816 real 0m2.151s 00:10:45.816 user 0m0.024s 00:10:45.816 sys 0m0.004s 00:10:45.816 ************************************ 00:10:45.816 END TEST scheduler_create_thread 00:10:45.816 ************************************ 00:10:45.816 00:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:45.816 00:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:45.816 00:37:08 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:45.816 00:37:08 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 114058 00:10:45.816 00:37:08 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 114058 ']' 00:10:45.816 00:37:08 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 114058 00:10:45.816 00:37:08 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:10:45.816 00:37:08 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:45.816 00:37:08 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114058 00:10:45.816 00:37:08 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:10:45.816 killing process with pid 114058 00:10:45.816 00:37:08 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:10:45.816 00:37:08 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114058' 00:10:45.816 00:37:08 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 114058 00:10:45.816 00:37:08 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 114058 00:10:46.085 [2024-07-25 00:37:08.526746] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:10:47.479 00:10:47.479 real 0m5.439s 00:10:47.479 user 0m8.832s 00:10:47.479 sys 0m0.464s 00:10:47.479 00:37:09 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:47.479 00:37:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:47.479 ************************************ 00:10:47.479 END TEST event_scheduler 00:10:47.479 ************************************ 00:10:47.479 00:37:09 event -- event/event.sh@51 -- # modprobe -n nbd 00:10:47.479 00:37:09 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:10:47.479 00:37:09 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:47.479 00:37:09 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:47.479 00:37:09 event -- common/autotest_common.sh@10 -- # set +x 00:10:47.479 ************************************ 00:10:47.479 START TEST app_repeat 00:10:47.479 ************************************ 00:10:47.479 00:37:09 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:10:47.479 00:37:09 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:47.479 00:37:09 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:47.479 00:37:09 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:10:47.479 00:37:09 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:47.479 00:37:09 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:10:47.479 00:37:09 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:10:47.479 00:37:09 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:10:47.479 00:37:09 event.app_repeat -- event/event.sh@19 -- # repeat_pid=114183 00:10:47.479 00:37:09 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:10:47.479 Process app_repeat pid: 114183 00:10:47.479 00:37:09 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 114183' 00:10:47.479 00:37:09 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:10:47.479 00:37:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:47.479 spdk_app_start Round 0 00:10:47.479 00:37:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:10:47.479 00:37:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 114183 /var/tmp/spdk-nbd.sock 00:10:47.479 00:37:09 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 114183 ']' 00:10:47.479 00:37:09 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:47.479 00:37:09 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:47.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:47.479 00:37:09 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:47.479 00:37:09 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:47.479 00:37:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:47.479 [2024-07-25 00:37:09.948837] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:10:47.479 [2024-07-25 00:37:09.949014] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114183 ] 00:10:47.479 [2024-07-25 00:37:10.114466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:48.046 [2024-07-25 00:37:10.394489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.046 [2024-07-25 00:37:10.394489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.305 00:37:10 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:48.305 00:37:10 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:10:48.305 00:37:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:48.564 Malloc0 00:10:48.822 00:37:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:49.082 Malloc1 00:10:49.082 00:37:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:49.082 00:37:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:49.082 00:37:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:49.082 00:37:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:49.082 00:37:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:49.082 00:37:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:49.082 00:37:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:49.082 00:37:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:49.082 00:37:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:49.082 00:37:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:49.082 00:37:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:49.082 00:37:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:49.082 00:37:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:49.082 00:37:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:49.082 00:37:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:49.082 00:37:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:49.082 /dev/nbd0 00:10:49.341 00:37:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:49.341 00:37:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:49.341 00:37:11 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:10:49.341 00:37:11 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:49.341 00:37:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:49.341 00:37:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:49.341 00:37:11 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:10:49.341 00:37:11 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:10:49.341 00:37:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:49.341 00:37:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:49.341 00:37:11 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:49.341 1+0 records in 00:10:49.341 1+0 records out 00:10:49.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392548 s, 10.4 MB/s 00:10:49.341 00:37:11 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:49.341 00:37:11 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:49.341 00:37:11 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:49.341 00:37:11 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:49.341 00:37:11 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:49.341 00:37:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:49.341 00:37:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:49.341 00:37:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:49.341 /dev/nbd1 00:10:49.341 00:37:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:49.341 00:37:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:49.341 00:37:11 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:10:49.341 00:37:11 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:49.341 00:37:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:49.341 00:37:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:49.341 00:37:11 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:10:49.341 00:37:11 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:49.341 00:37:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:49.341 00:37:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:49.341 00:37:11 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:49.600 1+0 records in 00:10:49.600 1+0 records out 00:10:49.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326685 s, 12.5 MB/s 00:10:49.600 00:37:11 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:49.600 00:37:11 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:49.600 00:37:11 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:49.601 00:37:11 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:49.601 00:37:11 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:49.601 00:37:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:49.601 00:37:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:49.601 00:37:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:49.601 00:37:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:10:49.601 00:37:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:49.860 { 00:10:49.860 "nbd_device": "/dev/nbd0", 00:10:49.860 "bdev_name": "Malloc0" 00:10:49.860 }, 00:10:49.860 { 00:10:49.860 "nbd_device": "/dev/nbd1", 00:10:49.860 "bdev_name": "Malloc1" 00:10:49.860 } 00:10:49.860 ]' 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:49.860 { 00:10:49.860 "nbd_device": "/dev/nbd0", 00:10:49.860 "bdev_name": "Malloc0" 00:10:49.860 }, 00:10:49.860 { 00:10:49.860 "nbd_device": "/dev/nbd1", 00:10:49.860 "bdev_name": "Malloc1" 00:10:49.860 } 00:10:49.860 ]' 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:49.860 /dev/nbd1' 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:49.860 /dev/nbd1' 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:49.860 256+0 records in 00:10:49.860 256+0 records out 00:10:49.860 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00615299 s, 170 MB/s 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:49.860 256+0 records in 00:10:49.860 256+0 records out 00:10:49.860 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265927 s, 39.4 MB/s 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:49.860 256+0 records in 00:10:49.860 256+0 records out 00:10:49.860 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307104 s, 34.1 MB/s 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:49.860 00:37:12 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:49.860 00:37:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:50.119 00:37:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:50.119 00:37:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:50.119 00:37:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:50.119 00:37:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:50.119 00:37:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:50.119 00:37:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:50.119 00:37:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:50.119 00:37:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:50.119 00:37:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:50.119 00:37:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:50.378 00:37:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:50.378 00:37:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:50.378 00:37:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:50.378 00:37:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:50.378 00:37:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:50.378 00:37:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:50.378 00:37:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:50.378 00:37:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:50.378 00:37:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:50.378 00:37:12 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:50.378 00:37:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:50.637 00:37:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:50.637 00:37:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:50.637 00:37:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:50.637 00:37:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:50.898 00:37:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:50.898 00:37:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:50.898 00:37:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:50.898 00:37:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:50.898 00:37:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:50.898 00:37:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:50.898 00:37:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:50.898 00:37:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:50.898 00:37:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:51.465 00:37:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:53.370 [2024-07-25 00:37:15.520270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:53.370 [2024-07-25 00:37:15.787093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.370 [2024-07-25 00:37:15.787094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.629 [2024-07-25 00:37:16.040955] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:53.629 [2024-07-25 00:37:16.041083] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:54.249 00:37:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:54.249 spdk_app_start Round 1 00:10:54.249 00:37:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:10:54.249 00:37:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 114183 /var/tmp/spdk-nbd.sock 00:10:54.249 00:37:16 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 114183 ']' 00:10:54.249 00:37:16 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:54.249 00:37:16 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:54.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:54.249 00:37:16 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:10:54.249 00:37:16 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:54.249 00:37:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:54.508 00:37:17 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:54.508 00:37:17 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:10:54.508 00:37:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:55.075 Malloc0 00:10:55.075 00:37:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:55.334 Malloc1 00:10:55.334 00:37:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:55.334 00:37:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:55.334 00:37:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:55.334 00:37:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:55.334 00:37:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:55.334 00:37:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:55.334 00:37:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:55.334 00:37:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:55.334 00:37:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:55.334 00:37:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:55.334 00:37:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:55.334 00:37:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:55.334 00:37:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:55.334 00:37:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:55.334 00:37:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:55.334 00:37:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:55.593 /dev/nbd0 00:10:55.593 00:37:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:55.593 00:37:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:55.593 00:37:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:10:55.593 00:37:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:55.593 00:37:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:55.593 00:37:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:55.593 00:37:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:10:55.593 00:37:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:55.593 00:37:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:55.593 00:37:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:55.593 00:37:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:55.593 1+0 records in 00:10:55.593 1+0 records out 
00:10:55.593 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184368 s, 22.2 MB/s 00:10:55.593 00:37:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:55.593 00:37:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:55.593 00:37:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:55.593 00:37:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:55.593 00:37:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:55.593 00:37:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:55.593 00:37:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:55.593 00:37:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:55.852 /dev/nbd1 00:10:55.852 00:37:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:55.852 00:37:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:55.852 00:37:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:10:55.852 00:37:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:55.852 00:37:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:55.852 00:37:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:55.852 00:37:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:10:55.852 00:37:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:55.852 00:37:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:55.852 00:37:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:55.852 00:37:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:55.852 1+0 records in 00:10:55.852 1+0 records out 00:10:55.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258321 s, 15.9 MB/s 00:10:55.852 00:37:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:55.852 00:37:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:55.852 00:37:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:55.852 00:37:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:55.852 00:37:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:55.852 00:37:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:55.852 00:37:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:55.852 00:37:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:55.852 00:37:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:55.852 00:37:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:56.111 00:37:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:56.111 { 00:10:56.111 "nbd_device": "/dev/nbd0", 00:10:56.111 "bdev_name": "Malloc0" 00:10:56.111 }, 00:10:56.111 { 00:10:56.111 "nbd_device": "/dev/nbd1", 00:10:56.111 "bdev_name": "Malloc1" 00:10:56.111 } 
00:10:56.111 ]' 00:10:56.111 00:37:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:56.111 { 00:10:56.111 "nbd_device": "/dev/nbd0", 00:10:56.111 "bdev_name": "Malloc0" 00:10:56.111 }, 00:10:56.111 { 00:10:56.111 "nbd_device": "/dev/nbd1", 00:10:56.111 "bdev_name": "Malloc1" 00:10:56.111 } 00:10:56.111 ]' 00:10:56.111 00:37:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:56.111 00:37:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:56.111 /dev/nbd1' 00:10:56.111 00:37:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:56.111 /dev/nbd1' 00:10:56.111 00:37:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:56.111 00:37:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:56.111 00:37:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:56.111 00:37:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:56.111 00:37:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:56.111 00:37:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:56.111 00:37:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:56.111 00:37:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:56.111 00:37:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:56.111 00:37:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:56.111 00:37:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:56.111 00:37:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:56.111 256+0 records in 00:10:56.111 256+0 records out 00:10:56.111 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00694275 s, 151 MB/s 00:10:56.111 00:37:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:56.111 00:37:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:56.111 256+0 records in 00:10:56.111 256+0 records out 00:10:56.111 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289797 s, 36.2 MB/s 00:10:56.111 00:37:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:56.111 00:37:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:56.111 256+0 records in 00:10:56.111 256+0 records out 00:10:56.111 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0295482 s, 35.5 MB/s 00:10:56.111 00:37:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:56.111 00:37:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:56.111 00:37:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:56.111 00:37:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:56.111 00:37:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:56.111 00:37:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:56.111 00:37:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:56.111 00:37:18 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:56.111 00:37:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:56.111 00:37:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:56.111 00:37:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:56.112 00:37:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:56.112 00:37:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:56.112 00:37:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:56.112 00:37:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:56.112 00:37:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:56.112 00:37:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:56.112 00:37:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:56.112 00:37:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:56.371 00:37:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:56.371 00:37:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:56.371 00:37:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:56.371 00:37:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:56.371 00:37:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:56.371 00:37:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:56.371 00:37:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:56.371 00:37:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:56.371 00:37:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:56.371 00:37:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:56.630 00:37:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:56.630 00:37:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:56.630 00:37:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:56.630 00:37:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:56.630 00:37:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:56.630 00:37:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:56.630 00:37:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:56.630 00:37:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:56.630 00:37:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:56.630 00:37:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:56.630 00:37:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:56.889 00:37:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:57.148 00:37:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:57.148 00:37:19 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:10:57.148 00:37:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:57.148 00:37:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:57.148 00:37:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:57.148 00:37:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:57.148 00:37:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:57.148 00:37:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:57.148 00:37:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:57.148 00:37:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:57.148 00:37:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:57.148 00:37:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:57.717 00:37:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:59.621 [2024-07-25 00:37:21.806384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:59.621 [2024-07-25 00:37:22.071475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.621 [2024-07-25 00:37:22.071476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.881 [2024-07-25 00:37:22.317570] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:59.881 [2024-07-25 00:37:22.317771] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:00.818 00:37:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:00.818 spdk_app_start Round 2 00:11:00.818 00:37:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:11:00.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:00.818 00:37:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 114183 /var/tmp/spdk-nbd.sock 00:11:00.818 00:37:23 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 114183 ']' 00:11:00.818 00:37:23 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:00.818 00:37:23 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:00.818 00:37:23 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
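Editor's note: the trace above and below is one app_repeat round end to end. A condensed sketch of what each round does, reconstructed only from the event.sh and nbd_common.sh commands visible in the trace; the RPC shorthand variable and the $app_pid placeholder are illustrative (the trace uses pid 114183), and waitforlisten is the harness helper from autotest_common.sh:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    waitforlisten "$app_pid" /var/tmp/spdk-nbd.sock      # wait for the app_repeat instance
    $RPC bdev_malloc_create 64 4096                      # Malloc0
    $RPC bdev_malloc_create 64 4096                      # Malloc1
    $RPC nbd_start_disk Malloc0 /dev/nbd0
    $RPC nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of=$tmp bs=4096 count=256         # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if=$tmp of=$dev bs=4096 count=256 oflag=direct   # write it to each exported device
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $tmp $dev                           # must read back identically
    done
    rm $tmp
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1
    [ "$($RPC nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)" -eq 0 ]
    $RPC spdk_kill_instance SIGTERM                      # end of round; event.sh then sleeps 3s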
00:11:00.818 00:37:23 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:00.818 00:37:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:00.818 00:37:23 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:00.818 00:37:23 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:11:00.818 00:37:23 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:01.076 Malloc0 00:11:01.076 00:37:23 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:01.335 Malloc1 00:11:01.335 00:37:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:01.335 00:37:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:01.335 00:37:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:01.335 00:37:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:01.335 00:37:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:01.335 00:37:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:01.335 00:37:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:01.335 00:37:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:01.335 00:37:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:01.335 00:37:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:01.335 00:37:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:01.335 00:37:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:01.335 00:37:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:01.335 00:37:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:01.335 00:37:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:01.335 00:37:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:01.594 /dev/nbd0 00:11:01.594 00:37:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:01.594 00:37:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:01.594 00:37:24 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:11:01.594 00:37:24 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:11:01.594 00:37:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:01.594 00:37:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:01.594 00:37:24 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:11:01.594 00:37:24 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:11:01.594 00:37:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:01.594 00:37:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:01.594 00:37:24 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:01.594 1+0 records in 00:11:01.594 1+0 records out 
00:11:01.594 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281894 s, 14.5 MB/s 00:11:01.594 00:37:24 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:01.594 00:37:24 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:11:01.594 00:37:24 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:01.594 00:37:24 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:01.594 00:37:24 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:11:01.594 00:37:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:01.594 00:37:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:01.594 00:37:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:01.853 /dev/nbd1 00:11:01.853 00:37:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:01.853 00:37:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:01.853 00:37:24 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:11:01.853 00:37:24 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:11:01.853 00:37:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:01.853 00:37:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:01.853 00:37:24 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:11:01.853 00:37:24 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:11:01.853 00:37:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:01.853 00:37:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:01.853 00:37:24 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:01.853 1+0 records in 00:11:01.853 1+0 records out 00:11:01.853 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000689435 s, 5.9 MB/s 00:11:01.853 00:37:24 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:01.853 00:37:24 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:11:01.853 00:37:24 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:01.853 00:37:24 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:01.853 00:37:24 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:11:01.853 00:37:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:01.853 00:37:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:01.853 00:37:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:01.853 00:37:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:01.853 00:37:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:02.112 00:37:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:02.112 { 00:11:02.112 "nbd_device": "/dev/nbd0", 00:11:02.112 "bdev_name": "Malloc0" 00:11:02.112 }, 00:11:02.112 { 00:11:02.112 "nbd_device": "/dev/nbd1", 00:11:02.112 "bdev_name": "Malloc1" 00:11:02.112 } 
00:11:02.112 ]' 00:11:02.112 00:37:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:02.112 { 00:11:02.112 "nbd_device": "/dev/nbd0", 00:11:02.112 "bdev_name": "Malloc0" 00:11:02.112 }, 00:11:02.112 { 00:11:02.112 "nbd_device": "/dev/nbd1", 00:11:02.112 "bdev_name": "Malloc1" 00:11:02.112 } 00:11:02.112 ]' 00:11:02.112 00:37:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:02.112 00:37:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:02.112 /dev/nbd1' 00:11:02.112 00:37:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:02.112 /dev/nbd1' 00:11:02.112 00:37:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:02.112 00:37:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:02.112 00:37:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:02.112 00:37:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:02.112 00:37:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:02.112 00:37:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:02.112 00:37:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:02.112 00:37:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:02.112 00:37:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:02.112 00:37:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:02.112 00:37:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:02.112 00:37:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:02.112 256+0 records in 00:11:02.112 256+0 records out 00:11:02.112 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00656906 s, 160 MB/s 00:11:02.112 00:37:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:02.112 00:37:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:02.112 256+0 records in 00:11:02.112 256+0 records out 00:11:02.112 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0287188 s, 36.5 MB/s 00:11:02.112 00:37:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:02.112 00:37:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:02.370 256+0 records in 00:11:02.370 256+0 records out 00:11:02.370 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0362805 s, 28.9 MB/s 00:11:02.370 00:37:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:02.370 00:37:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:02.370 00:37:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:02.370 00:37:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:02.370 00:37:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:02.370 00:37:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:02.370 00:37:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:02.370 00:37:24 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:02.370 00:37:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:02.370 00:37:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:02.370 00:37:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:02.370 00:37:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:02.370 00:37:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:02.370 00:37:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:02.370 00:37:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:02.370 00:37:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:02.370 00:37:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:02.370 00:37:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:02.370 00:37:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:02.629 00:37:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:02.629 00:37:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:02.629 00:37:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:02.629 00:37:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:02.629 00:37:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:02.629 00:37:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:02.629 00:37:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:02.629 00:37:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:02.629 00:37:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:02.629 00:37:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:02.888 00:37:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:02.888 00:37:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:02.888 00:37:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:02.888 00:37:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:02.888 00:37:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:02.888 00:37:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:02.888 00:37:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:02.888 00:37:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:02.888 00:37:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:02.888 00:37:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:02.888 00:37:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:03.147 00:37:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:03.147 00:37:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:03.147 00:37:25 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:11:03.147 00:37:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:03.147 00:37:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:03.147 00:37:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:03.147 00:37:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:03.147 00:37:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:03.147 00:37:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:03.147 00:37:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:03.147 00:37:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:03.147 00:37:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:03.147 00:37:25 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:03.714 00:37:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:05.616 [2024-07-25 00:37:27.822461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:05.616 [2024-07-25 00:37:28.082411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.616 [2024-07-25 00:37:28.082411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.876 [2024-07-25 00:37:28.324124] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:05.876 [2024-07-25 00:37:28.324287] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:06.813 00:37:29 event.app_repeat -- event/event.sh@38 -- # waitforlisten 114183 /var/tmp/spdk-nbd.sock 00:11:06.813 00:37:29 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 114183 ']' 00:11:06.813 00:37:29 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:06.813 00:37:29 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:06.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:06.813 00:37:29 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
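Editor's note: whenever nbd_start_disk exports a device (traced earlier for this round), the harness polls for it before touching it. Roughly what the waitfornbd helper in the trace does; the retry sleep and the temp-file path here are illustrative, the rest mirrors the commands shown above:

    waitfornbd() {
        local nbd_name=$1 i tmp=/tmp/nbdtest        # trace writes to test/event/nbdtest
        for ((i = 1; i <= 20; i++)); do             # up to 20 attempts, as in the trace
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                               # illustrative back-off, not taken from the trace
        done
        dd if=/dev/$nbd_name of=$tmp bs=4096 count=1 iflag=direct
        [ "$(stat -c %s $tmp)" != 0 ]               # one 4 KiB block must read back
        rm -f $tmp
    }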
00:11:06.813 00:37:29 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:06.813 00:37:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:06.813 00:37:29 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:06.813 00:37:29 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:11:06.813 00:37:29 event.app_repeat -- event/event.sh@39 -- # killprocess 114183 00:11:06.813 00:37:29 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 114183 ']' 00:11:06.813 00:37:29 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 114183 00:11:06.813 00:37:29 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:11:06.813 00:37:29 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:06.813 00:37:29 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114183 00:11:06.813 00:37:29 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:06.813 00:37:29 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:06.813 00:37:29 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114183' 00:11:06.813 killing process with pid 114183 00:11:06.813 00:37:29 event.app_repeat -- common/autotest_common.sh@967 -- # kill 114183 00:11:06.813 00:37:29 event.app_repeat -- common/autotest_common.sh@972 -- # wait 114183 00:11:08.717 spdk_app_start is called in Round 0. 00:11:08.717 Shutdown signal received, stop current app iteration 00:11:08.717 Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 reinitialization... 00:11:08.717 spdk_app_start is called in Round 1. 00:11:08.717 Shutdown signal received, stop current app iteration 00:11:08.717 Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 reinitialization... 00:11:08.717 spdk_app_start is called in Round 2. 00:11:08.717 Shutdown signal received, stop current app iteration 00:11:08.717 Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 reinitialization... 00:11:08.717 spdk_app_start is called in Round 3. 00:11:08.717 Shutdown signal received, stop current app iteration 00:11:08.717 ************************************ 00:11:08.717 END TEST app_repeat 00:11:08.717 ************************************ 00:11:08.717 00:37:30 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:11:08.717 00:37:30 event.app_repeat -- event/event.sh@42 -- # return 0 00:11:08.717 00:11:08.717 real 0m21.033s 00:11:08.717 user 0m43.243s 00:11:08.717 sys 0m3.493s 00:11:08.717 00:37:30 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:08.717 00:37:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:08.717 00:37:30 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:11:08.717 00:37:30 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:08.717 00:37:30 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:08.717 00:37:30 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:08.717 00:37:30 event -- common/autotest_common.sh@10 -- # set +x 00:11:08.717 ************************************ 00:11:08.717 START TEST cpu_locks 00:11:08.717 ************************************ 00:11:08.717 00:37:30 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:08.717 * Looking for test storage... 
00:11:08.717 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:08.717 00:37:31 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:11:08.717 00:37:31 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:11:08.717 00:37:31 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:11:08.717 00:37:31 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:11:08.717 00:37:31 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:08.717 00:37:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:08.717 00:37:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:08.717 ************************************ 00:11:08.717 START TEST default_locks 00:11:08.717 ************************************ 00:11:08.717 00:37:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:11:08.717 00:37:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=114725 00:11:08.717 00:37:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 114725 00:11:08.717 00:37:31 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 114725 ']' 00:11:08.717 00:37:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:08.717 00:37:31 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.717 00:37:31 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:08.717 00:37:31 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.717 00:37:31 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:08.717 00:37:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:08.718 [2024-07-25 00:37:31.224101] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
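Editor's note: default_locks has just launched a single spdk_tgt pinned to core 0 (-m 0x1, pid 114725 above); the trace that follows verifies it holds the per-core lock via lslocks. A minimal sketch of that check, using the waitforlisten helper from autotest_common.sh; $pid stands for the spdk_tgt_pid the test records:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    pid=$!
    waitforlisten "$pid"                        # wait for /var/tmp/spdk.sock
    lslocks -p "$pid" | grep -q spdk_cpu_lock   # the core lock shows up in lslocks output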
00:11:08.718 [2024-07-25 00:37:31.224604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114725 ] 00:11:08.976 [2024-07-25 00:37:31.402608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.235 [2024-07-25 00:37:31.660472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.171 00:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:10.171 00:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:11:10.171 00:37:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 114725 00:11:10.171 00:37:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 114725 00:11:10.171 00:37:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:10.430 00:37:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 114725 00:11:10.430 00:37:33 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 114725 ']' 00:11:10.430 00:37:33 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 114725 00:11:10.430 00:37:33 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:11:10.430 00:37:33 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:10.430 00:37:33 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114725 00:11:10.430 00:37:33 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:10.430 00:37:33 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:10.430 killing process with pid 114725 00:11:10.430 00:37:33 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114725' 00:11:10.430 00:37:33 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 114725 00:11:10.430 00:37:33 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 114725 00:11:13.715 00:37:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 114725 00:11:13.715 00:37:35 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:11:13.715 00:37:35 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 114725 00:11:13.715 00:37:35 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:11:13.715 00:37:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:13.715 00:37:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:11:13.715 00:37:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:13.715 00:37:35 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 114725 00:11:13.715 00:37:35 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 114725 ']' 00:11:13.715 00:37:35 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.715 00:37:35 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:13.715 00:37:35 
event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.715 00:37:35 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:13.715 00:37:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:13.715 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (114725) - No such process 00:11:13.715 ERROR: process (pid: 114725) is no longer running 00:11:13.715 00:37:35 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:13.715 00:37:35 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:11:13.715 00:37:35 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:11:13.715 00:37:35 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:13.715 00:37:35 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:13.715 00:37:35 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:13.715 00:37:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:11:13.715 00:37:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:13.715 00:37:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:11:13.715 00:37:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:13.715 00:11:13.715 real 0m4.650s 00:11:13.715 user 0m4.576s 00:11:13.715 sys 0m0.826s 00:11:13.715 00:37:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:13.715 00:37:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:13.715 ************************************ 00:11:13.715 END TEST default_locks 00:11:13.715 ************************************ 00:11:13.715 00:37:35 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:13.715 00:37:35 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:13.715 00:37:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:13.715 00:37:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:13.715 ************************************ 00:11:13.715 START TEST default_locks_via_rpc 00:11:13.715 ************************************ 00:11:13.715 00:37:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:11:13.715 00:37:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=114809 00:11:13.715 00:37:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 114809 00:11:13.715 00:37:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 114809 ']' 00:11:13.715 00:37:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.716 00:37:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:13.716 00:37:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
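Editor's note: the "No such process" error above is expected. default_locks kills the target and then asserts that waiting for it fails and that no lock files linger, roughly as follows (killprocess, NOT and no_locks are the harness helpers visible in the trace):

    killprocess "$pid"           # SIGTERM the target and wait for it to exit
    NOT waitforlisten "$pid"     # must now fail, which is the error traced above
    no_locks                     # asserts that no spdk_cpu_lock files remain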
00:11:13.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.716 00:37:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:13.716 00:37:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.716 00:37:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:13.716 [2024-07-25 00:37:35.933064] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:11:13.716 [2024-07-25 00:37:35.933543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114809 ] 00:11:13.716 [2024-07-25 00:37:36.116419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.975 [2024-07-25 00:37:36.375047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.912 00:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:14.912 00:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:14.912 00:37:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:11:14.912 00:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.912 00:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.912 00:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.912 00:37:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:11:14.912 00:37:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:14.912 00:37:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:11:14.912 00:37:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:14.912 00:37:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:11:14.912 00:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.912 00:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.912 00:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.912 00:37:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 114809 00:11:14.912 00:37:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 114809 00:11:14.912 00:37:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:15.172 00:37:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 114809 00:11:15.172 00:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 114809 ']' 00:11:15.172 00:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 114809 00:11:15.172 00:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:11:15.172 00:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:15.172 
00:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114809 00:11:15.172 00:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:15.172 00:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:15.172 killing process with pid 114809 00:11:15.172 00:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114809' 00:11:15.172 00:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 114809 00:11:15.172 00:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 114809 00:11:18.462 00:11:18.462 real 0m4.591s 00:11:18.462 user 0m4.309s 00:11:18.462 sys 0m0.853s 00:11:18.462 00:37:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:18.462 ************************************ 00:11:18.462 END TEST default_locks_via_rpc 00:11:18.462 ************************************ 00:11:18.462 00:37:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.462 00:37:40 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:18.462 00:37:40 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:18.462 00:37:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:18.462 00:37:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:18.462 ************************************ 00:11:18.462 START TEST non_locking_app_on_locked_coremask 00:11:18.462 ************************************ 00:11:18.462 00:37:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:11:18.462 00:37:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=114903 00:11:18.462 00:37:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 114903 /var/tmp/spdk.sock 00:11:18.462 00:37:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 114903 ']' 00:11:18.462 00:37:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:18.462 00:37:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.462 00:37:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:18.462 00:37:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.462 00:37:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:18.462 00:37:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:18.462 [2024-07-25 00:37:40.596511] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
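Editor's note: default_locks_via_rpc (finished above) exercises the same core lock through RPC instead of process lifetime. The sequence, as traced; rpc_cmd is the harness wrapper around scripts/rpc.py and $pid is the running spdk_tgt (114809 above):

    rpc_cmd framework_disable_cpumask_locks     # target releases its core lock at runtime
    no_locks                                    # no spdk_cpu_lock entries while disabled
    rpc_cmd framework_enable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock   # lock re-acquired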
00:11:18.462 [2024-07-25 00:37:40.596744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114903 ] 00:11:18.462 [2024-07-25 00:37:40.776490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.462 [2024-07-25 00:37:41.035472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.400 00:37:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:19.400 00:37:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:19.400 00:37:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=114924 00:11:19.400 00:37:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:19.400 00:37:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 114924 /var/tmp/spdk2.sock 00:11:19.400 00:37:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 114924 ']' 00:11:19.400 00:37:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:19.400 00:37:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:19.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:19.400 00:37:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:19.400 00:37:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:19.400 00:37:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:19.659 [2024-07-25 00:37:42.066862] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:11:19.659 [2024-07-25 00:37:42.067080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114924 ] 00:11:19.659 [2024-07-25 00:37:42.239217] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
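Editor's note: non_locking_app_on_locked_coremask runs two targets on the same core mask; the "CPU core locks deactivated" notice just above comes from the second one (pid 114924), started with --disable-cpumask-locks and its own RPC socket so it never contends for core 0. Sketch, with spdk_tgt standing for the build/bin/spdk_tgt path used in the trace and $pid1/$pid2 as illustrative names:

    spdk_tgt -m 0x1 &
    pid1=$!; waitforlisten "$pid1"                               # first target claims core 0
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!; waitforlisten "$pid2" /var/tmp/spdk2.sock           # starts fine: it never takes the lock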
00:11:19.659 [2024-07-25 00:37:42.239319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.227 [2024-07-25 00:37:42.762845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.130 00:37:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:22.130 00:37:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:22.130 00:37:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 114903 00:11:22.130 00:37:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 114903 00:11:22.130 00:37:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:22.698 00:37:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 114903 00:11:22.698 00:37:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 114903 ']' 00:11:22.698 00:37:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 114903 00:11:22.698 00:37:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:11:22.698 00:37:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:22.698 00:37:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114903 00:11:22.698 00:37:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:22.698 killing process with pid 114903 00:11:22.698 00:37:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:22.698 00:37:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114903' 00:11:22.698 00:37:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 114903 00:11:22.698 00:37:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 114903 00:11:29.335 00:37:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 114924 00:11:29.335 00:37:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 114924 ']' 00:11:29.335 00:37:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 114924 00:11:29.335 00:37:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:11:29.335 00:37:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:29.335 00:37:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114924 00:11:29.335 00:37:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:29.335 killing process with pid 114924 00:11:29.335 00:37:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:29.335 00:37:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114924' 00:11:29.335 
00:37:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 114924 00:11:29.335 00:37:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 114924 00:11:31.284 00:11:31.284 real 0m13.016s 00:11:31.284 user 0m13.156s 00:11:31.284 sys 0m1.763s 00:11:31.284 00:37:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:31.284 00:37:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:31.284 ************************************ 00:11:31.284 END TEST non_locking_app_on_locked_coremask 00:11:31.284 ************************************ 00:11:31.284 00:37:53 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:31.284 00:37:53 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:31.284 00:37:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:31.284 00:37:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:31.284 ************************************ 00:11:31.284 START TEST locking_app_on_unlocked_coremask 00:11:31.284 ************************************ 00:11:31.284 00:37:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:11:31.284 00:37:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=115105 00:11:31.284 00:37:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 115105 /var/tmp/spdk.sock 00:11:31.284 00:37:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 115105 ']' 00:11:31.284 00:37:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.284 00:37:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:31.284 00:37:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:31.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.284 00:37:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.284 00:37:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:31.284 00:37:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:31.284 [2024-07-25 00:37:53.680758] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:11:31.284 [2024-07-25 00:37:53.680975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115105 ] 00:11:31.284 [2024-07-25 00:37:53.844501] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:31.284 [2024-07-25 00:37:53.844610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.544 [2024-07-25 00:37:54.104980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.481 00:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:32.481 00:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:32.481 00:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:32.481 00:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=115126 00:11:32.481 00:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 115126 /var/tmp/spdk2.sock 00:11:32.481 00:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 115126 ']' 00:11:32.481 00:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:32.481 00:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:32.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:32.481 00:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:32.482 00:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:32.482 00:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:32.482 [2024-07-25 00:37:55.112342] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
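Editor's note: locking_app_on_unlocked_coremask is the mirror image. Here the first target is the one started with --disable-cpumask-locks (pid 115105 above), so a second, normally locking target can still come up on the same mask. Sketch under the same assumptions as before:

    spdk_tgt -m 0x1 --disable-cpumask-locks &
    pid1=$!; waitforlisten "$pid1"                        # no core lock taken
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
    pid2=$!; waitforlisten "$pid2" /var/tmp/spdk2.sock    # succeeds and claims core 0 itself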
00:11:32.482 [2024-07-25 00:37:55.112531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115126 ] 00:11:32.741 [2024-07-25 00:37:55.270810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.309 [2024-07-25 00:37:55.800271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.213 00:37:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:35.213 00:37:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:35.213 00:37:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 115126 00:11:35.213 00:37:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 115126 00:11:35.213 00:37:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:35.780 00:37:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 115105 00:11:35.780 00:37:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 115105 ']' 00:11:35.780 00:37:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 115105 00:11:35.780 00:37:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:11:35.780 00:37:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:35.780 00:37:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115105 00:11:35.780 00:37:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:35.780 killing process with pid 115105 00:11:35.780 00:37:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:35.780 00:37:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115105' 00:11:35.780 00:37:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 115105 00:11:35.780 00:37:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 115105 00:11:42.358 00:38:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 115126 00:11:42.358 00:38:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 115126 ']' 00:11:42.358 00:38:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 115126 00:11:42.358 00:38:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:11:42.358 00:38:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:42.358 00:38:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115126 00:11:42.358 00:38:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:42.358 00:38:03 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:42.358 killing process with pid 115126 00:11:42.358 00:38:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115126' 00:11:42.358 00:38:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 115126 00:11:42.358 00:38:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 115126 00:11:44.264 00:11:44.264 real 0m13.044s 00:11:44.264 user 0m13.179s 00:11:44.264 sys 0m1.740s 00:11:44.264 00:38:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:44.264 00:38:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:44.264 ************************************ 00:11:44.264 END TEST locking_app_on_unlocked_coremask 00:11:44.264 ************************************ 00:11:44.264 00:38:06 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:44.264 00:38:06 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:44.264 00:38:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:44.264 00:38:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:44.264 ************************************ 00:11:44.264 START TEST locking_app_on_locked_coremask 00:11:44.264 ************************************ 00:11:44.264 00:38:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:11:44.264 00:38:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=115297 00:11:44.264 00:38:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 115297 /var/tmp/spdk.sock 00:11:44.264 00:38:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 115297 ']' 00:11:44.264 00:38:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.264 00:38:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:44.264 00:38:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.264 00:38:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:44.264 00:38:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:44.264 00:38:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:44.264 [2024-07-25 00:38:06.799243] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:11:44.264 [2024-07-25 00:38:06.799693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115297 ] 00:11:44.523 [2024-07-25 00:38:06.981003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.782 [2024-07-25 00:38:07.245240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.718 00:38:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:45.719 00:38:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:45.719 00:38:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=115328 00:11:45.719 00:38:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:45.719 00:38:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 115328 /var/tmp/spdk2.sock 00:11:45.719 00:38:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:11:45.719 00:38:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 115328 /var/tmp/spdk2.sock 00:11:45.719 00:38:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:11:45.719 00:38:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:45.719 00:38:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:11:45.719 00:38:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:45.719 00:38:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 115328 /var/tmp/spdk2.sock 00:11:45.719 00:38:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 115328 ']' 00:11:45.719 00:38:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:45.719 00:38:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:45.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:45.719 00:38:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:45.719 00:38:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:45.719 00:38:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:45.719 [2024-07-25 00:38:08.317063] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
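A second spdk_tgt (pid 115328) has just been started on the same 0x1 core mask, pointed at its own RPC socket; the trace that follows shows claim_cpu_cores refusing it because pid 115297 already owns core 0. A hedged sketch of reproducing the same conflict by hand, using only the flags visible in this run (the binary path is the one the harness uses):

# illustrative only -- not part of the captured log
./build/bin/spdk_tgt -m 0x1 &                         # first instance claims core 0
sleep 2                                               # crude wait; the harness uses waitforlisten
./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock    # expected to exit: core 0 is already locked
echo "second instance exit status: $?"                # non-zero when the lock claim fails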
00:11:45.719 [2024-07-25 00:38:08.317409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115328 ] 00:11:46.002 [2024-07-25 00:38:08.509598] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 115297 has claimed it. 00:11:46.002 [2024-07-25 00:38:08.509719] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:46.569 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (115328) - No such process 00:11:46.569 ERROR: process (pid: 115328) is no longer running 00:11:46.569 00:38:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:46.569 00:38:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:11:46.569 00:38:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:11:46.569 00:38:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:46.569 00:38:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:46.569 00:38:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:46.569 00:38:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 115297 00:11:46.569 00:38:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 115297 00:11:46.569 00:38:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:46.829 00:38:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 115297 00:11:46.829 00:38:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 115297 ']' 00:11:46.829 00:38:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 115297 00:11:46.829 00:38:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:11:46.829 00:38:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:46.829 00:38:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115297 00:11:46.829 00:38:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:46.829 killing process with pid 115297 00:11:46.829 00:38:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:46.829 00:38:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115297' 00:11:46.829 00:38:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 115297 00:11:46.829 00:38:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 115297 00:11:50.112 00:11:50.112 real 0m5.391s 00:11:50.112 user 0m5.489s 00:11:50.112 sys 0m1.039s 00:11:50.112 00:38:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:50.112 00:38:12 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:50.112 ************************************ 00:11:50.112 END TEST locking_app_on_locked_coremask 00:11:50.112 ************************************ 00:11:50.112 00:38:12 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:50.112 00:38:12 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:50.112 00:38:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:50.113 00:38:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:50.113 ************************************ 00:11:50.113 START TEST locking_overlapped_coremask 00:11:50.113 ************************************ 00:11:50.113 00:38:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:11:50.113 00:38:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=115406 00:11:50.113 00:38:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 115406 /var/tmp/spdk.sock 00:11:50.113 00:38:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 115406 ']' 00:11:50.113 00:38:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.113 00:38:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:11:50.113 00:38:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:50.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.113 00:38:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.113 00:38:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:50.113 00:38:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:50.113 [2024-07-25 00:38:12.283087] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:11:50.113 [2024-07-25 00:38:12.283344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115406 ] 00:11:50.113 [2024-07-25 00:38:12.475110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:50.113 [2024-07-25 00:38:12.735694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.113 [2024-07-25 00:38:12.735873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.113 [2024-07-25 00:38:12.735878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:51.487 00:38:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:51.487 00:38:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:51.487 00:38:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=115434 00:11:51.487 00:38:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:51.487 00:38:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 115434 /var/tmp/spdk2.sock 00:11:51.487 00:38:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:11:51.487 00:38:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 115434 /var/tmp/spdk2.sock 00:11:51.487 00:38:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:11:51.487 00:38:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:51.487 00:38:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:11:51.487 00:38:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:51.487 00:38:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 115434 /var/tmp/spdk2.sock 00:11:51.487 00:38:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 115434 ']' 00:11:51.487 00:38:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:51.487 00:38:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:51.487 00:38:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:51.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:51.487 00:38:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:51.487 00:38:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:51.487 [2024-07-25 00:38:13.799666] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
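Once the 0x1c instance below has been refused, the harness runs check_remaining_locks to confirm that exactly the lock files for mask 0x7 (cores 0-2) remain in /var/tmp. A sketch of that comparison, lifted from the pattern visible later in the trace; nothing here is new beyond wrapping it as a standalone snippet:

# illustrative only
locks=(/var/tmp/spdk_cpu_lock_*)                      # lock files actually present
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})    # what a 0x7 mask should leave behind
if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
    echo "lock files match the claimed cores"
fi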
00:11:51.487 [2024-07-25 00:38:13.800152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115434 ] 00:11:51.487 [2024-07-25 00:38:13.996920] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 115406 has claimed it. 00:11:51.487 [2024-07-25 00:38:13.997028] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:52.054 ERROR: process (pid: 115434) is no longer running 00:11:52.054 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (115434) - No such process 00:11:52.054 00:38:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:52.054 00:38:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:11:52.054 00:38:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:11:52.054 00:38:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:52.054 00:38:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:52.054 00:38:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:52.054 00:38:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:52.054 00:38:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:52.054 00:38:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:52.054 00:38:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:52.054 00:38:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 115406 00:11:52.054 00:38:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 115406 ']' 00:11:52.054 00:38:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 115406 00:11:52.054 00:38:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:11:52.054 00:38:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:52.054 00:38:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115406 00:11:52.054 00:38:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:52.054 killing process with pid 115406 00:11:52.054 00:38:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:52.054 00:38:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115406' 00:11:52.054 00:38:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 115406 00:11:52.054 00:38:14 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 115406 00:11:54.625 00:11:54.625 real 0m5.092s 00:11:54.625 user 0m13.075s 00:11:54.625 sys 0m0.869s 00:11:54.625 00:38:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:54.625 00:38:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:54.625 ************************************ 00:11:54.625 END TEST locking_overlapped_coremask 00:11:54.625 ************************************ 00:11:54.885 00:38:17 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:54.885 00:38:17 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:54.885 00:38:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:54.885 00:38:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:54.885 ************************************ 00:11:54.885 START TEST locking_overlapped_coremask_via_rpc 00:11:54.885 ************************************ 00:11:54.885 00:38:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:11:54.885 00:38:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=115510 00:11:54.885 00:38:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 115510 /var/tmp/spdk.sock 00:11:54.885 00:38:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:54.885 00:38:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 115510 ']' 00:11:54.885 00:38:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.885 00:38:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:54.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.885 00:38:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.885 00:38:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:54.885 00:38:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.885 [2024-07-25 00:38:17.419685] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:11:54.885 [2024-07-25 00:38:17.420267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115510 ] 00:11:55.144 [2024-07-25 00:38:17.590054] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:55.144 [2024-07-25 00:38:17.590161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:55.403 [2024-07-25 00:38:17.852034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.403 [2024-07-25 00:38:17.852203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.403 [2024-07-25 00:38:17.852209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.338 00:38:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:56.338 00:38:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:56.339 00:38:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=115533 00:11:56.339 00:38:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:56.339 00:38:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 115533 /var/tmp/spdk2.sock 00:11:56.339 00:38:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 115533 ']' 00:11:56.339 00:38:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:56.339 00:38:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:56.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:56.339 00:38:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:56.339 00:38:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:56.339 00:38:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.339 [2024-07-25 00:38:18.914059] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:11:56.339 [2024-07-25 00:38:18.914324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115533 ] 00:11:56.597 [2024-07-25 00:38:19.115917] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
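Both targets in this case start with --disable-cpumask-locks, so the overlapping masks (0x7 and 0x1c share core 2) do not collide at startup; the cores are only claimed later, over RPC. A sketch of that startup arrangement using the flags taken from this trace (paths and sockets as above):

# illustrative only -- both instances come up despite the overlap on core 2
./build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &
./build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &
# no lock files are created yet; the conflict surfaces only when a claim is requested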
00:11:56.597 [2024-07-25 00:38:19.116022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:57.166 [2024-07-25 00:38:19.635025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.166 [2024-07-25 00:38:19.650329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:57.166 [2024-07-25 00:38:19.650331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.070 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:59.070 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:59.070 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:59.070 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.070 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.070 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.070 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:59.070 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:59.070 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:59.070 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:11:59.070 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:59.070 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:11:59.070 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:59.071 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:59.071 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.071 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.071 [2024-07-25 00:38:21.642492] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 115510 has claimed it. 
00:11:59.071 request: 00:11:59.071 { 00:11:59.071 "method": "framework_enable_cpumask_locks", 00:11:59.071 "req_id": 1 00:11:59.071 } 00:11:59.071 Got JSON-RPC error response 00:11:59.071 response: 00:11:59.071 { 00:11:59.071 "code": -32603, 00:11:59.071 "message": "Failed to claim CPU core: 2" 00:11:59.071 } 00:11:59.071 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:11:59.071 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:11:59.071 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:59.071 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:59.071 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:59.071 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 115510 /var/tmp/spdk.sock 00:11:59.071 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 115510 ']' 00:11:59.071 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.071 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:59.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.071 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.071 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:59.071 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.331 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:59.331 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:59.331 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 115533 /var/tmp/spdk2.sock 00:11:59.331 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 115533 ']' 00:11:59.331 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:59.331 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:59.331 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:59.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
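The JSON-RPC exchange above is the core of this case: framework_enable_cpumask_locks succeeds on the first target (pid 115510), and the same method on the second socket is rejected with -32603 because core 2 is already claimed. A sketch of issuing the two calls directly with scripts/rpc.py, on the assumption that the client exposes the method under the same name the harness's rpc_cmd wrapper uses:

# illustrative only
./scripts/rpc.py framework_enable_cpumask_locks                           # first target: claims cores 0-2
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks    # second target: expected to fail
# expected failure mirrors the response above: code -32603, "Failed to claim CPU core: 2"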
00:11:59.331 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:59.331 00:38:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.591 00:38:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:59.591 00:38:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:59.591 00:38:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:59.591 00:38:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:59.591 00:38:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:59.591 00:38:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:59.591 00:11:59.591 real 0m4.739s 00:11:59.591 user 0m1.521s 00:11:59.591 sys 0m0.271s 00:11:59.591 00:38:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:59.591 00:38:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.591 ************************************ 00:11:59.591 END TEST locking_overlapped_coremask_via_rpc 00:11:59.591 ************************************ 00:11:59.591 00:38:22 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:11:59.591 00:38:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 115510 ]] 00:11:59.591 00:38:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 115510 00:11:59.591 00:38:22 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 115510 ']' 00:11:59.591 00:38:22 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 115510 00:11:59.591 00:38:22 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:11:59.591 00:38:22 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:59.591 00:38:22 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115510 00:11:59.591 00:38:22 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:59.591 00:38:22 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:59.591 killing process with pid 115510 00:11:59.591 00:38:22 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115510' 00:11:59.591 00:38:22 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 115510 00:11:59.591 00:38:22 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 115510 00:12:02.876 00:38:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 115533 ]] 00:12:02.876 00:38:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 115533 00:12:02.876 00:38:24 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 115533 ']' 00:12:02.876 00:38:24 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 115533 00:12:02.876 00:38:24 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:12:02.876 00:38:24 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:12:02.876 00:38:24 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115533 00:12:02.876 00:38:24 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:12:02.876 00:38:24 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:12:02.876 00:38:24 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115533' 00:12:02.876 killing process with pid 115533 00:12:02.876 00:38:24 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 115533 00:12:02.876 00:38:24 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 115533 00:12:05.405 00:38:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:05.405 00:38:27 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:12:05.405 00:38:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 115510 ]] 00:12:05.405 00:38:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 115510 00:12:05.405 00:38:27 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 115510 ']' 00:12:05.405 00:38:27 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 115510 00:12:05.405 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (115510) - No such process 00:12:05.405 00:38:27 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 115510 is not found' 00:12:05.405 Process with pid 115510 is not found 00:12:05.405 00:38:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 115533 ]] 00:12:05.405 00:38:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 115533 00:12:05.405 00:38:27 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 115533 ']' 00:12:05.405 00:38:27 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 115533 00:12:05.405 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (115533) - No such process 00:12:05.405 00:38:27 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 115533 is not found' 00:12:05.405 Process with pid 115533 is not found 00:12:05.405 00:38:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:05.405 00:12:05.406 real 0m56.778s 00:12:05.406 user 1m33.800s 00:12:05.406 sys 0m8.820s 00:12:05.406 00:38:27 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:05.406 00:38:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:05.406 ************************************ 00:12:05.406 END TEST cpu_locks 00:12:05.406 ************************************ 00:12:05.406 00:12:05.406 real 1m29.815s 00:12:05.406 user 2m34.422s 00:12:05.406 sys 0m13.507s 00:12:05.406 00:38:27 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:05.406 00:38:27 event -- common/autotest_common.sh@10 -- # set +x 00:12:05.406 ************************************ 00:12:05.406 END TEST event 00:12:05.406 ************************************ 00:12:05.406 00:38:27 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:05.406 00:38:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:05.406 00:38:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:05.406 00:38:27 -- common/autotest_common.sh@10 -- # set +x 00:12:05.406 ************************************ 00:12:05.406 START TEST thread 00:12:05.406 ************************************ 00:12:05.406 00:38:27 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:05.406 * Looking for test 
storage... 00:12:05.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:12:05.406 00:38:28 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:05.406 00:38:28 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:12:05.406 00:38:28 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:05.406 00:38:28 thread -- common/autotest_common.sh@10 -- # set +x 00:12:05.406 ************************************ 00:12:05.406 START TEST thread_poller_perf 00:12:05.406 ************************************ 00:12:05.406 00:38:28 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:05.665 [2024-07-25 00:38:28.069878] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:12:05.665 [2024-07-25 00:38:28.070192] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115751 ] 00:12:05.665 [2024-07-25 00:38:28.243744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.923 [2024-07-25 00:38:28.504361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.923 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:12:07.826 ====================================== 00:12:07.826 busy:2110942016 (cyc) 00:12:07.826 total_run_count: 382000 00:12:07.826 tsc_hz: 2100000000 (cyc) 00:12:07.826 ====================================== 00:12:07.826 poller_cost: 5526 (cyc), 2631 (nsec) 00:12:07.826 00:12:07.826 real 0m1.987s 00:12:07.826 user 0m1.706s 00:12:07.826 sys 0m0.180s 00:12:07.826 00:38:30 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:07.826 00:38:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:07.826 ************************************ 00:12:07.826 END TEST thread_poller_perf 00:12:07.826 ************************************ 00:12:07.826 00:38:30 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:07.826 00:38:30 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:12:07.826 00:38:30 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:07.826 00:38:30 thread -- common/autotest_common.sh@10 -- # set +x 00:12:07.826 ************************************ 00:12:07.826 START TEST thread_poller_perf 00:12:07.826 ************************************ 00:12:07.826 00:38:30 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:07.827 [2024-07-25 00:38:30.128186] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
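The poller_cost figure printed by poller_perf is just the busy cycle count divided by the number of completed poller iterations, converted to nanoseconds through the reported TSC rate. Reworking the first run's numbers from the output above as a quick check (values copied verbatim; the arithmetic itself is the only thing added here):

# illustrative only -- values taken from the run above
busy=2110942016; runs=382000; tsc_hz=2100000000
echo "cost_cyc=$(( busy / runs ))"                        # 5526 cycles per poller, as reported
echo "cost_ns=$(( busy * 1000000000 / runs / tsc_hz ))"   # ~2631 ns at the 2.1 GHz TSC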
00:12:07.827 [2024-07-25 00:38:30.128457] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115797 ] 00:12:07.827 [2024-07-25 00:38:30.321155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.085 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:12:08.085 [2024-07-25 00:38:30.627669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.479 ====================================== 00:12:09.479 busy:2103537834 (cyc) 00:12:09.479 total_run_count: 4986000 00:12:09.479 tsc_hz: 2100000000 (cyc) 00:12:09.479 ====================================== 00:12:09.479 poller_cost: 421 (cyc), 200 (nsec) 00:12:09.479 00:12:09.479 real 0m2.044s 00:12:09.479 user 0m1.772s 00:12:09.479 sys 0m0.172s 00:12:09.479 00:38:32 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:09.479 00:38:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:09.479 ************************************ 00:12:09.479 END TEST thread_poller_perf 00:12:09.479 ************************************ 00:12:09.739 00:38:32 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:12:09.739 00:38:32 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:12:09.739 00:38:32 thread -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:09.739 00:38:32 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:09.739 00:38:32 thread -- common/autotest_common.sh@10 -- # set +x 00:12:09.739 ************************************ 00:12:09.739 START TEST thread_spdk_lock 00:12:09.739 ************************************ 00:12:09.739 00:38:32 thread.thread_spdk_lock -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:12:09.739 [2024-07-25 00:38:32.240220] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:12:09.739 [2024-07-25 00:38:32.240469] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115845 ] 00:12:09.998 [2024-07-25 00:38:32.425025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:10.257 [2024-07-25 00:38:32.686854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.257 [2024-07-25 00:38:32.686855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.824 [2024-07-25 00:38:33.342028] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:12:10.824 [2024-07-25 00:38:33.342187] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:12:10.824 [2024-07-25 00:38:33.342244] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x560480de54c0 00:12:10.824 [2024-07-25 00:38:33.353957] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:12:10.824 [2024-07-25 00:38:33.354078] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:12:10.824 [2024-07-25 00:38:33.354121] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:12:11.392 Starting test contend 00:12:11.392 Worker Delay Wait us Hold us Total us 00:12:11.392 0 3 42672 229487 272160 00:12:11.392 1 5 48842 314364 363207 00:12:11.392 PASS test contend 00:12:11.392 Starting test hold_by_poller 00:12:11.392 PASS test hold_by_poller 00:12:11.392 Starting test hold_by_message 00:12:11.392 PASS test hold_by_message 00:12:11.392 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:12:11.392 100014 assertions passed 00:12:11.392 0 assertions failed 00:12:11.392 00:12:11.392 real 0m1.692s 00:12:11.393 user 0m2.085s 00:12:11.393 sys 0m0.173s 00:12:11.393 00:38:33 thread.thread_spdk_lock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:11.393 00:38:33 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:12:11.393 ************************************ 00:12:11.393 END TEST thread_spdk_lock 00:12:11.393 ************************************ 00:12:11.393 00:12:11.393 real 0m6.040s 00:12:11.393 user 0m5.721s 00:12:11.393 sys 0m0.698s 00:12:11.393 00:38:33 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:11.393 00:38:33 thread -- common/autotest_common.sh@10 -- # set +x 00:12:11.393 ************************************ 00:12:11.393 END TEST thread 00:12:11.393 ************************************ 00:12:11.393 00:38:33 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:12:11.393 00:38:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:11.393 00:38:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:11.393 00:38:33 -- common/autotest_common.sh@10 -- # set +x 
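In the contention table printed above, Total us is simply Wait us plus Hold us for each worker (the one-microsecond difference is rounding). Checking worker 1's row from that output:

# illustrative only -- values copied from the contend table above
wait_us=48842; hold_us=314364
echo $(( wait_us + hold_us ))    # 363206, matching the reported 363207 up to rounding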
00:12:11.393 ************************************ 00:12:11.393 START TEST accel 00:12:11.393 ************************************ 00:12:11.393 00:38:33 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:12:11.652 * Looking for test storage... 00:12:11.652 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:12:11.652 00:38:34 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:12:11.652 00:38:34 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:12:11.652 00:38:34 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:11.652 00:38:34 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=115937 00:12:11.652 00:38:34 accel -- accel/accel.sh@63 -- # waitforlisten 115937 00:12:11.652 00:38:34 accel -- common/autotest_common.sh@829 -- # '[' -z 115937 ']' 00:12:11.652 00:38:34 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.652 00:38:34 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:11.652 00:38:34 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.652 00:38:34 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:11.652 00:38:34 accel -- common/autotest_common.sh@10 -- # set +x 00:12:11.652 00:38:34 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:12:11.652 00:38:34 accel -- accel/accel.sh@61 -- # build_accel_config 00:12:11.652 00:38:34 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:11.652 00:38:34 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:11.652 00:38:34 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:11.652 00:38:34 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:11.652 00:38:34 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:11.652 00:38:34 accel -- accel/accel.sh@40 -- # local IFS=, 00:12:11.652 00:38:34 accel -- accel/accel.sh@41 -- # jq -r . 00:12:11.652 [2024-07-25 00:38:34.197593] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:12:11.652 [2024-07-25 00:38:34.197820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115937 ] 00:12:11.911 [2024-07-25 00:38:34.373973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.169 [2024-07-25 00:38:34.635208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.107 00:38:35 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:13.107 00:38:35 accel -- common/autotest_common.sh@862 -- # return 0 00:12:13.107 00:38:35 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:12:13.107 00:38:35 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:12:13.107 00:38:35 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:12:13.107 00:38:35 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:12:13.108 00:38:35 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:12:13.108 00:38:35 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:12:13.108 00:38:35 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.108 00:38:35 accel -- common/autotest_common.sh@10 -- # set +x 00:12:13.108 00:38:35 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:12:13.108 00:38:35 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.108 00:38:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:13.108 00:38:35 accel -- accel/accel.sh@72 -- # IFS== 00:12:13.108 00:38:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:13.108 00:38:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:13.108 00:38:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:13.108 00:38:35 accel -- accel/accel.sh@72 -- # IFS== 00:12:13.108 00:38:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:13.108 00:38:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:13.108 00:38:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:13.108 00:38:35 accel -- accel/accel.sh@72 -- # IFS== 00:12:13.108 00:38:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:13.108 00:38:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:13.108 00:38:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:13.108 00:38:35 accel -- accel/accel.sh@72 -- # IFS== 00:12:13.108 00:38:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:13.108 00:38:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:13.108 00:38:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:13.108 00:38:35 accel -- accel/accel.sh@72 -- # IFS== 00:12:13.108 00:38:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:13.108 00:38:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:13.108 00:38:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:13.108 00:38:35 accel -- accel/accel.sh@72 -- # IFS== 00:12:13.108 00:38:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:13.108 00:38:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:13.108 00:38:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:13.108 00:38:35 accel -- accel/accel.sh@72 -- # IFS== 00:12:13.108 00:38:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:13.108 00:38:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:13.108 00:38:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:13.108 00:38:35 accel -- accel/accel.sh@72 -- # IFS== 00:12:13.108 00:38:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:13.108 00:38:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:13.108 00:38:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:13.108 00:38:35 accel -- accel/accel.sh@72 -- # IFS== 00:12:13.108 00:38:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:13.108 00:38:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:13.108 00:38:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:13.108 00:38:35 accel -- accel/accel.sh@72 -- # IFS== 00:12:13.108 00:38:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:13.108 00:38:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:13.108 00:38:35 accel -- 
accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:13.108 00:38:35 accel -- accel/accel.sh@72 -- # IFS== 00:12:13.108 00:38:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:13.108 00:38:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:13.108 00:38:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:13.108 00:38:35 accel -- accel/accel.sh@72 -- # IFS== 00:12:13.108 00:38:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:13.108 00:38:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:13.108 00:38:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:13.108 00:38:35 accel -- accel/accel.sh@72 -- # IFS== 00:12:13.108 00:38:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:13.108 00:38:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:13.108 00:38:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:13.108 00:38:35 accel -- accel/accel.sh@72 -- # IFS== 00:12:13.108 00:38:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:13.108 00:38:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:13.108 00:38:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:13.108 00:38:35 accel -- accel/accel.sh@72 -- # IFS== 00:12:13.108 00:38:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:13.108 00:38:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:13.108 00:38:35 accel -- accel/accel.sh@75 -- # killprocess 115937 00:12:13.108 00:38:35 accel -- common/autotest_common.sh@948 -- # '[' -z 115937 ']' 00:12:13.108 00:38:35 accel -- common/autotest_common.sh@952 -- # kill -0 115937 00:12:13.108 00:38:35 accel -- common/autotest_common.sh@953 -- # uname 00:12:13.108 00:38:35 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:13.108 00:38:35 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115937 00:12:13.108 00:38:35 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:13.108 killing process with pid 115937 00:12:13.108 00:38:35 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:13.108 00:38:35 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115937' 00:12:13.108 00:38:35 accel -- common/autotest_common.sh@967 -- # kill 115937 00:12:13.108 00:38:35 accel -- common/autotest_common.sh@972 -- # wait 115937 00:12:16.393 00:38:38 accel -- accel/accel.sh@76 -- # trap - ERR 00:12:16.393 00:38:38 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:12:16.393 00:38:38 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:16.393 00:38:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:16.393 00:38:38 accel -- common/autotest_common.sh@10 -- # set +x 00:12:16.393 00:38:38 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:12:16.393 00:38:38 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:12:16.393 00:38:38 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:12:16.393 00:38:38 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:16.393 00:38:38 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:16.393 00:38:38 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:16.393 00:38:38 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:16.393 00:38:38 accel.accel_help -- accel/accel.sh@36 
-- # [[ -n '' ]] 00:12:16.393 00:38:38 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:12:16.393 00:38:38 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:12:16.393 00:38:38 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:16.393 00:38:38 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:12:16.393 00:38:38 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:12:16.393 00:38:38 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:16.393 00:38:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:16.393 00:38:38 accel -- common/autotest_common.sh@10 -- # set +x 00:12:16.393 ************************************ 00:12:16.393 START TEST accel_missing_filename 00:12:16.393 ************************************ 00:12:16.393 00:38:38 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:12:16.393 00:38:38 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:12:16.393 00:38:38 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:12:16.393 00:38:38 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:12:16.393 00:38:38 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:16.393 00:38:38 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:12:16.394 00:38:38 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:16.394 00:38:38 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:12:16.394 00:38:38 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:12:16.394 00:38:38 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:12:16.394 00:38:38 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:16.394 00:38:38 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:16.394 00:38:38 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:16.394 00:38:38 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:16.394 00:38:38 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:16.394 00:38:38 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:12:16.394 00:38:38 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:12:16.394 [2024-07-25 00:38:38.643376] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:12:16.394 [2024-07-25 00:38:38.643613] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116031 ] 00:12:16.394 [2024-07-25 00:38:38.820540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.653 [2024-07-25 00:38:39.148496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.913 [2024-07-25 00:38:39.424900] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:17.481 [2024-07-25 00:38:40.013488] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:12:18.049 A filename is required. 
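accel_missing_filename is a pure negative test: compress is requested without an input file, and the harness only requires that accel_perf refuse to start, which is exactly what the "A filename is required." line above records. A sketch of the bare invocation using the flags from the trace; the -c /dev/fd/62 JSON config that the harness passes is omitted here for brevity, which assumes the defaults are sufficient to reach the argument check:

# illustrative only -- expected to fail
./build/examples/accel_perf -t 1 -w compress
echo "exit status: $?"    # non-zero, which is what the NOT wrapper asserts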
00:12:18.049 00:38:40 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:12:18.049 00:38:40 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:18.049 00:38:40 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:12:18.049 00:38:40 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:12:18.049 00:38:40 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:12:18.049 00:38:40 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:18.049 00:12:18.049 real 0m1.905s 00:12:18.049 user 0m1.587s 00:12:18.049 sys 0m0.260s 00:12:18.049 00:38:40 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:18.049 ************************************ 00:12:18.049 END TEST accel_missing_filename 00:12:18.049 00:38:40 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:12:18.049 ************************************ 00:12:18.049 00:38:40 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:18.049 00:38:40 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:12:18.049 00:38:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:18.049 00:38:40 accel -- common/autotest_common.sh@10 -- # set +x 00:12:18.049 ************************************ 00:12:18.049 START TEST accel_compress_verify 00:12:18.049 ************************************ 00:12:18.049 00:38:40 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:18.049 00:38:40 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:12:18.049 00:38:40 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:18.049 00:38:40 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:12:18.049 00:38:40 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:18.049 00:38:40 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:12:18.049 00:38:40 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:18.050 00:38:40 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:18.050 00:38:40 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:18.050 00:38:40 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:12:18.050 00:38:40 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:18.050 00:38:40 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:18.050 00:38:40 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:18.050 00:38:40 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:18.050 00:38:40 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:18.050 00:38:40 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:12:18.050 00:38:40 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:12:18.050 [2024-07-25 00:38:40.616312] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:12:18.050 [2024-07-25 00:38:40.616546] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116085 ] 00:12:18.310 [2024-07-25 00:38:40.797218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.570 [2024-07-25 00:38:41.073974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.830 [2024-07-25 00:38:41.356622] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:19.399 [2024-07-25 00:38:41.951103] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:12:19.967 00:12:19.967 Compression does not support the verify option, aborting. 00:12:19.967 00:38:42 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:12:19.967 00:38:42 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:19.967 00:38:42 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:12:19.967 00:38:42 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:12:19.967 00:38:42 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:12:19.967 00:38:42 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:19.967 00:12:19.967 real 0m1.879s 00:12:19.967 user 0m1.546s 00:12:19.967 sys 0m0.280s 00:12:19.967 00:38:42 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:19.967 00:38:42 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:12:19.967 ************************************ 00:12:19.968 END TEST accel_compress_verify 00:12:19.968 ************************************ 00:12:19.968 00:38:42 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:12:19.968 00:38:42 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:19.968 00:38:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:19.968 00:38:42 accel -- common/autotest_common.sh@10 -- # set +x 00:12:19.968 ************************************ 00:12:19.968 START TEST accel_wrong_workload 00:12:19.968 ************************************ 00:12:19.968 00:38:42 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:12:19.968 00:38:42 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:12:19.968 00:38:42 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:12:19.968 00:38:42 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:12:19.968 00:38:42 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:19.968 00:38:42 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:12:19.968 00:38:42 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:19.968 00:38:42 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:12:19.968 00:38:42 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 
00:12:19.968 00:38:42 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:12:19.968 00:38:42 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:19.968 00:38:42 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:19.968 00:38:42 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:19.968 00:38:42 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:19.968 00:38:42 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:19.968 00:38:42 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:12:19.968 00:38:42 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:12:19.968 Unsupported workload type: foobar 00:12:19.968 [2024-07-25 00:38:42.558840] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:12:19.968 accel_perf options: 00:12:19.968 [-h help message] 00:12:19.968 [-q queue depth per core] 00:12:19.968 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:12:19.968 [-T number of threads per core 00:12:19.968 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:12:19.968 [-t time in seconds] 00:12:19.968 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:12:19.968 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:12:19.968 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:12:19.968 [-l for compress/decompress workloads, name of uncompressed input file 00:12:19.968 [-S for crc32c workload, use this seed value (default 0) 00:12:19.968 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:12:19.968 [-f for fill workload, use this BYTE value (default 255) 00:12:19.968 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:12:19.968 [-y verify result if this switch is on] 00:12:19.968 [-a tasks to allocate per core (default: same value as -q)] 00:12:19.968 Can be used to spread operations across a wider range of memory. 
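The usage listing above is accel_perf's own help text, emitted because this test deliberately passes the unsupported workload '-w foobar' and expects the tool to fail. For contrast, a valid software-path invocation, sketched purely from flags that already appear elsewhere in this log rather than copied from any single test, would be:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -q 64 -t 1 -w crc32c -S 32 -y

This mirrors what the accel_crc32c test below runs: -c /dev/fd/62 receives the JSON accel configuration that build_accel_config assembles (the 'jq -r .' lines in the trace), -w selects the operation, -S supplies the crc32c seed, and -y enables result verification, per the option descriptions printed above.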
00:12:19.968 00:38:42 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:12:19.968 00:38:42 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:19.968 00:38:42 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:19.968 00:38:42 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:19.968 00:12:19.968 real 0m0.084s 00:12:19.968 user 0m0.079s 00:12:19.968 sys 0m0.050s 00:12:19.968 00:38:42 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:19.968 00:38:42 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:12:19.968 ************************************ 00:12:19.968 END TEST accel_wrong_workload 00:12:19.968 ************************************ 00:12:20.228 00:38:42 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:12:20.228 00:38:42 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:12:20.228 00:38:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:20.228 00:38:42 accel -- common/autotest_common.sh@10 -- # set +x 00:12:20.228 ************************************ 00:12:20.228 START TEST accel_negative_buffers 00:12:20.228 ************************************ 00:12:20.228 00:38:42 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:12:20.228 00:38:42 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:12:20.228 00:38:42 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:12:20.228 00:38:42 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:12:20.228 00:38:42 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:20.228 00:38:42 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:12:20.228 00:38:42 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:20.228 00:38:42 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:12:20.228 00:38:42 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:12:20.228 00:38:42 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:12:20.228 00:38:42 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:20.228 00:38:42 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:20.228 00:38:42 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:20.228 00:38:42 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:20.228 00:38:42 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:20.228 00:38:42 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:12:20.228 00:38:42 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:12:20.228 -x option must be non-negative. 
00:12:20.228 [2024-07-25 00:38:42.713582] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:12:20.228 accel_perf options: 00:12:20.228 [-h help message] 00:12:20.228 [-q queue depth per core] 00:12:20.228 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:12:20.228 [-T number of threads per core 00:12:20.228 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:12:20.228 [-t time in seconds] 00:12:20.228 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:12:20.228 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:12:20.228 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:12:20.228 [-l for compress/decompress workloads, name of uncompressed input file 00:12:20.228 [-S for crc32c workload, use this seed value (default 0) 00:12:20.228 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:12:20.228 [-f for fill workload, use this BYTE value (default 255) 00:12:20.228 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:12:20.228 [-y verify result if this switch is on] 00:12:20.228 [-a tasks to allocate per core (default: same value as -q)] 00:12:20.228 Can be used to spread operations across a wider range of memory. 00:12:20.228 00:38:42 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:12:20.228 00:38:42 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:20.229 00:38:42 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:20.229 00:38:42 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:20.229 00:12:20.229 real 0m0.085s 00:12:20.229 user 0m0.075s 00:12:20.229 sys 0m0.048s 00:12:20.229 00:38:42 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:20.229 00:38:42 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:12:20.229 ************************************ 00:12:20.229 END TEST accel_negative_buffers 00:12:20.229 ************************************ 00:12:20.229 00:38:42 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:12:20.229 00:38:42 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:12:20.229 00:38:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:20.229 00:38:42 accel -- common/autotest_common.sh@10 -- # set +x 00:12:20.229 ************************************ 00:12:20.229 START TEST accel_crc32c 00:12:20.229 ************************************ 00:12:20.229 00:38:42 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:12:20.229 00:38:42 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:12:20.229 00:38:42 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:12:20.229 00:38:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:20.229 00:38:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:20.229 00:38:42 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:12:20.229 00:38:42 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:12:20.229 00:38:42 accel.accel_crc32c -- 
accel/accel.sh@12 -- # build_accel_config 00:12:20.229 00:38:42 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:20.229 00:38:42 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:20.229 00:38:42 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:20.229 00:38:42 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:20.229 00:38:42 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:20.229 00:38:42 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:12:20.229 00:38:42 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:12:20.229 [2024-07-25 00:38:42.865224] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:12:20.229 [2024-07-25 00:38:42.865460] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116191 ] 00:12:20.489 [2024-07-25 00:38:43.044671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.749 [2024-07-25 00:38:43.341519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:21.009 00:38:43 accel.accel_crc32c 
-- accel/accel.sh@20 -- # val='4096 bytes' 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:21.009 00:38:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:23.546 00:38:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:23.546 00:38:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:23.546 00:38:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:23.546 00:38:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:23.546 00:38:45 accel.accel_crc32c -- 
accel/accel.sh@20 -- # val= 00:12:23.546 00:38:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:23.546 00:38:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:23.546 00:38:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:23.546 00:38:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:23.546 00:38:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:23.546 00:38:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:23.546 00:38:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:23.546 00:38:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:23.546 00:38:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:23.546 00:38:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:23.546 00:38:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:23.546 00:38:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:23.546 00:38:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:23.546 00:38:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:23.546 00:38:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:23.546 00:38:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:23.546 00:38:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:23.546 00:38:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:23.546 00:38:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:23.546 00:38:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:23.546 00:38:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:12:23.546 00:38:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:23.546 00:12:23.546 real 0m2.934s 00:12:23.546 user 0m2.580s 00:12:23.546 sys 0m0.289s 00:12:23.546 00:38:45 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:23.546 00:38:45 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:12:23.546 ************************************ 00:12:23.546 END TEST accel_crc32c 00:12:23.546 ************************************ 00:12:23.546 00:38:45 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:12:23.546 00:38:45 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:12:23.546 00:38:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:23.546 00:38:45 accel -- common/autotest_common.sh@10 -- # set +x 00:12:23.546 ************************************ 00:12:23.546 START TEST accel_crc32c_C2 00:12:23.546 ************************************ 00:12:23.547 00:38:45 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:12:23.547 00:38:45 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:12:23.547 00:38:45 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:12:23.547 00:38:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:23.547 00:38:45 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:12:23.547 00:38:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:23.547 00:38:45 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:12:23.547 00:38:45 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:12:23.547 00:38:45 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # 
accel_json_cfg=() 00:12:23.547 00:38:45 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:23.547 00:38:45 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:23.547 00:38:45 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:23.547 00:38:45 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:23.547 00:38:45 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:12:23.547 00:38:45 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:12:23.547 [2024-07-25 00:38:45.864560] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:12:23.547 [2024-07-25 00:38:45.865032] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116242 ] 00:12:23.547 [2024-07-25 00:38:46.032776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.806 [2024-07-25 00:38:46.344540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:24.067 00:38:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:26.608 00:38:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:26.608 00:38:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.608 00:38:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:26.608 
00:38:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:26.608 00:38:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:26.608 00:38:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.608 00:38:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:26.608 00:38:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:26.608 00:38:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:26.608 00:38:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.608 00:38:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:26.608 00:38:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:26.608 00:38:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:26.608 00:38:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.608 00:38:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:26.608 00:38:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:26.608 00:38:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:26.608 00:38:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.608 00:38:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:26.608 00:38:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:26.608 00:38:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:26.608 00:38:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.608 00:38:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:26.608 00:38:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:26.608 00:38:48 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:26.608 00:38:48 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:12:26.608 00:38:48 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:26.608 00:12:26.608 real 0m2.915s 00:12:26.608 user 0m2.586s 00:12:26.608 sys 0m0.246s 00:12:26.608 00:38:48 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:26.608 00:38:48 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:12:26.608 ************************************ 00:12:26.608 END TEST accel_crc32c_C2 00:12:26.608 ************************************ 00:12:26.608 00:38:48 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:12:26.608 00:38:48 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:26.608 00:38:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:26.608 00:38:48 accel -- common/autotest_common.sh@10 -- # set +x 00:12:26.608 ************************************ 00:12:26.608 START TEST accel_copy 00:12:26.608 ************************************ 00:12:26.608 00:38:48 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:12:26.608 00:38:48 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:12:26.608 00:38:48 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:12:26.608 00:38:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:26.608 00:38:48 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:12:26.608 00:38:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:26.608 00:38:48 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:12:26.608 00:38:48 
accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:12:26.608 00:38:48 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:26.608 00:38:48 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:26.608 00:38:48 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:26.608 00:38:48 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:26.608 00:38:48 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:26.608 00:38:48 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:12:26.608 00:38:48 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:12:26.608 [2024-07-25 00:38:48.866352] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:12:26.608 [2024-07-25 00:38:48.867271] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116305 ] 00:12:26.608 [2024-07-25 00:38:49.061093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.868 [2024-07-25 00:38:49.366486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:27.128 
00:38:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:12:27.128 00:38:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:27.129 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:27.129 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:27.129 00:38:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:27.129 00:38:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:27.129 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:27.129 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:27.129 00:38:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:27.129 00:38:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:27.129 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:27.129 00:38:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:29.671 00:38:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:29.671 00:38:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:29.671 00:38:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:29.671 00:38:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:29.671 00:38:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:29.671 00:38:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:29.671 00:38:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:29.671 00:38:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:29.671 00:38:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:29.671 00:38:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:29.671 00:38:51 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:12:29.671 00:38:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:29.671 00:38:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:29.672 00:38:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:29.672 00:38:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:29.672 00:38:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:29.672 00:38:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:29.672 00:38:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:29.672 00:38:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:29.672 00:38:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:29.672 00:38:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:29.672 00:38:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:29.672 00:38:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:29.672 00:38:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:29.672 00:38:51 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:29.672 00:38:51 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:12:29.672 00:38:51 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:29.672 00:12:29.672 real 0m2.950s 00:12:29.672 user 0m2.566s 00:12:29.672 sys 0m0.296s 00:12:29.672 00:38:51 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:29.672 00:38:51 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:12:29.672 ************************************ 00:12:29.672 END TEST accel_copy 00:12:29.672 ************************************ 00:12:29.672 00:38:51 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:29.673 00:38:51 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:12:29.673 00:38:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:29.673 00:38:51 accel -- common/autotest_common.sh@10 -- # set +x 00:12:29.673 ************************************ 00:12:29.673 START TEST accel_fill 00:12:29.673 ************************************ 00:12:29.673 00:38:51 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:29.673 00:38:51 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:12:29.673 00:38:51 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:12:29.673 00:38:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:29.673 00:38:51 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:29.673 00:38:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:29.673 00:38:51 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:29.673 00:38:51 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:12:29.673 00:38:51 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:29.673 00:38:51 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:29.673 00:38:51 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:29.673 00:38:51 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:29.673 00:38:51 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:29.673 00:38:51 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:12:29.673 00:38:51 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 
00:12:29.673 [2024-07-25 00:38:51.875889] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:12:29.673 [2024-07-25 00:38:51.876384] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116370 ] 00:12:29.673 [2024-07-25 00:38:52.054680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.943 [2024-07-25 00:38:52.348329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@20 -- # val=software 
00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.202 00:38:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:32.114 00:38:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:32.114 00:38:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:32.114 00:38:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:32.114 00:38:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:32.114 00:38:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:32.114 00:38:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:32.114 00:38:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:32.114 00:38:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:32.114 00:38:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:32.114 00:38:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:32.114 00:38:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:32.114 00:38:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:32.114 00:38:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:32.114 00:38:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:32.114 00:38:54 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:12:32.114 00:38:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:32.114 00:38:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:32.114 00:38:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:32.114 00:38:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:32.114 00:38:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:32.114 00:38:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:32.114 00:38:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:32.114 00:38:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:32.114 00:38:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:32.114 00:38:54 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:32.115 00:38:54 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:12:32.115 00:38:54 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:32.115 00:12:32.115 real 0m2.904s 00:12:32.115 user 0m2.562s 00:12:32.115 sys 0m0.265s 00:12:32.115 00:38:54 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:32.115 00:38:54 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:12:32.115 ************************************ 00:12:32.115 END TEST accel_fill 00:12:32.115 ************************************ 00:12:32.378 00:38:54 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:12:32.378 00:38:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:32.378 00:38:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:32.378 00:38:54 accel -- common/autotest_common.sh@10 -- # set +x 00:12:32.378 ************************************ 00:12:32.378 START TEST accel_copy_crc32c 00:12:32.378 ************************************ 00:12:32.378 00:38:54 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:12:32.378 00:38:54 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:12:32.378 00:38:54 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:12:32.378 00:38:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:32.378 00:38:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:32.378 00:38:54 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:12:32.378 00:38:54 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:12:32.378 00:38:54 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:12:32.378 00:38:54 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:32.378 00:38:54 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:32.378 00:38:54 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:32.378 00:38:54 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:32.378 00:38:54 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:32.378 00:38:54 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:12:32.378 00:38:54 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:12:32.378 [2024-07-25 00:38:54.839027] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:12:32.379 [2024-07-25 00:38:54.839373] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116434 ] 00:12:32.379 [2024-07-25 00:38:55.000601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.949 [2024-07-25 00:38:55.301535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:33.208 
00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:12:33.208 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:33.209 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:33.209 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:33.209 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:33.209 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:33.209 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:33.209 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:33.209 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:33.209 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:33.209 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:33.209 00:38:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:35.114 00:38:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:35.114 00:38:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:35.114 00:38:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:35.114 00:38:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:35.114 00:38:57 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val= 00:12:35.114 00:38:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:35.114 00:38:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:35.114 00:38:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:35.114 00:38:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:35.114 00:38:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:35.114 00:38:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:35.114 00:38:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:35.114 00:38:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:35.114 00:38:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:35.114 00:38:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:35.114 00:38:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:35.114 00:38:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:35.114 00:38:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:35.114 00:38:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:35.114 00:38:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:35.114 00:38:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:35.115 00:38:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:35.115 00:38:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:35.115 00:38:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:35.115 00:38:57 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:35.115 00:38:57 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:35.115 00:38:57 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:35.115 00:12:35.115 real 0m2.885s 00:12:35.115 user 0m2.547s 00:12:35.115 sys 0m0.257s 00:12:35.115 00:38:57 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:35.115 00:38:57 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:12:35.115 ************************************ 00:12:35.115 END TEST accel_copy_crc32c 00:12:35.115 ************************************ 00:12:35.115 00:38:57 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:12:35.115 00:38:57 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:12:35.115 00:38:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:35.115 00:38:57 accel -- common/autotest_common.sh@10 -- # set +x 00:12:35.115 ************************************ 00:12:35.115 START TEST accel_copy_crc32c_C2 00:12:35.115 ************************************ 00:12:35.115 00:38:57 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:12:35.115 00:38:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:12:35.115 00:38:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:12:35.115 00:38:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:35.115 00:38:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:12:35.115 00:38:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:35.115 00:38:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c 
/dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:12:35.115 00:38:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:12:35.115 00:38:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:35.115 00:38:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:35.115 00:38:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:35.115 00:38:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:35.115 00:38:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:35.115 00:38:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:12:35.115 00:38:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:12:35.374 [2024-07-25 00:38:57.790768] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:12:35.374 [2024-07-25 00:38:57.791809] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116491 ] 00:12:35.374 [2024-07-25 00:38:57.973826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.634 [2024-07-25 00:38:58.268184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var 
val 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.204 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:36.205 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- 
# read -r var val 00:12:36.205 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:36.205 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.205 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:36.205 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:36.205 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:36.205 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.205 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:36.205 00:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:38.112 00:39:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:38.112 00:39:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:38.112 00:39:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:38.112 00:39:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:38.112 00:39:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:38.112 00:39:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:38.112 00:39:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:38.112 00:39:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:38.112 00:39:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:38.112 00:39:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:38.112 00:39:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:38.112 00:39:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:38.112 00:39:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:38.112 00:39:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:38.112 00:39:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:38.112 00:39:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:38.112 00:39:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:38.112 00:39:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:38.112 00:39:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:38.112 00:39:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:38.112 00:39:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:38.112 00:39:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:38.112 00:39:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:38.112 00:39:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:38.112 00:39:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:38.112 00:39:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:38.112 00:39:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:38.112 00:12:38.112 real 0m2.893s 00:12:38.112 user 0m2.552s 00:12:38.112 sys 0m0.279s 00:12:38.112 00:39:00 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:38.112 00:39:00 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:12:38.112 ************************************ 00:12:38.112 END TEST accel_copy_crc32c_C2 00:12:38.112 ************************************ 00:12:38.112 00:39:00 accel -- accel/accel.sh@107 -- # run_test 
accel_dualcast accel_test -t 1 -w dualcast -y 00:12:38.112 00:39:00 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:38.112 00:39:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:38.112 00:39:00 accel -- common/autotest_common.sh@10 -- # set +x 00:12:38.112 ************************************ 00:12:38.112 START TEST accel_dualcast 00:12:38.112 ************************************ 00:12:38.112 00:39:00 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:12:38.112 00:39:00 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:12:38.112 00:39:00 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:12:38.112 00:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:38.112 00:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:38.112 00:39:00 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:12:38.112 00:39:00 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:12:38.112 00:39:00 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:12:38.112 00:39:00 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:38.112 00:39:00 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:38.112 00:39:00 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:38.112 00:39:00 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:38.112 00:39:00 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:38.112 00:39:00 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:12:38.112 00:39:00 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:12:38.112 [2024-07-25 00:39:00.736202] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:12:38.112 [2024-07-25 00:39:00.736668] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116557 ] 00:12:38.372 [2024-07-25 00:39:00.904695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.632 [2024-07-25 00:39:01.202642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:38.913 00:39:01 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:38.913 00:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:41.453 00:39:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:41.453 00:39:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:41.453 00:39:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:41.453 00:39:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:41.453 00:39:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:41.453 00:39:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:41.453 00:39:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:41.453 00:39:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:41.453 00:39:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:41.453 00:39:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:41.453 00:39:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:41.453 00:39:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:41.453 00:39:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:41.453 00:39:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:41.453 00:39:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:41.453 00:39:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:12:41.453 00:39:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:41.453 00:39:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:41.453 00:39:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:41.453 00:39:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:41.453 00:39:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:41.453 00:39:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:41.453 00:39:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:41.453 00:39:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:41.453 00:39:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:41.453 00:39:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:12:41.453 00:39:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:41.453 00:12:41.453 real 0m2.881s 00:12:41.453 user 0m2.525s 00:12:41.453 sys 0m0.273s 00:12:41.453 00:39:03 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:41.453 00:39:03 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:12:41.453 ************************************ 00:12:41.453 END TEST accel_dualcast 00:12:41.453 ************************************ 00:12:41.453 00:39:03 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:12:41.453 00:39:03 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:41.453 00:39:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:41.453 00:39:03 accel -- common/autotest_common.sh@10 -- # set +x 00:12:41.453 ************************************ 00:12:41.453 START TEST accel_compare 00:12:41.453 ************************************ 00:12:41.453 00:39:03 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:12:41.453 00:39:03 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:12:41.453 00:39:03 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:12:41.453 00:39:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:41.453 00:39:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:41.453 00:39:03 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:12:41.453 00:39:03 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:12:41.453 00:39:03 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:12:41.453 00:39:03 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:41.453 00:39:03 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:41.453 00:39:03 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:41.453 00:39:03 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:41.453 00:39:03 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:41.453 00:39:03 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:12:41.453 00:39:03 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:12:41.453 [2024-07-25 00:39:03.668609] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:12:41.453 [2024-07-25 00:39:03.669413] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116615 ] 00:12:41.453 [2024-07-25 00:39:03.834408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.712 [2024-07-25 00:39:04.135765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:41.970 00:39:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:43.876 00:39:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:43.876 00:39:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:43.876 00:39:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:43.876 00:39:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:43.876 00:39:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:43.877 00:39:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:43.877 00:39:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:43.877 00:39:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:43.877 00:39:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:43.877 00:39:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:43.877 00:39:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:43.877 00:39:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:43.877 00:39:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:43.877 00:39:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:43.877 00:39:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:43.877 00:39:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:43.877 00:39:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:12:43.877 00:39:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:43.877 00:39:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:43.877 00:39:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:43.877 00:39:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:43.877 00:39:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:43.877 00:39:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:43.877 00:39:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:43.877 00:39:06 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:43.877 00:39:06 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:12:43.877 00:39:06 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:43.877 00:12:43.877 real 0m2.866s 00:12:43.877 user 0m2.540s 00:12:43.877 sys 0m0.249s 00:12:43.877 00:39:06 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:43.877 00:39:06 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:12:43.877 ************************************ 00:12:43.877 END TEST accel_compare 00:12:43.877 ************************************ 00:12:44.136 00:39:06 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:12:44.136 00:39:06 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:44.136 00:39:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:44.136 00:39:06 accel -- common/autotest_common.sh@10 -- # set +x 00:12:44.136 ************************************ 00:12:44.136 START TEST accel_xor 00:12:44.136 ************************************ 00:12:44.136 00:39:06 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:12:44.136 00:39:06 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:12:44.136 00:39:06 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:12:44.136 00:39:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.136 00:39:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.136 00:39:06 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:12:44.136 00:39:06 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:12:44.136 00:39:06 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:12:44.136 00:39:06 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:44.136 00:39:06 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:44.137 00:39:06 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:44.137 00:39:06 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:44.137 00:39:06 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:44.137 00:39:06 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:12:44.137 00:39:06 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:12:44.137 [2024-07-25 00:39:06.603738] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:12:44.137 [2024-07-25 00:39:06.604198] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116678 ] 00:12:44.137 [2024-07-25 00:39:06.771125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.705 [2024-07-25 00:39:07.061830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.705 00:39:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:44.705 00:39:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.705 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.705 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.705 00:39:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:44.705 00:39:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.705 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.705 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.705 00:39:07 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:12:44.705 00:39:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.705 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.705 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.705 00:39:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:44.705 00:39:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.705 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.705 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.705 00:39:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:44.705 00:39:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.705 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.705 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.705 00:39:07 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:12:44.964 00:39:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.964 00:39:07 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:12:44.964 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.964 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.964 00:39:07 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:12:44.964 00:39:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.964 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.964 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.964 00:39:07 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:44.964 00:39:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.964 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.964 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.964 00:39:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:44.964 00:39:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.964 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.964 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.964 00:39:07 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:12:44.964 00:39:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.964 00:39:07 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:12:44.964 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.964 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.964 00:39:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:44.964 00:39:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.964 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.964 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.964 00:39:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:44.964 00:39:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.964 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.964 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.964 00:39:07 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:12:44.964 00:39:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.965 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.965 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.965 00:39:07 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:12:44.965 00:39:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.965 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.965 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.965 00:39:07 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:12:44.965 00:39:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.965 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.965 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.965 00:39:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:44.965 00:39:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.965 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.965 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.965 00:39:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:44.965 00:39:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.965 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.965 00:39:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:46.872 00:39:09 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:46.872 00:12:46.872 real 0m2.862s 00:12:46.872 user 0m2.548s 00:12:46.872 sys 0m0.250s 00:12:46.872 00:39:09 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:46.872 00:39:09 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:12:46.872 ************************************ 00:12:46.872 END TEST accel_xor 00:12:46.872 ************************************ 00:12:46.872 00:39:09 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:12:46.872 00:39:09 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:12:46.872 00:39:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:46.872 00:39:09 accel -- common/autotest_common.sh@10 -- # set +x 00:12:46.872 ************************************ 00:12:46.872 START TEST accel_xor 00:12:46.872 ************************************ 00:12:46.872 00:39:09 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:12:46.872 00:39:09 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:12:47.131 [2024-07-25 00:39:09.533358] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:12:47.131 [2024-07-25 00:39:09.533949] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116732 ] 00:12:47.132 [2024-07-25 00:39:09.721359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.390 [2024-07-25 00:39:10.018435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.984 00:39:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:47.984 00:39:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:47.984 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:47.984 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:47.984 00:39:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:47.984 00:39:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:47.984 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:47.984 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:47.984 00:39:10 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:12:47.984 00:39:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:47.984 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:47.984 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:47.984 00:39:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:47.984 00:39:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:47.984 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:47.984 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:47.985 00:39:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:49.888 00:39:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:49.888 00:39:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:49.888 00:39:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:49.888 00:39:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:49.888 00:39:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:49.888 00:39:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:49.888 00:39:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:49.889 00:39:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:49.889 00:39:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:49.889 00:39:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:49.889 00:39:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:49.889 00:39:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:49.889 00:39:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:49.889 00:39:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:49.889 00:39:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:49.889 00:39:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:49.889 00:39:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:49.889 00:39:12 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:12:49.889 00:39:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:49.889 00:39:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:49.889 00:39:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:49.889 00:39:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:49.889 00:39:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:49.889 00:39:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:49.889 00:39:12 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:49.889 00:39:12 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:12:49.889 00:39:12 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:49.889 00:12:49.889 real 0m2.923s 00:12:49.889 user 0m2.592s 00:12:49.889 sys 0m0.278s 00:12:49.889 00:39:12 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:49.889 00:39:12 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:12:49.889 ************************************ 00:12:49.889 END TEST accel_xor 00:12:49.889 ************************************ 00:12:49.889 00:39:12 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:12:49.889 00:39:12 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:12:49.889 00:39:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:49.889 00:39:12 accel -- common/autotest_common.sh@10 -- # set +x 00:12:49.889 ************************************ 00:12:49.889 START TEST accel_dif_verify 00:12:49.889 ************************************ 00:12:49.889 00:39:12 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:12:49.889 00:39:12 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:12:49.889 00:39:12 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:12:49.889 00:39:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:49.889 00:39:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:49.889 00:39:12 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:12:49.889 00:39:12 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:12:49.889 00:39:12 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:12:49.889 00:39:12 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:49.889 00:39:12 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:49.889 00:39:12 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:49.889 00:39:12 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:49.889 00:39:12 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:49.889 00:39:12 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:12:49.889 00:39:12 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:12:49.889 [2024-07-25 00:39:12.506717] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
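The accel_xor pass above drives the software accel engine for one second with three XOR source buffers (-x 3) and result verification (-y). As a rough standalone reproduction, assuming the SPDK tree is built at the path this job uses and skipping the JSON accel config the harness feeds in over /dev/fd/62:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk   # build location taken from the log above
  # 1-second software XOR run, 3 source buffers, verify the result
  "$SPDK_DIR"/build/examples/accel_perf -t 1 -w xor -y -x 3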
00:12:49.889 [2024-07-25 00:39:12.507333] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116793 ] 00:12:50.148 [2024-07-25 00:39:12.690355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.406 [2024-07-25 00:39:12.987555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.665 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.666 00:39:13 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.666 00:39:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:53.200 00:39:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:53.200 00:39:15 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:53.200 00:39:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:53.200 00:39:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:53.200 00:39:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:53.200 00:39:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:53.200 00:39:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:53.200 00:39:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:53.200 00:39:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:53.200 00:39:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:53.200 00:39:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:53.200 00:39:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:53.200 00:39:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:53.200 00:39:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:53.200 00:39:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:53.200 00:39:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:53.200 00:39:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:53.200 00:39:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:53.200 00:39:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:53.200 00:39:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:53.200 00:39:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:53.200 00:39:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:53.200 00:39:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:53.200 00:39:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:53.200 ************************************ 00:12:53.200 END TEST accel_dif_verify 00:12:53.200 ************************************ 00:12:53.200 00:39:15 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:53.200 00:39:15 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:12:53.200 00:39:15 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:53.200 00:12:53.200 real 0m2.922s 00:12:53.200 user 0m2.585s 00:12:53.200 sys 0m0.258s 00:12:53.200 00:39:15 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:53.200 00:39:15 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:12:53.200 00:39:15 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:12:53.200 00:39:15 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:12:53.200 00:39:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:53.200 00:39:15 accel -- common/autotest_common.sh@10 -- # set +x 00:12:53.200 ************************************ 00:12:53.200 START TEST accel_dif_generate 00:12:53.200 ************************************ 00:12:53.200 00:39:15 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:12:53.200 00:39:15 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:12:53.200 00:39:15 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:12:53.200 00:39:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:53.200 00:39:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:53.200 00:39:15 
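The accel_dif_verify pass above runs the dif_verify workload on the software module for one second. A minimal sketch of the same invocation outside the harness, under the same assumptions (SPDK built under /home/vagrant/spdk_repo/spdk, default accel config in place of the harness-supplied one):

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  # 1-second DIF verify workload on the software engine
  "$SPDK_DIR"/build/examples/accel_perf -t 1 -w dif_verify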
accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:12:53.200 00:39:15 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:12:53.200 00:39:15 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:12:53.200 00:39:15 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:53.200 00:39:15 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:53.200 00:39:15 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:53.200 00:39:15 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:53.200 00:39:15 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:53.200 00:39:15 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:12:53.200 00:39:15 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:12:53.200 [2024-07-25 00:39:15.482671] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:12:53.200 [2024-07-25 00:39:15.483006] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116857 ] 00:12:53.200 [2024-07-25 00:39:15.670404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.459 [2024-07-25 00:39:15.974036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:12:53.719 00:39:16 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:53.719 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:53.720 00:39:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:12:53.720 00:39:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:53.720 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:53.720 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:53.720 00:39:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:12:53.720 00:39:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:53.720 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # 
IFS=: 00:12:53.720 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:53.720 00:39:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:12:53.720 00:39:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:53.720 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:53.720 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:53.720 00:39:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:53.720 00:39:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:53.720 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:53.720 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:53.720 00:39:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:53.720 00:39:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:53.720 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:53.720 00:39:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:56.253 00:39:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:56.253 00:39:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:56.253 00:39:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:56.253 00:39:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:56.253 00:39:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:56.253 00:39:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:56.253 00:39:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:56.253 00:39:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:56.253 00:39:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:56.253 00:39:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:56.253 00:39:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:56.253 00:39:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:56.253 00:39:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:56.253 00:39:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:56.253 00:39:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:56.253 00:39:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:56.253 00:39:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:56.253 00:39:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:56.253 00:39:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:56.253 00:39:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:56.253 00:39:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:56.253 00:39:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:56.253 00:39:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:56.253 00:39:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:56.253 00:39:18 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:56.253 00:39:18 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:12:56.253 00:39:18 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:56.253 00:12:56.253 real 0m2.941s 00:12:56.253 user 0m2.594s 00:12:56.253 sys 0m0.256s 00:12:56.253 00:39:18 accel.accel_dif_generate -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:12:56.253 00:39:18 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:12:56.253 ************************************ 00:12:56.253 END TEST accel_dif_generate 00:12:56.253 ************************************ 00:12:56.253 00:39:18 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:12:56.253 00:39:18 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:12:56.253 00:39:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:56.253 00:39:18 accel -- common/autotest_common.sh@10 -- # set +x 00:12:56.253 ************************************ 00:12:56.253 START TEST accel_dif_generate_copy 00:12:56.253 ************************************ 00:12:56.253 00:39:18 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:12:56.253 00:39:18 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:12:56.253 00:39:18 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:12:56.253 00:39:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:56.253 00:39:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:56.253 00:39:18 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:12:56.253 00:39:18 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:12:56.253 00:39:18 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:12:56.253 00:39:18 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:56.253 00:39:18 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:56.253 00:39:18 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:56.253 00:39:18 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:56.253 00:39:18 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:56.253 00:39:18 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:12:56.253 00:39:18 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:12:56.253 [2024-07-25 00:39:18.476181] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
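The accel_dif_generate pass differs from the verify pass only in the workload selector. A hedged standalone equivalent, again omitting the harness-supplied JSON config:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  # 1-second DIF generate workload on the software engine
  "$SPDK_DIR"/build/examples/accel_perf -t 1 -w dif_generate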
00:12:56.253 [2024-07-25 00:39:18.476454] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116915 ] 00:12:56.253 [2024-07-25 00:39:18.653388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.511 [2024-07-25 00:39:18.949526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:56.770 00:39:19 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:56.770 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:56.771 00:39:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:59.305 00:39:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:59.305 00:39:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:59.305 00:39:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:12:59.305 00:39:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:59.305 00:39:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:59.305 00:39:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:59.305 00:39:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:59.305 00:39:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:59.305 00:39:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:59.305 00:39:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:59.305 00:39:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:59.305 00:39:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:59.305 00:39:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:59.305 00:39:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:59.305 00:39:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:59.305 00:39:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:59.305 00:39:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:59.305 00:39:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:59.305 00:39:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:59.305 00:39:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:59.305 00:39:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:59.305 00:39:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:59.305 00:39:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:59.305 00:39:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:59.305 00:39:21 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:59.305 00:39:21 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:12:59.305 00:39:21 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:59.305 00:12:59.305 real 0m2.938s 00:12:59.305 user 0m2.566s 00:12:59.305 sys 0m0.303s 00:12:59.305 00:39:21 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:59.305 ************************************ 00:12:59.305 00:39:21 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:12:59.305 END TEST accel_dif_generate_copy 00:12:59.305 ************************************ 00:12:59.305 00:39:21 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:12:59.305 00:39:21 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:59.305 00:39:21 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:12:59.305 00:39:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:59.305 00:39:21 accel -- common/autotest_common.sh@10 -- # set +x 00:12:59.305 ************************************ 00:12:59.305 START TEST accel_comp 00:12:59.305 ************************************ 00:12:59.305 00:39:21 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:59.305 00:39:21 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:12:59.305 00:39:21 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:12:59.305 00:39:21 
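The accel_dif_generate_copy pass uses the same invocation pattern with only the -w workload selector changed. A standalone sketch under the same assumptions:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  # 1-second DIF generate-and-copy workload on the software engine
  "$SPDK_DIR"/build/examples/accel_perf -t 1 -w dif_generate_copy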
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:59.305 00:39:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:59.305 00:39:21 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:59.305 00:39:21 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:59.305 00:39:21 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:12:59.305 00:39:21 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:59.305 00:39:21 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:59.305 00:39:21 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:59.305 00:39:21 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:59.305 00:39:21 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:59.305 00:39:21 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:12:59.305 00:39:21 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:12:59.305 [2024-07-25 00:39:21.465177] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:12:59.305 [2024-07-25 00:39:21.465643] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116981 ] 00:12:59.305 [2024-07-25 00:39:21.632451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.305 [2024-07-25 00:39:21.950325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var 
val 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:59.874 00:39:22 accel.accel_comp -- 
accel/accel.sh@19 -- # read -r var val 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:59.874 00:39:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:01.777 00:39:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:01.777 00:39:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:01.777 00:39:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:01.777 00:39:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:01.777 00:39:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:01.777 00:39:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:01.777 00:39:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:01.777 00:39:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:01.778 00:39:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:01.778 00:39:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:01.778 00:39:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:01.778 00:39:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:01.778 00:39:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:01.778 00:39:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:01.778 00:39:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:01.778 00:39:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:01.778 00:39:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:01.778 00:39:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:01.778 00:39:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:01.778 00:39:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:01.778 00:39:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:01.778 00:39:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:01.778 00:39:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:01.778 00:39:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:01.778 00:39:24 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:01.778 00:39:24 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:13:01.778 00:39:24 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:01.778 00:13:01.778 real 0m2.971s 00:13:01.778 user 0m2.607s 00:13:01.778 sys 0m0.272s 00:13:01.778 00:39:24 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:01.778 ************************************ 00:13:01.778 END TEST accel_comp 00:13:01.778 ************************************ 00:13:01.778 00:39:24 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:13:02.037 00:39:24 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:02.037 00:39:24 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:13:02.037 00:39:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:02.037 00:39:24 accel -- common/autotest_common.sh@10 -- # set +x 00:13:02.037 ************************************ 00:13:02.037 START TEST accel_decomp 00:13:02.037 ************************************ 00:13:02.037 00:39:24 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:02.037 00:39:24 accel.accel_decomp 
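The accel_comp pass compresses the sample input file shipped with the test suite (test/accel/bib) for one second on the software engine. A rough standalone equivalent, assuming the same build path and default accel config:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  # 1-second software compress run over the suite's sample input file
  "$SPDK_DIR"/build/examples/accel_perf -t 1 -w compress -l "$SPDK_DIR"/test/accel/bib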
-- accel/accel.sh@16 -- # local accel_opc 00:13:02.037 00:39:24 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:13:02.037 00:39:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:02.037 00:39:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:02.037 00:39:24 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:02.037 00:39:24 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:02.037 00:39:24 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:13:02.037 00:39:24 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:02.037 00:39:24 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:02.037 00:39:24 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:02.037 00:39:24 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:02.037 00:39:24 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:02.037 00:39:24 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:13:02.037 00:39:24 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:13:02.037 [2024-07-25 00:39:24.494319] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:13:02.037 [2024-07-25 00:39:24.494599] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117044 ] 00:13:02.037 [2024-07-25 00:39:24.678242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.605 [2024-07-25 00:39:24.977992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.864 00:39:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:02.864 00:39:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:02.864 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:02.864 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:02.864 00:39:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 
00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:02.865 00:39:25 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:02.865 00:39:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:04.832 00:39:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:04.832 00:39:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:04.832 00:39:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:04.832 00:39:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:04.832 00:39:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:04.832 00:39:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:04.832 00:39:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:04.832 00:39:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:04.832 00:39:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:04.832 00:39:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:04.832 00:39:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:04.832 00:39:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:04.832 00:39:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:04.832 00:39:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:04.832 00:39:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:04.832 00:39:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:04.832 00:39:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:04.832 00:39:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:04.832 00:39:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:04.832 00:39:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:04.832 00:39:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:04.832 00:39:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:04.832 00:39:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:04.832 00:39:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:04.832 00:39:27 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:04.832 00:39:27 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:04.832 00:39:27 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:04.832 00:13:04.832 real 0m2.914s 00:13:04.832 user 0m2.592s 00:13:04.832 sys 0m0.269s 00:13:04.832 00:39:27 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:04.832 00:39:27 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:13:04.832 ************************************ 00:13:04.832 END TEST accel_decomp 00:13:04.832 ************************************ 00:13:04.832 00:39:27 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:13:04.832 00:39:27 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:13:04.832 00:39:27 accel -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:13:04.832 00:39:27 accel -- common/autotest_common.sh@10 -- # set +x 00:13:04.832 ************************************ 00:13:04.832 START TEST accel_decomp_full 00:13:04.832 ************************************ 00:13:04.832 00:39:27 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:13:04.832 00:39:27 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:13:04.832 00:39:27 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:13:04.832 00:39:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:04.832 00:39:27 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:13:04.832 00:39:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:04.832 00:39:27 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:13:04.832 00:39:27 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:13:04.832 00:39:27 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:04.832 00:39:27 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:04.832 00:39:27 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:04.832 00:39:27 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:04.832 00:39:27 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:04.832 00:39:27 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:13:04.832 00:39:27 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:13:04.832 [2024-07-25 00:39:27.457208] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
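The accel_decomp_full variant starting here adds -o 0 to the same command line. Judging by the values traced into val (the plain run records '4096 bytes', this one records '111250 bytes' a few lines below), -o 0 appears to make accel_perf submit the whole bib file as one buffer instead of 4 KiB chunks. A hedged sketch of that invocation, with the path taken from the log:

    SPDK_REPO=/home/vagrant/spdk_repo/spdk   # path as recorded in the log
    # Same decompress workload, but with -o 0 so the traced data size is
    # the full 111250-byte input rather than 4096-byte chunks.
    "$SPDK_REPO/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK_REPO/test/accel/bib" -y -o 0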
00:13:04.832 [2024-07-25 00:39:27.457361] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117102 ] 00:13:05.108 [2024-07-25 00:39:27.618411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.368 [2024-07-25 00:39:27.919945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:05.627 00:39:28 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:05.627 00:39:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:08.166 00:39:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:08.166 00:39:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:08.166 00:39:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:08.166 00:39:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:08.166 00:39:30 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:08.166 00:39:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:08.166 00:39:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:08.166 00:39:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:08.166 00:39:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:08.166 00:39:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:08.166 00:39:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:08.166 00:39:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:08.166 00:39:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:08.166 00:39:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:08.166 00:39:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:08.166 00:39:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:08.166 00:39:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:08.166 00:39:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:08.166 00:39:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:08.166 00:39:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:08.166 00:39:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:08.166 00:39:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:08.166 00:39:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:08.166 00:39:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:08.166 00:39:30 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:08.166 00:39:30 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:08.166 00:39:30 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:08.166 00:13:08.166 real 0m2.891s 00:13:08.166 user 0m2.554s 00:13:08.166 sys 0m0.275s 00:13:08.166 00:39:30 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:08.166 ************************************ 00:13:08.166 END TEST accel_decomp_full 00:13:08.166 ************************************ 00:13:08.166 00:39:30 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:13:08.166 00:39:30 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:08.166 00:39:30 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:13:08.166 00:39:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:08.166 00:39:30 accel -- common/autotest_common.sh@10 -- # set +x 00:13:08.166 ************************************ 00:13:08.166 START TEST accel_decomp_mcore 00:13:08.166 ************************************ 00:13:08.166 00:39:30 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:08.166 00:39:30 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:13:08.166 00:39:30 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:13:08.166 00:39:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.166 00:39:30 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:08.166 00:39:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- 
# read -r var val 00:13:08.166 00:39:30 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:08.166 00:39:30 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:13:08.166 00:39:30 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:08.166 00:39:30 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:08.166 00:39:30 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:08.166 00:39:30 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:08.166 00:39:30 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:08.166 00:39:30 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:13:08.166 00:39:30 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:13:08.166 [2024-07-25 00:39:30.440060] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:13:08.166 [2024-07-25 00:39:30.440361] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117165 ] 00:13:08.166 [2024-07-25 00:39:30.645945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:08.424 [2024-07-25 00:39:30.947840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.424 [2024-07-25 00:39:30.948029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:08.424 [2024-07-25 00:39:30.948389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:08.424 [2024-07-25 00:39:30.948390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read 
-r var val 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.682 
00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.682 00:39:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:11.211 00:13:11.211 real 0m3.003s 00:13:11.211 user 0m8.432s 00:13:11.211 sys 0m0.304s 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:11.211 00:39:33 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:13:11.211 ************************************ 00:13:11.211 END TEST accel_decomp_mcore 00:13:11.211 ************************************ 00:13:11.211 00:39:33 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:11.211 00:39:33 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:13:11.211 00:39:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:11.211 00:39:33 accel -- common/autotest_common.sh@10 -- # set +x 00:13:11.211 ************************************ 00:13:11.211 START TEST accel_decomp_full_mcore 00:13:11.211 ************************************ 00:13:11.211 00:39:33 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:11.211 00:39:33 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:13:11.211 00:39:33 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:13:11.211 00:39:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:11.211 00:39:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:11.211 00:39:33 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:11.211 00:39:33 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:11.211 00:39:33 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:13:11.211 00:39:33 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:11.211 00:39:33 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:11.211 00:39:33 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:11.211 00:39:33 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:11.211 00:39:33 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:11.211 00:39:33 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:13:11.211 00:39:33 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
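accel_decomp_full_mcore, configured here, re-runs the full-buffer workload with the -m 0xf core mask. The log shows the effect directly: "Total cores available: 4", reactors started on cores 0 through 3, and, in the accel_decomp_mcore summary just above, roughly 8.4 s of user CPU time packed into about 3 s of wall time. A sketch of the multi-core invocation, with the core mask and paths as recorded in the log:

    SPDK_REPO=/home/vagrant/spdk_repo/spdk   # path as recorded in the log
    # Decompress spread across four reactors (cores 0-3 via the 0xf mask),
    # full-size buffers (-o 0), verified output (-y).
    "$SPDK_REPO/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK_REPO/test/accel/bib" -y -o 0 -m 0xf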
00:13:11.211 [2024-07-25 00:39:33.483127] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:13:11.211 [2024-07-25 00:39:33.483306] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117233 ] 00:13:11.211 [2024-07-25 00:39:33.664649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:11.470 [2024-07-25 00:39:33.960305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.470 [2024-07-25 00:39:33.960487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.470 [2024-07-25 00:39:33.960905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:11.470 [2024-07-25 00:39:33.960907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 
00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:11.759 00:39:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.301 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:14.301 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:14.302 
00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:14.302 00:13:14.302 real 0m2.985s 00:13:14.302 user 0m8.529s 00:13:14.302 sys 0m0.253s 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:14.302 00:39:36 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:13:14.302 ************************************ 00:13:14.302 END TEST accel_decomp_full_mcore 00:13:14.302 ************************************ 00:13:14.302 00:39:36 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:14.302 00:39:36 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:13:14.302 00:39:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:14.302 00:39:36 accel -- common/autotest_common.sh@10 -- # set +x 00:13:14.302 ************************************ 00:13:14.302 START TEST accel_decomp_mthread 00:13:14.302 ************************************ 00:13:14.302 00:39:36 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:14.302 00:39:36 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:13:14.302 00:39:36 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:13:14.302 00:39:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.302 00:39:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.302 00:39:36 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:14.302 00:39:36 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:14.302 00:39:36 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:13:14.302 00:39:36 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:14.302 00:39:36 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:14.302 00:39:36 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:14.302 00:39:36 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:14.302 00:39:36 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:14.302 00:39:36 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:13:14.302 00:39:36 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:13:14.302 [2024-07-25 00:39:36.544660] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
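accel_decomp_mthread, starting here, drops the core mask and instead passes -T 2; the trace below sets val=2 accordingly, which appears to request two worker threads on the single core 0x1. A sketch with the flags as recorded in the log:

    SPDK_REPO=/home/vagrant/spdk_repo/spdk   # path as recorded in the log
    # Single-core decompress using -T 2 (two threads, per the val=2 trace),
    # with output verification (-y).
    "$SPDK_REPO/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK_REPO/test/accel/bib" -y -T 2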
00:13:14.302 [2024-07-25 00:39:36.544885] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117294 ] 00:13:14.302 [2024-07-25 00:39:36.724670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.561 [2024-07-25 00:39:37.033229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.820 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.821 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:14.821 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.821 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.821 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.821 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:14.821 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.821 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.821 00:39:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:17.357 00:13:17.357 real 0m2.925s 00:13:17.357 user 0m2.589s 00:13:17.357 sys 0m0.260s 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:17.357 00:39:39 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:13:17.357 ************************************ 00:13:17.357 END TEST accel_decomp_mthread 00:13:17.357 ************************************ 00:13:17.357 00:39:39 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:17.357 00:39:39 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:13:17.357 00:39:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:17.357 00:39:39 accel -- common/autotest_common.sh@10 -- # set +x 00:13:17.357 ************************************ 00:13:17.357 START TEST accel_decomp_full_mthread 00:13:17.357 
************************************ 00:13:17.357 00:39:39 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:17.357 00:39:39 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:13:17.357 00:39:39 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:13:17.357 00:39:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.357 00:39:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.357 00:39:39 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:17.357 00:39:39 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:17.357 00:39:39 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:13:17.357 00:39:39 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:17.357 00:39:39 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:17.357 00:39:39 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:17.357 00:39:39 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:17.357 00:39:39 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:17.357 00:39:39 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:13:17.357 00:39:39 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:13:17.357 [2024-07-25 00:39:39.542593] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:13:17.358 [2024-07-25 00:39:39.543427] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117357 ] 00:13:17.358 [2024-07-25 00:39:39.722279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.616 [2024-07-25 00:39:40.035092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.875 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:17.875 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.875 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.875 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.875 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:17.875 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.875 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.875 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.875 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:17.875 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.875 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.875 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.875 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:13:17.875 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.875 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.875 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.875 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:17.875 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.875 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.875 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.875 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:17.876 00:39:40 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:17.876 00:39:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:19.782 00:39:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:20.041 00:39:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:20.041 00:39:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:20.041 00:39:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:20.041 00:39:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:20.041 00:39:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:20.041 00:39:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:20.041 00:39:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:20.041 00:39:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:20.041 00:39:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:20.041 00:39:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:20.041 00:39:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:20.041 00:39:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:20.041 00:39:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:20.041 00:39:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:20.041 00:39:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:20.041 00:39:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:20.041 00:39:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:20.041 00:39:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:20.041 00:39:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:20.041 00:39:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:20.041 00:39:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:20.041 00:39:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:20.041 00:39:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:20.041 00:39:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:20.042 00:39:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:20.042 00:39:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:20.042 00:39:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:20.042 00:39:42 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:20.042 00:39:42 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:20.042 00:39:42 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:20.042 00:13:20.042 real 0m2.970s 00:13:20.042 user 0m2.602s 00:13:20.042 sys 0m0.297s 00:13:20.042 00:39:42 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:20.042 00:39:42 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:13:20.042 ************************************ 00:13:20.042 END TEST accel_decomp_full_mthread 00:13:20.042 ************************************ 
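For reference, the accel_decomp_full_mthread case above is a thin wrapper around the accel_perf example binary. A minimal hand-run sketch with the same flags (paths as they appear in this log; dropping the -c /dev/fd/62 config that the wrapper pipes in is an assumption here and should fall back to the default software modules):

    # 1-second multi-threaded software decompress of the bib test file, mirroring the harness flags
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2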
00:13:20.042 00:39:42 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:13:20.042 00:39:42 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:20.042 00:39:42 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:20.042 00:39:42 accel -- accel/accel.sh@137 -- # build_accel_config 00:13:20.042 00:39:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:20.042 00:39:42 accel -- common/autotest_common.sh@10 -- # set +x 00:13:20.042 00:39:42 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:20.042 00:39:42 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:20.042 00:39:42 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:20.042 00:39:42 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:20.042 00:39:42 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:20.042 00:39:42 accel -- accel/accel.sh@40 -- # local IFS=, 00:13:20.042 00:39:42 accel -- accel/accel.sh@41 -- # jq -r . 00:13:20.042 ************************************ 00:13:20.042 START TEST accel_dif_functional_tests 00:13:20.042 ************************************ 00:13:20.042 00:39:42 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:20.042 [2024-07-25 00:39:42.617479] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:13:20.042 [2024-07-25 00:39:42.617701] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117422 ] 00:13:20.301 [2024-07-25 00:39:42.808182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:20.561 [2024-07-25 00:39:43.100741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.561 [2024-07-25 00:39:43.100928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.561 [2024-07-25 00:39:43.101127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.129 00:13:21.129 00:13:21.129 CUnit - A unit testing framework for C - Version 2.1-3 00:13:21.129 http://cunit.sourceforge.net/ 00:13:21.129 00:13:21.129 00:13:21.129 Suite: accel_dif 00:13:21.129 Test: verify: DIF generated, GUARD check ...passed 00:13:21.129 Test: verify: DIF generated, APPTAG check ...passed 00:13:21.129 Test: verify: DIF generated, REFTAG check ...passed 00:13:21.129 Test: verify: DIF not generated, GUARD check ...[2024-07-25 00:39:43.546056] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:21.129 passed 00:13:21.129 Test: verify: DIF not generated, APPTAG check ...[2024-07-25 00:39:43.546555] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:21.129 passed 00:13:21.129 Test: verify: DIF not generated, REFTAG check ...[2024-07-25 00:39:43.546875] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:21.129 passed 00:13:21.129 Test: verify: APPTAG correct, APPTAG check ...passed 00:13:21.129 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-25 00:39:43.547290] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:13:21.129 passed 00:13:21.129 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:13:21.129 Test: verify: REFTAG incorrect, REFTAG 
ignore ...passed 00:13:21.129 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:13:21.129 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-25 00:39:43.547914] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:13:21.129 passed 00:13:21.129 Test: verify copy: DIF generated, GUARD check ...passed 00:13:21.129 Test: verify copy: DIF generated, APPTAG check ...passed 00:13:21.129 Test: verify copy: DIF generated, REFTAG check ...passed 00:13:21.129 Test: verify copy: DIF not generated, GUARD check ...[2024-07-25 00:39:43.548657] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:21.129 passed 00:13:21.129 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-25 00:39:43.548950] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:21.129 passed 00:13:21.129 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-25 00:39:43.549175] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:21.129 passed 00:13:21.129 Test: generate copy: DIF generated, GUARD check ...passed 00:13:21.129 Test: generate copy: DIF generated, APTTAG check ...passed 00:13:21.129 Test: generate copy: DIF generated, REFTAG check ...passed 00:13:21.129 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:13:21.129 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:13:21.129 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:13:21.129 Test: generate copy: iovecs-len validate ...[2024-07-25 00:39:43.550646] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:13:21.129 passed 00:13:21.129 Test: generate copy: buffer alignment validate ...passed 00:13:21.129 00:13:21.129 Run Summary: Type Total Ran Passed Failed Inactive 00:13:21.129 suites 1 1 n/a 0 0 00:13:21.129 tests 26 26 26 0 0 00:13:21.129 asserts 115 115 115 0 n/a 00:13:21.129 00:13:21.129 Elapsed time = 0.017 seconds 00:13:22.505 00:13:22.505 real 0m2.586s 00:13:22.505 user 0m5.069s 00:13:22.505 sys 0m0.364s 00:13:22.505 00:39:45 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:22.505 ************************************ 00:13:22.505 END TEST accel_dif_functional_tests 00:13:22.505 ************************************ 00:13:22.505 00:39:45 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:13:22.505 ************************************ 00:13:22.505 END TEST accel 00:13:22.505 ************************************ 00:13:22.505 00:13:22.505 real 1m11.153s 00:13:22.505 user 1m16.617s 00:13:22.505 sys 0m7.999s 00:13:22.505 00:39:45 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:22.505 00:39:45 accel -- common/autotest_common.sh@10 -- # set +x 00:13:22.763 00:39:45 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:13:22.763 00:39:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:22.763 00:39:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:22.763 00:39:45 -- common/autotest_common.sh@10 -- # set +x 00:13:22.763 ************************************ 00:13:22.763 START TEST accel_rpc 00:13:22.763 ************************************ 00:13:22.764 00:39:45 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:13:22.764 * Looking for test storage... 00:13:22.764 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:13:22.764 00:39:45 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:22.764 00:39:45 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=117514 00:13:22.764 00:39:45 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 117514 00:13:22.764 00:39:45 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:13:22.764 00:39:45 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 117514 ']' 00:13:22.764 00:39:45 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.764 00:39:45 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:22.764 00:39:45 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.764 00:39:45 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:22.764 00:39:45 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.023 [2024-07-25 00:39:45.422943] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
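A note on the accel_dif_functional_tests output above: the dif.c *ERROR* lines (Failed to compare Guard / App Tag / Ref Tag, bounce_iovs size) are expected output from the negative-path cases, where the DIF checks are supposed to fail, and the run summary confirms all 26 tests passed. The suite is the standalone CUnit binary driven by accel.sh, roughly:

    # as invoked by the wrapper; the -c /dev/fd/62 accel JSON config is piped in by
    # build_accel_config, so a bare manual run needs its own config supplied there
    /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62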
00:13:23.023 [2024-07-25 00:39:45.423154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117514 ] 00:13:23.023 [2024-07-25 00:39:45.609238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.282 [2024-07-25 00:39:45.911427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.850 00:39:46 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:23.850 00:39:46 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:13:23.850 00:39:46 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:13:23.850 00:39:46 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:13:23.850 00:39:46 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:13:23.850 00:39:46 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:13:23.850 00:39:46 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:13:23.850 00:39:46 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:23.850 00:39:46 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:23.850 00:39:46 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.850 ************************************ 00:13:23.850 START TEST accel_assign_opcode 00:13:23.850 ************************************ 00:13:23.850 00:39:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:13:23.850 00:39:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:13:23.850 00:39:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.850 00:39:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:23.850 [2024-07-25 00:39:46.444339] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:13:23.850 00:39:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.850 00:39:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:13:23.850 00:39:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.850 00:39:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:23.850 [2024-07-25 00:39:46.456301] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:13:23.850 00:39:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.850 00:39:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:13:23.850 00:39:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.850 00:39:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:24.788 00:39:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.789 00:39:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:13:24.789 00:39:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:13:24.789 00:39:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.789 00:39:47 accel_rpc.accel_assign_opcode 
-- common/autotest_common.sh@10 -- # set +x 00:13:24.789 00:39:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:13:24.789 00:39:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.789 software 00:13:24.789 00:13:24.789 real 0m0.996s 00:13:24.789 user 0m0.052s 00:13:24.789 sys 0m0.013s 00:13:24.789 00:39:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:24.789 00:39:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:24.789 ************************************ 00:13:24.789 END TEST accel_assign_opcode 00:13:24.789 ************************************ 00:13:25.048 00:39:47 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 117514 00:13:25.048 00:39:47 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 117514 ']' 00:13:25.048 00:39:47 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 117514 00:13:25.048 00:39:47 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:13:25.048 00:39:47 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:25.048 00:39:47 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 117514 00:13:25.048 00:39:47 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:25.048 killing process with pid 117514 00:13:25.048 00:39:47 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:25.048 00:39:47 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 117514' 00:13:25.048 00:39:47 accel_rpc -- common/autotest_common.sh@967 -- # kill 117514 00:13:25.048 00:39:47 accel_rpc -- common/autotest_common.sh@972 -- # wait 117514 00:13:28.338 ************************************ 00:13:28.338 END TEST accel_rpc 00:13:28.338 ************************************ 00:13:28.338 00:13:28.338 real 0m5.261s 00:13:28.338 user 0m5.130s 00:13:28.338 sys 0m0.766s 00:13:28.338 00:39:50 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:28.338 00:39:50 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.338 00:39:50 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:28.338 00:39:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:28.338 00:39:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:28.338 00:39:50 -- common/autotest_common.sh@10 -- # set +x 00:13:28.338 ************************************ 00:13:28.338 START TEST app_cmdline 00:13:28.338 ************************************ 00:13:28.338 00:39:50 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:28.338 * Looking for test storage... 
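The accel_assign_opcode case above exercises the accel_assign_opc RPC against a target started with --wait-for-rpc (first with a bogus module name, then with "software") and verifies the assignment after framework_start_init. A sketch of an equivalent hand-run sequence using scripts/rpc.py, with the same RPC names and flags as logged:

    # assign the copy opcode to the software module before subsystem init, then verify the assignment
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy
    # expected output: software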
00:13:28.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:28.338 00:39:50 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:13:28.338 00:39:50 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=117669 00:13:28.338 00:39:50 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 117669 00:13:28.338 00:39:50 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:13:28.338 00:39:50 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 117669 ']' 00:13:28.338 00:39:50 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.338 00:39:50 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:28.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.338 00:39:50 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.338 00:39:50 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:28.338 00:39:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:28.338 [2024-07-25 00:39:50.741200] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:13:28.338 [2024-07-25 00:39:50.741395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117669 ] 00:13:28.338 [2024-07-25 00:39:50.913675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.598 [2024-07-25 00:39:51.193480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.536 00:39:52 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:29.536 00:39:52 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:13:29.536 00:39:52 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:13:29.796 { 00:13:29.796 "version": "SPDK v24.09-pre git sha1 6e4acbb0d", 00:13:29.796 "fields": { 00:13:29.796 "major": 24, 00:13:29.796 "minor": 9, 00:13:29.796 "patch": 0, 00:13:29.796 "suffix": "-pre", 00:13:29.796 "commit": "6e4acbb0d" 00:13:29.796 } 00:13:29.796 } 00:13:29.796 00:39:52 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:13:29.796 00:39:52 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:13:29.796 00:39:52 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:13:29.796 00:39:52 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:13:29.796 00:39:52 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:13:29.796 00:39:52 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:13:29.796 00:39:52 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.796 00:39:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:29.796 00:39:52 app_cmdline -- app/cmdline.sh@26 -- # sort 00:13:29.796 00:39:52 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.796 00:39:52 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:13:29.796 00:39:52 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:13:29.796 00:39:52 app_cmdline 
-- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:29.796 00:39:52 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:13:29.796 00:39:52 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:29.796 00:39:52 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:29.796 00:39:52 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:29.796 00:39:52 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:29.796 00:39:52 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:29.796 00:39:52 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:29.796 00:39:52 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:29.796 00:39:52 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:29.796 00:39:52 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:29.796 00:39:52 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:30.055 request: 00:13:30.055 { 00:13:30.055 "method": "env_dpdk_get_mem_stats", 00:13:30.055 "req_id": 1 00:13:30.055 } 00:13:30.055 Got JSON-RPC error response 00:13:30.055 response: 00:13:30.055 { 00:13:30.055 "code": -32601, 00:13:30.055 "message": "Method not found" 00:13:30.055 } 00:13:30.055 00:39:52 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:13:30.055 00:39:52 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:30.055 00:39:52 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:30.055 00:39:52 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:30.055 00:39:52 app_cmdline -- app/cmdline.sh@1 -- # killprocess 117669 00:13:30.055 00:39:52 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 117669 ']' 00:13:30.055 00:39:52 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 117669 00:13:30.055 00:39:52 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:13:30.055 00:39:52 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:30.055 00:39:52 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 117669 00:13:30.055 00:39:52 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:30.055 00:39:52 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:30.055 killing process with pid 117669 00:13:30.055 00:39:52 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 117669' 00:13:30.055 00:39:52 app_cmdline -- common/autotest_common.sh@967 -- # kill 117669 00:13:30.055 00:39:52 app_cmdline -- common/autotest_common.sh@972 -- # wait 117669 00:13:33.352 00:13:33.353 real 0m5.142s 00:13:33.353 user 0m5.414s 00:13:33.353 sys 0m0.746s 00:13:33.353 00:39:55 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:33.353 ************************************ 00:13:33.353 END TEST app_cmdline 00:13:33.353 ************************************ 00:13:33.353 00:39:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:33.353 00:39:55 -- spdk/autotest.sh@186 -- # run_test version 
/home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:33.353 00:39:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:33.353 00:39:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:33.353 00:39:55 -- common/autotest_common.sh@10 -- # set +x 00:13:33.353 ************************************ 00:13:33.353 START TEST version 00:13:33.353 ************************************ 00:13:33.353 00:39:55 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:33.353 * Looking for test storage... 00:13:33.353 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:33.353 00:39:55 version -- app/version.sh@17 -- # get_header_version major 00:13:33.353 00:39:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:33.353 00:39:55 version -- app/version.sh@14 -- # cut -f2 00:13:33.353 00:39:55 version -- app/version.sh@14 -- # tr -d '"' 00:13:33.353 00:39:55 version -- app/version.sh@17 -- # major=24 00:13:33.353 00:39:55 version -- app/version.sh@18 -- # get_header_version minor 00:13:33.353 00:39:55 version -- app/version.sh@14 -- # cut -f2 00:13:33.353 00:39:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:33.353 00:39:55 version -- app/version.sh@14 -- # tr -d '"' 00:13:33.353 00:39:55 version -- app/version.sh@18 -- # minor=9 00:13:33.353 00:39:55 version -- app/version.sh@19 -- # get_header_version patch 00:13:33.353 00:39:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:33.353 00:39:55 version -- app/version.sh@14 -- # cut -f2 00:13:33.353 00:39:55 version -- app/version.sh@14 -- # tr -d '"' 00:13:33.353 00:39:55 version -- app/version.sh@19 -- # patch=0 00:13:33.353 00:39:55 version -- app/version.sh@20 -- # get_header_version suffix 00:13:33.353 00:39:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:33.353 00:39:55 version -- app/version.sh@14 -- # tr -d '"' 00:13:33.353 00:39:55 version -- app/version.sh@14 -- # cut -f2 00:13:33.353 00:39:55 version -- app/version.sh@20 -- # suffix=-pre 00:13:33.353 00:39:55 version -- app/version.sh@22 -- # version=24.9 00:13:33.353 00:39:55 version -- app/version.sh@25 -- # (( patch != 0 )) 00:13:33.353 00:39:55 version -- app/version.sh@28 -- # version=24.9rc0 00:13:33.353 00:39:55 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:33.353 00:39:55 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:13:33.353 00:39:55 version -- app/version.sh@30 -- # py_version=24.9rc0 00:13:33.353 00:39:55 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:13:33.353 00:13:33.353 real 0m0.193s 00:13:33.353 user 0m0.090s 00:13:33.353 sys 0m0.151s 00:13:33.353 00:39:55 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:33.353 ************************************ 00:13:33.353 END TEST version 00:13:33.353 00:39:55 version -- common/autotest_common.sh@10 -- # set +x 00:13:33.353 ************************************ 00:13:33.353 00:39:55 -- spdk/autotest.sh@188 
-- # '[' 1 -eq 1 ']' 00:13:33.353 00:39:55 -- spdk/autotest.sh@189 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:13:33.353 00:39:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:33.353 00:39:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:33.353 00:39:55 -- common/autotest_common.sh@10 -- # set +x 00:13:33.612 ************************************ 00:13:33.612 START TEST blockdev_general 00:13:33.612 ************************************ 00:13:33.612 00:39:56 blockdev_general -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:13:33.612 * Looking for test storage... 00:13:33.612 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:33.612 00:39:56 blockdev_general -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:33.612 00:39:56 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:13:33.612 00:39:56 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:33.612 00:39:56 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:33.612 00:39:56 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:33.612 00:39:56 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:33.612 00:39:56 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:33.612 00:39:56 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:33.612 00:39:56 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:13:33.612 00:39:56 blockdev_general -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:13:33.612 00:39:56 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:13:33.612 00:39:56 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:13:33.612 00:39:56 blockdev_general -- bdev/blockdev.sh@673 -- # uname -s 00:13:33.612 00:39:56 blockdev_general -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:13:33.612 00:39:56 blockdev_general -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:13:33.612 00:39:56 blockdev_general -- bdev/blockdev.sh@681 -- # test_type=bdev 00:13:33.612 00:39:56 blockdev_general -- bdev/blockdev.sh@682 -- # crypto_device= 00:13:33.612 00:39:56 blockdev_general -- bdev/blockdev.sh@683 -- # dek= 00:13:33.612 00:39:56 blockdev_general -- bdev/blockdev.sh@684 -- # env_ctx= 00:13:33.612 00:39:56 blockdev_general -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:13:33.612 00:39:56 blockdev_general -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:13:33.612 00:39:56 blockdev_general -- bdev/blockdev.sh@689 -- # [[ bdev == bdev ]] 00:13:33.612 00:39:56 blockdev_general -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:13:33.612 00:39:56 blockdev_general -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:13:33.612 00:39:56 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=117864 00:13:33.612 00:39:56 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:33.612 00:39:56 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 117864 00:13:33.612 00:39:56 blockdev_general -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:13:33.612 00:39:56 blockdev_general -- common/autotest_common.sh@829 -- # '[' -z 117864 ']' 00:13:33.612 00:39:56 blockdev_general -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.612 00:39:56 blockdev_general -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:33.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.612 00:39:56 blockdev_general -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.612 00:39:56 blockdev_general -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:33.612 00:39:56 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:33.612 [2024-07-25 00:39:56.236119] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:13:33.612 [2024-07-25 00:39:56.236376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117864 ] 00:13:33.871 [2024-07-25 00:39:56.424816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.129 [2024-07-25 00:39:56.754534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.695 00:39:57 blockdev_general -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:34.695 00:39:57 blockdev_general -- common/autotest_common.sh@862 -- # return 0 00:13:34.695 00:39:57 blockdev_general -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:13:34.695 00:39:57 blockdev_general -- bdev/blockdev.sh@695 -- # setup_bdev_conf 00:13:34.695 00:39:57 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:13:34.695 00:39:57 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.695 00:39:57 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:36.069 [2024-07-25 00:39:58.331030] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:36.069 [2024-07-25 00:39:58.331194] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:36.069 00:13:36.069 [2024-07-25 00:39:58.338996] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:36.069 [2024-07-25 00:39:58.339061] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:36.069 00:13:36.069 Malloc0 00:13:36.069 Malloc1 00:13:36.069 Malloc2 00:13:36.069 Malloc3 00:13:36.069 Malloc4 00:13:36.069 Malloc5 00:13:36.327 Malloc6 00:13:36.327 Malloc7 00:13:36.327 Malloc8 00:13:36.327 Malloc9 00:13:36.327 [2024-07-25 00:39:58.899731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:36.327 [2024-07-25 00:39:58.899857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.327 [2024-07-25 00:39:58.899906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:36.327 [2024-07-25 00:39:58.899985] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.327 [2024-07-25 00:39:58.903091] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.327 [2024-07-25 00:39:58.903165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:36.327 TestPT 00:13:36.327 00:39:58 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.327 00:39:58 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero 
of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:13:36.586 5000+0 records in 00:13:36.586 5000+0 records out 00:13:36.586 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0343712 s, 298 MB/s 00:13:36.586 00:39:58 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:13:36.586 00:39:58 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.586 00:39:58 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:36.586 AIO0 00:13:36.586 00:39:59 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.586 00:39:59 blockdev_general -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:13:36.586 00:39:59 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.586 00:39:59 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:36.586 00:39:59 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.586 00:39:59 blockdev_general -- bdev/blockdev.sh@739 -- # cat 00:13:36.586 00:39:59 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:13:36.586 00:39:59 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.586 00:39:59 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:36.586 00:39:59 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.586 00:39:59 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:13:36.586 00:39:59 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.586 00:39:59 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:36.586 00:39:59 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.586 00:39:59 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:36.586 00:39:59 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.586 00:39:59 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:36.586 00:39:59 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.586 00:39:59 blockdev_general -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:13:36.586 00:39:59 blockdev_general -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:13:36.586 00:39:59 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.586 00:39:59 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:36.586 00:39:59 blockdev_general -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:13:36.586 00:39:59 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.846 00:39:59 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:13:36.846 00:39:59 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r .name 00:13:36.847 00:39:59 blockdev_general -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "99111a81-7ddf-4a3c-b43c-14cc76d2a669"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "99111a81-7ddf-4a3c-b43c-14cc76d2a669",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' 
"nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "8a914ad3-f0c5-57b6-92c9-b59e212e34ed"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "8a914ad3-f0c5-57b6-92c9-b59e212e34ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "d3d85ed9-95a5-585c-a9c9-74446b461d2d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "d3d85ed9-95a5-585c-a9c9-74446b461d2d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "2c9af94f-eb35-5243-be03-d03df185e573"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2c9af94f-eb35-5243-be03-d03df185e573",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "50641a2f-c3ed-5896-869c-1c582344f85b"' ' ],' ' "product_name": "Split Disk",' 
' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "50641a2f-c3ed-5896-869c-1c582344f85b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "8eedd088-cf59-50a5-a5c9-456c62d94edd"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8eedd088-cf59-50a5-a5c9-456c62d94edd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "c4f40913-4490-574d-97c0-6ddbb4682087"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c4f40913-4490-574d-97c0-6ddbb4682087",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "7a991fb7-d70e-501d-989e-9ea68f953ed6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7a991fb7-d70e-501d-989e-9ea68f953ed6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": 
false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "339cc1ec-0b15-51fd-9ab1-3736793224cc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "339cc1ec-0b15-51fd-9ab1-3736793224cc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "a298d54d-c0c6-556e-a356-c56904c82bca"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a298d54d-c0c6-556e-a356-c56904c82bca",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "28ffd764-71b6-5844-8330-10b328e7ea73"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "28ffd764-71b6-5844-8330-10b328e7ea73",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "d6b7c723-30a8-51b5-997b-6f2e41a80b15"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d6b7c723-30a8-51b5-997b-6f2e41a80b15",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "656ed422-5158-4d11-9df2-fedbefd37638"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "656ed422-5158-4d11-9df2-fedbefd37638",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "656ed422-5158-4d11-9df2-fedbefd37638",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "eb959b9d-7d5a-4524-866e-376fceea1221",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "8a3ebfc3-5e6a-45ed-909b-85cca8a41453",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "0c99bb12-beb1-4319-8b92-8e190fe77031"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "0c99bb12-beb1-4319-8b92-8e190fe77031",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "0c99bb12-beb1-4319-8b92-8e190fe77031",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "7dabdf71-fec0-49a3-a9f0-56996efcbd05",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "6d8feb96-4fd9-4b9c-a15a-82352374e35e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "6791834a-bbcc-48b1-8047-42fb6ad87a55"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "6791834a-bbcc-48b1-8047-42fb6ad87a55",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "6791834a-bbcc-48b1-8047-42fb6ad87a55",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "972215ae-6c02-45dd-89f6-befabbef6398",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "3780acce-abd8-4db9-9595-58308e0ba922",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "95067689-d695-47e7-b140-512dcda8de2e"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "95067689-d695-47e7-b140-512dcda8de2e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' 
' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:13:36.847 00:39:59 blockdev_general -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:13:36.847 00:39:59 blockdev_general -- bdev/blockdev.sh@751 -- # hello_world_bdev=Malloc0 00:13:36.847 00:39:59 blockdev_general -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:13:36.847 00:39:59 blockdev_general -- bdev/blockdev.sh@753 -- # killprocess 117864 00:13:36.847 00:39:59 blockdev_general -- common/autotest_common.sh@948 -- # '[' -z 117864 ']' 00:13:36.847 00:39:59 blockdev_general -- common/autotest_common.sh@952 -- # kill -0 117864 00:13:36.847 00:39:59 blockdev_general -- common/autotest_common.sh@953 -- # uname 00:13:36.847 00:39:59 blockdev_general -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:36.847 00:39:59 blockdev_general -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 117864 00:13:36.847 00:39:59 blockdev_general -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:36.847 killing process with pid 117864 00:13:36.847 00:39:59 blockdev_general -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:36.847 00:39:59 blockdev_general -- common/autotest_common.sh@966 -- # echo 'killing process with pid 117864' 00:13:36.847 00:39:59 blockdev_general -- common/autotest_common.sh@967 -- # kill 117864 00:13:36.847 00:39:59 blockdev_general -- common/autotest_common.sh@972 -- # wait 117864 00:13:41.038 00:40:03 blockdev_general -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:41.038 00:40:03 blockdev_general -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:13:41.038 00:40:03 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:13:41.038 00:40:03 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:41.038 00:40:03 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:41.038 ************************************ 00:13:41.038 START TEST bdev_hello_world 00:13:41.038 ************************************ 00:13:41.038 00:40:03 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:13:41.326 [2024-07-25 00:40:03.769576] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:13:41.326 [2024-07-25 00:40:03.769803] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117975 ] 00:13:41.326 [2024-07-25 00:40:03.951155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.618 [2024-07-25 00:40:04.222961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.188 [2024-07-25 00:40:04.700796] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:42.188 [2024-07-25 00:40:04.700908] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:42.188 [2024-07-25 00:40:04.708719] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:42.188 [2024-07-25 00:40:04.708775] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:42.188 [2024-07-25 00:40:04.716738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:42.188 [2024-07-25 00:40:04.716819] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:42.188 [2024-07-25 00:40:04.716870] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:42.447 [2024-07-25 00:40:04.953712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:42.447 [2024-07-25 00:40:04.953842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.447 [2024-07-25 00:40:04.953889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:42.447 [2024-07-25 00:40:04.953924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.447 [2024-07-25 00:40:04.956727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.447 [2024-07-25 00:40:04.956785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:42.706 [2024-07-25 00:40:05.327093] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:42.706 [2024-07-25 00:40:05.327200] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:13:42.706 [2024-07-25 00:40:05.327300] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:42.706 [2024-07-25 00:40:05.327409] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:42.706 [2024-07-25 00:40:05.327547] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:42.706 [2024-07-25 00:40:05.327581] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:42.706 [2024-07-25 00:40:05.327643] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
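The notices above trace the full hello_bdev flow against the Malloc0 bdev: open the bdev, get an I/O channel, write a buffer, read it back, and print the string that comes back. A minimal sketch of the equivalent standalone invocation, assuming the same workspace layout and the bdev.json generated earlier in this run, is:

  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -b Malloc0
  # -b names the bdev the example opens; the harness passes Malloc0 here
  # (blockdev.sh sets hello_world_bdev=Malloc0 before calling run_test).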
00:13:42.706 00:13:42.706 [2024-07-25 00:40:05.327700] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:45.240 00:13:45.240 real 0m4.213s 00:13:45.240 user 0m3.533s 00:13:45.240 sys 0m0.528s 00:13:45.240 00:40:07 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:45.240 00:40:07 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:45.240 ************************************ 00:13:45.240 END TEST bdev_hello_world 00:13:45.240 ************************************ 00:13:45.499 00:40:07 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:45.499 00:40:07 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:45.499 00:40:07 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:45.499 00:40:07 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:45.499 ************************************ 00:13:45.499 START TEST bdev_bounds 00:13:45.499 ************************************ 00:13:45.499 00:40:07 blockdev_general.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:13:45.499 00:40:07 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=118049 00:13:45.499 00:40:07 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:45.499 00:40:07 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 118049' 00:13:45.499 Process bdevio pid: 118049 00:13:45.499 00:40:07 blockdev_general.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:45.499 00:40:07 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 118049 00:13:45.499 00:40:07 blockdev_general.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 118049 ']' 00:13:45.499 00:40:07 blockdev_general.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.499 00:40:07 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:45.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.499 00:40:07 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.500 00:40:07 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:45.500 00:40:07 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:45.500 [2024-07-25 00:40:08.030664] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:13:45.500 [2024-07-25 00:40:08.030860] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118049 ] 00:13:45.759 [2024-07-25 00:40:08.206368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:46.033 [2024-07-25 00:40:08.469780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.033 [2024-07-25 00:40:08.469934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:46.033 [2024-07-25 00:40:08.469940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.294 [2024-07-25 00:40:08.921202] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:46.294 [2024-07-25 00:40:08.921326] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:46.294 [2024-07-25 00:40:08.929097] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:46.294 [2024-07-25 00:40:08.929159] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:46.294 [2024-07-25 00:40:08.937128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:46.294 [2024-07-25 00:40:08.937209] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:46.295 [2024-07-25 00:40:08.937236] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:46.553 [2024-07-25 00:40:09.168167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:46.553 [2024-07-25 00:40:09.168284] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.553 [2024-07-25 00:40:09.168330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:46.553 [2024-07-25 00:40:09.168355] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.554 [2024-07-25 00:40:09.171571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.554 [2024-07-25 00:40:09.171642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:47.122 00:40:09 blockdev_general.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:47.122 00:40:09 blockdev_general.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:13:47.122 00:40:09 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:47.122 I/O targets: 00:13:47.122 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:13:47.122 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:13:47.122 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:13:47.122 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:13:47.122 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:13:47.122 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:13:47.122 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:13:47.122 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:13:47.122 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:13:47.122 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:13:47.122 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:13:47.122 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:13:47.122 raid0: 131072 blocks of 512 bytes (64 MiB) 00:13:47.122 concat0: 131072 blocks of 512 bytes (64 MiB) 
00:13:47.122 raid1: 65536 blocks of 512 bytes (32 MiB) 00:13:47.122 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:13:47.122 00:13:47.122 00:13:47.122 CUnit - A unit testing framework for C - Version 2.1-3 00:13:47.122 http://cunit.sourceforge.net/ 00:13:47.122 00:13:47.122 00:13:47.122 Suite: bdevio tests on: AIO0 00:13:47.122 Test: blockdev write read block ...passed 00:13:47.122 Test: blockdev write zeroes read block ...passed 00:13:47.122 Test: blockdev write zeroes read no split ...passed 00:13:47.122 Test: blockdev write zeroes read split ...passed 00:13:47.122 Test: blockdev write zeroes read split partial ...passed 00:13:47.122 Test: blockdev reset ...passed 00:13:47.122 Test: blockdev write read 8 blocks ...passed 00:13:47.122 Test: blockdev write read size > 128k ...passed 00:13:47.122 Test: blockdev write read invalid size ...passed 00:13:47.122 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:47.122 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:47.122 Test: blockdev write read max offset ...passed 00:13:47.122 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:47.122 Test: blockdev writev readv 8 blocks ...passed 00:13:47.122 Test: blockdev writev readv 30 x 1block ...passed 00:13:47.122 Test: blockdev writev readv block ...passed 00:13:47.122 Test: blockdev writev readv size > 128k ...passed 00:13:47.122 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:47.122 Test: blockdev comparev and writev ...passed 00:13:47.122 Test: blockdev nvme passthru rw ...passed 00:13:47.122 Test: blockdev nvme passthru vendor specific ...passed 00:13:47.122 Test: blockdev nvme admin passthru ...passed 00:13:47.122 Test: blockdev copy ...passed 00:13:47.122 Suite: bdevio tests on: raid1 00:13:47.122 Test: blockdev write read block ...passed 00:13:47.122 Test: blockdev write zeroes read block ...passed 00:13:47.122 Test: blockdev write zeroes read no split ...passed 00:13:47.381 Test: blockdev write zeroes read split ...passed 00:13:47.381 Test: blockdev write zeroes read split partial ...passed 00:13:47.381 Test: blockdev reset ...passed 00:13:47.382 Test: blockdev write read 8 blocks ...passed 00:13:47.382 Test: blockdev write read size > 128k ...passed 00:13:47.382 Test: blockdev write read invalid size ...passed 00:13:47.382 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:47.382 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:47.382 Test: blockdev write read max offset ...passed 00:13:47.382 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:47.382 Test: blockdev writev readv 8 blocks ...passed 00:13:47.382 Test: blockdev writev readv 30 x 1block ...passed 00:13:47.382 Test: blockdev writev readv block ...passed 00:13:47.382 Test: blockdev writev readv size > 128k ...passed 00:13:47.382 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:47.382 Test: blockdev comparev and writev ...passed 00:13:47.382 Test: blockdev nvme passthru rw ...passed 00:13:47.382 Test: blockdev nvme passthru vendor specific ...passed 00:13:47.382 Test: blockdev nvme admin passthru ...passed 00:13:47.382 Test: blockdev copy ...passed 00:13:47.382 Suite: bdevio tests on: concat0 00:13:47.382 Test: blockdev write read block ...passed 00:13:47.382 Test: blockdev write zeroes read block ...passed 00:13:47.382 Test: blockdev write zeroes read no split ...passed 00:13:47.382 Test: blockdev write zeroes read split 
...passed 00:13:47.382 Test: blockdev write zeroes read split partial ...passed 00:13:47.382 Test: blockdev reset ...passed 00:13:47.382 Test: blockdev write read 8 blocks ...passed 00:13:47.382 Test: blockdev write read size > 128k ...passed 00:13:47.382 Test: blockdev write read invalid size ...passed 00:13:47.382 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:47.382 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:47.382 Test: blockdev write read max offset ...passed 00:13:47.382 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:47.382 Test: blockdev writev readv 8 blocks ...passed 00:13:47.382 Test: blockdev writev readv 30 x 1block ...passed 00:13:47.382 Test: blockdev writev readv block ...passed 00:13:47.382 Test: blockdev writev readv size > 128k ...passed 00:13:47.382 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:47.382 Test: blockdev comparev and writev ...passed 00:13:47.382 Test: blockdev nvme passthru rw ...passed 00:13:47.382 Test: blockdev nvme passthru vendor specific ...passed 00:13:47.382 Test: blockdev nvme admin passthru ...passed 00:13:47.382 Test: blockdev copy ...passed 00:13:47.382 Suite: bdevio tests on: raid0 00:13:47.382 Test: blockdev write read block ...passed 00:13:47.382 Test: blockdev write zeroes read block ...passed 00:13:47.382 Test: blockdev write zeroes read no split ...passed 00:13:47.382 Test: blockdev write zeroes read split ...passed 00:13:47.382 Test: blockdev write zeroes read split partial ...passed 00:13:47.382 Test: blockdev reset ...passed 00:13:47.382 Test: blockdev write read 8 blocks ...passed 00:13:47.382 Test: blockdev write read size > 128k ...passed 00:13:47.382 Test: blockdev write read invalid size ...passed 00:13:47.382 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:47.382 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:47.382 Test: blockdev write read max offset ...passed 00:13:47.382 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:47.382 Test: blockdev writev readv 8 blocks ...passed 00:13:47.382 Test: blockdev writev readv 30 x 1block ...passed 00:13:47.382 Test: blockdev writev readv block ...passed 00:13:47.382 Test: blockdev writev readv size > 128k ...passed 00:13:47.382 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:47.382 Test: blockdev comparev and writev ...passed 00:13:47.382 Test: blockdev nvme passthru rw ...passed 00:13:47.382 Test: blockdev nvme passthru vendor specific ...passed 00:13:47.382 Test: blockdev nvme admin passthru ...passed 00:13:47.382 Test: blockdev copy ...passed 00:13:47.382 Suite: bdevio tests on: TestPT 00:13:47.382 Test: blockdev write read block ...passed 00:13:47.382 Test: blockdev write zeroes read block ...passed 00:13:47.382 Test: blockdev write zeroes read no split ...passed 00:13:47.382 Test: blockdev write zeroes read split ...passed 00:13:47.382 Test: blockdev write zeroes read split partial ...passed 00:13:47.382 Test: blockdev reset ...passed 00:13:47.641 Test: blockdev write read 8 blocks ...passed 00:13:47.641 Test: blockdev write read size > 128k ...passed 00:13:47.641 Test: blockdev write read invalid size ...passed 00:13:47.641 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:47.641 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:47.641 Test: blockdev write read max offset ...passed 00:13:47.641 Test: 
blockdev write read 2 blocks on overlapped address offset ...passed 00:13:47.641 Test: blockdev writev readv 8 blocks ...passed 00:13:47.641 Test: blockdev writev readv 30 x 1block ...passed 00:13:47.641 Test: blockdev writev readv block ...passed 00:13:47.641 Test: blockdev writev readv size > 128k ...passed 00:13:47.641 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:47.641 Test: blockdev comparev and writev ...passed 00:13:47.641 Test: blockdev nvme passthru rw ...passed 00:13:47.641 Test: blockdev nvme passthru vendor specific ...passed 00:13:47.641 Test: blockdev nvme admin passthru ...passed 00:13:47.641 Test: blockdev copy ...passed 00:13:47.641 Suite: bdevio tests on: Malloc2p7 00:13:47.641 Test: blockdev write read block ...passed 00:13:47.641 Test: blockdev write zeroes read block ...passed 00:13:47.641 Test: blockdev write zeroes read no split ...passed 00:13:47.641 Test: blockdev write zeroes read split ...passed 00:13:47.641 Test: blockdev write zeroes read split partial ...passed 00:13:47.641 Test: blockdev reset ...passed 00:13:47.641 Test: blockdev write read 8 blocks ...passed 00:13:47.641 Test: blockdev write read size > 128k ...passed 00:13:47.641 Test: blockdev write read invalid size ...passed 00:13:47.641 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:47.641 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:47.641 Test: blockdev write read max offset ...passed 00:13:47.641 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:47.641 Test: blockdev writev readv 8 blocks ...passed 00:13:47.641 Test: blockdev writev readv 30 x 1block ...passed 00:13:47.641 Test: blockdev writev readv block ...passed 00:13:47.641 Test: blockdev writev readv size > 128k ...passed 00:13:47.641 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:47.641 Test: blockdev comparev and writev ...passed 00:13:47.641 Test: blockdev nvme passthru rw ...passed 00:13:47.641 Test: blockdev nvme passthru vendor specific ...passed 00:13:47.641 Test: blockdev nvme admin passthru ...passed 00:13:47.641 Test: blockdev copy ...passed 00:13:47.641 Suite: bdevio tests on: Malloc2p6 00:13:47.641 Test: blockdev write read block ...passed 00:13:47.641 Test: blockdev write zeroes read block ...passed 00:13:47.641 Test: blockdev write zeroes read no split ...passed 00:13:47.641 Test: blockdev write zeroes read split ...passed 00:13:47.641 Test: blockdev write zeroes read split partial ...passed 00:13:47.641 Test: blockdev reset ...passed 00:13:47.641 Test: blockdev write read 8 blocks ...passed 00:13:47.641 Test: blockdev write read size > 128k ...passed 00:13:47.641 Test: blockdev write read invalid size ...passed 00:13:47.641 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:47.641 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:47.641 Test: blockdev write read max offset ...passed 00:13:47.641 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:47.641 Test: blockdev writev readv 8 blocks ...passed 00:13:47.641 Test: blockdev writev readv 30 x 1block ...passed 00:13:47.641 Test: blockdev writev readv block ...passed 00:13:47.641 Test: blockdev writev readv size > 128k ...passed 00:13:47.641 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:47.642 Test: blockdev comparev and writev ...passed 00:13:47.642 Test: blockdev nvme passthru rw ...passed 00:13:47.642 Test: blockdev nvme passthru vendor 
specific ...passed 00:13:47.642 Test: blockdev nvme admin passthru ...passed 00:13:47.642 Test: blockdev copy ...passed 00:13:47.642 Suite: bdevio tests on: Malloc2p5 00:13:47.642 Test: blockdev write read block ...passed 00:13:47.642 Test: blockdev write zeroes read block ...passed 00:13:47.642 Test: blockdev write zeroes read no split ...passed 00:13:47.642 Test: blockdev write zeroes read split ...passed 00:13:47.642 Test: blockdev write zeroes read split partial ...passed 00:13:47.642 Test: blockdev reset ...passed 00:13:47.642 Test: blockdev write read 8 blocks ...passed 00:13:47.642 Test: blockdev write read size > 128k ...passed 00:13:47.642 Test: blockdev write read invalid size ...passed 00:13:47.642 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:47.642 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:47.642 Test: blockdev write read max offset ...passed 00:13:47.642 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:47.642 Test: blockdev writev readv 8 blocks ...passed 00:13:47.642 Test: blockdev writev readv 30 x 1block ...passed 00:13:47.642 Test: blockdev writev readv block ...passed 00:13:47.642 Test: blockdev writev readv size > 128k ...passed 00:13:47.642 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:47.642 Test: blockdev comparev and writev ...passed 00:13:47.642 Test: blockdev nvme passthru rw ...passed 00:13:47.642 Test: blockdev nvme passthru vendor specific ...passed 00:13:47.642 Test: blockdev nvme admin passthru ...passed 00:13:47.642 Test: blockdev copy ...passed 00:13:47.642 Suite: bdevio tests on: Malloc2p4 00:13:47.642 Test: blockdev write read block ...passed 00:13:47.642 Test: blockdev write zeroes read block ...passed 00:13:47.642 Test: blockdev write zeroes read no split ...passed 00:13:47.642 Test: blockdev write zeroes read split ...passed 00:13:47.906 Test: blockdev write zeroes read split partial ...passed 00:13:47.906 Test: blockdev reset ...passed 00:13:47.906 Test: blockdev write read 8 blocks ...passed 00:13:47.906 Test: blockdev write read size > 128k ...passed 00:13:47.906 Test: blockdev write read invalid size ...passed 00:13:47.906 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:47.906 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:47.906 Test: blockdev write read max offset ...passed 00:13:47.906 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:47.906 Test: blockdev writev readv 8 blocks ...passed 00:13:47.906 Test: blockdev writev readv 30 x 1block ...passed 00:13:47.906 Test: blockdev writev readv block ...passed 00:13:47.906 Test: blockdev writev readv size > 128k ...passed 00:13:47.906 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:47.906 Test: blockdev comparev and writev ...passed 00:13:47.906 Test: blockdev nvme passthru rw ...passed 00:13:47.906 Test: blockdev nvme passthru vendor specific ...passed 00:13:47.906 Test: blockdev nvme admin passthru ...passed 00:13:47.906 Test: blockdev copy ...passed 00:13:47.906 Suite: bdevio tests on: Malloc2p3 00:13:47.906 Test: blockdev write read block ...passed 00:13:47.906 Test: blockdev write zeroes read block ...passed 00:13:47.906 Test: blockdev write zeroes read no split ...passed 00:13:47.906 Test: blockdev write zeroes read split ...passed 00:13:47.906 Test: blockdev write zeroes read split partial ...passed 00:13:47.906 Test: blockdev reset ...passed 00:13:47.906 Test: 
blockdev write read 8 blocks ...passed 00:13:47.906 Test: blockdev write read size > 128k ...passed 00:13:47.906 Test: blockdev write read invalid size ...passed 00:13:47.907 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:47.907 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:47.907 Test: blockdev write read max offset ...passed 00:13:47.907 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:47.907 Test: blockdev writev readv 8 blocks ...passed 00:13:47.907 Test: blockdev writev readv 30 x 1block ...passed 00:13:47.907 Test: blockdev writev readv block ...passed 00:13:47.907 Test: blockdev writev readv size > 128k ...passed 00:13:47.907 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:47.907 Test: blockdev comparev and writev ...passed 00:13:47.907 Test: blockdev nvme passthru rw ...passed 00:13:47.907 Test: blockdev nvme passthru vendor specific ...passed 00:13:47.907 Test: blockdev nvme admin passthru ...passed 00:13:47.907 Test: blockdev copy ...passed 00:13:47.907 Suite: bdevio tests on: Malloc2p2 00:13:47.907 Test: blockdev write read block ...passed 00:13:47.907 Test: blockdev write zeroes read block ...passed 00:13:47.907 Test: blockdev write zeroes read no split ...passed 00:13:47.907 Test: blockdev write zeroes read split ...passed 00:13:47.907 Test: blockdev write zeroes read split partial ...passed 00:13:47.907 Test: blockdev reset ...passed 00:13:47.907 Test: blockdev write read 8 blocks ...passed 00:13:47.907 Test: blockdev write read size > 128k ...passed 00:13:47.907 Test: blockdev write read invalid size ...passed 00:13:47.907 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:47.907 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:47.907 Test: blockdev write read max offset ...passed 00:13:47.907 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:47.907 Test: blockdev writev readv 8 blocks ...passed 00:13:47.907 Test: blockdev writev readv 30 x 1block ...passed 00:13:47.907 Test: blockdev writev readv block ...passed 00:13:47.907 Test: blockdev writev readv size > 128k ...passed 00:13:47.907 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:47.907 Test: blockdev comparev and writev ...passed 00:13:47.907 Test: blockdev nvme passthru rw ...passed 00:13:47.907 Test: blockdev nvme passthru vendor specific ...passed 00:13:47.907 Test: blockdev nvme admin passthru ...passed 00:13:47.907 Test: blockdev copy ...passed 00:13:47.907 Suite: bdevio tests on: Malloc2p1 00:13:47.907 Test: blockdev write read block ...passed 00:13:47.907 Test: blockdev write zeroes read block ...passed 00:13:47.907 Test: blockdev write zeroes read no split ...passed 00:13:47.907 Test: blockdev write zeroes read split ...passed 00:13:47.907 Test: blockdev write zeroes read split partial ...passed 00:13:47.907 Test: blockdev reset ...passed 00:13:47.907 Test: blockdev write read 8 blocks ...passed 00:13:47.907 Test: blockdev write read size > 128k ...passed 00:13:47.907 Test: blockdev write read invalid size ...passed 00:13:47.907 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:47.907 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:47.907 Test: blockdev write read max offset ...passed 00:13:47.907 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:47.907 Test: blockdev writev readv 8 blocks ...passed 00:13:47.907 
Test: blockdev writev readv 30 x 1block ...passed 00:13:47.907 Test: blockdev writev readv block ...passed 00:13:47.907 Test: blockdev writev readv size > 128k ...passed 00:13:47.907 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:47.907 Test: blockdev comparev and writev ...passed 00:13:47.907 Test: blockdev nvme passthru rw ...passed 00:13:47.907 Test: blockdev nvme passthru vendor specific ...passed 00:13:47.907 Test: blockdev nvme admin passthru ...passed 00:13:47.907 Test: blockdev copy ...passed 00:13:47.907 Suite: bdevio tests on: Malloc2p0 00:13:47.907 Test: blockdev write read block ...passed 00:13:47.907 Test: blockdev write zeroes read block ...passed 00:13:47.907 Test: blockdev write zeroes read no split ...passed 00:13:47.907 Test: blockdev write zeroes read split ...passed 00:13:48.165 Test: blockdev write zeroes read split partial ...passed 00:13:48.165 Test: blockdev reset ...passed 00:13:48.165 Test: blockdev write read 8 blocks ...passed 00:13:48.165 Test: blockdev write read size > 128k ...passed 00:13:48.165 Test: blockdev write read invalid size ...passed 00:13:48.165 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:48.165 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:48.165 Test: blockdev write read max offset ...passed 00:13:48.165 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:48.165 Test: blockdev writev readv 8 blocks ...passed 00:13:48.165 Test: blockdev writev readv 30 x 1block ...passed 00:13:48.165 Test: blockdev writev readv block ...passed 00:13:48.165 Test: blockdev writev readv size > 128k ...passed 00:13:48.165 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:48.165 Test: blockdev comparev and writev ...passed 00:13:48.165 Test: blockdev nvme passthru rw ...passed 00:13:48.165 Test: blockdev nvme passthru vendor specific ...passed 00:13:48.165 Test: blockdev nvme admin passthru ...passed 00:13:48.165 Test: blockdev copy ...passed 00:13:48.165 Suite: bdevio tests on: Malloc1p1 00:13:48.165 Test: blockdev write read block ...passed 00:13:48.165 Test: blockdev write zeroes read block ...passed 00:13:48.165 Test: blockdev write zeroes read no split ...passed 00:13:48.165 Test: blockdev write zeroes read split ...passed 00:13:48.165 Test: blockdev write zeroes read split partial ...passed 00:13:48.165 Test: blockdev reset ...passed 00:13:48.165 Test: blockdev write read 8 blocks ...passed 00:13:48.165 Test: blockdev write read size > 128k ...passed 00:13:48.165 Test: blockdev write read invalid size ...passed 00:13:48.165 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:48.165 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:48.165 Test: blockdev write read max offset ...passed 00:13:48.165 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:48.165 Test: blockdev writev readv 8 blocks ...passed 00:13:48.165 Test: blockdev writev readv 30 x 1block ...passed 00:13:48.165 Test: blockdev writev readv block ...passed 00:13:48.165 Test: blockdev writev readv size > 128k ...passed 00:13:48.165 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:48.165 Test: blockdev comparev and writev ...passed 00:13:48.165 Test: blockdev nvme passthru rw ...passed 00:13:48.165 Test: blockdev nvme passthru vendor specific ...passed 00:13:48.165 Test: blockdev nvme admin passthru ...passed 00:13:48.165 Test: blockdev copy ...passed 00:13:48.165 Suite: 
bdevio tests on: Malloc1p0 00:13:48.165 Test: blockdev write read block ...passed 00:13:48.165 Test: blockdev write zeroes read block ...passed 00:13:48.165 Test: blockdev write zeroes read no split ...passed 00:13:48.165 Test: blockdev write zeroes read split ...passed 00:13:48.165 Test: blockdev write zeroes read split partial ...passed 00:13:48.165 Test: blockdev reset ...passed 00:13:48.165 Test: blockdev write read 8 blocks ...passed 00:13:48.165 Test: blockdev write read size > 128k ...passed 00:13:48.165 Test: blockdev write read invalid size ...passed 00:13:48.165 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:48.165 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:48.165 Test: blockdev write read max offset ...passed 00:13:48.165 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:48.165 Test: blockdev writev readv 8 blocks ...passed 00:13:48.165 Test: blockdev writev readv 30 x 1block ...passed 00:13:48.165 Test: blockdev writev readv block ...passed 00:13:48.165 Test: blockdev writev readv size > 128k ...passed 00:13:48.165 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:48.165 Test: blockdev comparev and writev ...passed 00:13:48.165 Test: blockdev nvme passthru rw ...passed 00:13:48.165 Test: blockdev nvme passthru vendor specific ...passed 00:13:48.165 Test: blockdev nvme admin passthru ...passed 00:13:48.165 Test: blockdev copy ...passed 00:13:48.165 Suite: bdevio tests on: Malloc0 00:13:48.165 Test: blockdev write read block ...passed 00:13:48.165 Test: blockdev write zeroes read block ...passed 00:13:48.165 Test: blockdev write zeroes read no split ...passed 00:13:48.165 Test: blockdev write zeroes read split ...passed 00:13:48.165 Test: blockdev write zeroes read split partial ...passed 00:13:48.165 Test: blockdev reset ...passed 00:13:48.165 Test: blockdev write read 8 blocks ...passed 00:13:48.165 Test: blockdev write read size > 128k ...passed 00:13:48.165 Test: blockdev write read invalid size ...passed 00:13:48.165 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:48.165 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:48.165 Test: blockdev write read max offset ...passed 00:13:48.165 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:48.165 Test: blockdev writev readv 8 blocks ...passed 00:13:48.165 Test: blockdev writev readv 30 x 1block ...passed 00:13:48.165 Test: blockdev writev readv block ...passed 00:13:48.165 Test: blockdev writev readv size > 128k ...passed 00:13:48.165 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:48.165 Test: blockdev comparev and writev ...passed 00:13:48.165 Test: blockdev nvme passthru rw ...passed 00:13:48.165 Test: blockdev nvme passthru vendor specific ...passed 00:13:48.165 Test: blockdev nvme admin passthru ...passed 00:13:48.165 Test: blockdev copy ...passed 00:13:48.165 00:13:48.165 Run Summary: Type Total Ran Passed Failed Inactive 00:13:48.165 suites 16 16 n/a 0 0 00:13:48.165 tests 368 368 368 0 0 00:13:48.165 asserts 2224 2224 2224 0 n/a 00:13:48.165 00:13:48.165 Elapsed time = 3.275 seconds 00:13:48.424 0 00:13:48.424 00:40:10 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 118049 00:13:48.424 00:40:10 blockdev_general.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 118049 ']' 00:13:48.424 00:40:10 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 118049 
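The CUnit run summary above (16 suites, 368 tests, 0 failures) comes from the bdevio application started earlier with -w -s 0 against the same bdev.json and then driven over the default RPC socket by tests.py. Condensed from the trace, the two-step flow is roughly:

  # start bdevio in wait mode; the suites are triggered over RPC rather than at startup
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  # once /var/tmp/spdk.sock is listening, kick off every suite shown above
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests

Each registered bdev gets its own suite, which is why the raid, concat, passthru (TestPT), split (Malloc2pX) and AIO bdevs all run the identical list of I/O tests.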
00:13:48.424 00:40:10 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:13:48.424 00:40:10 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:48.424 00:40:10 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 118049 00:13:48.424 00:40:10 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:48.424 killing process with pid 118049 00:13:48.424 00:40:10 blockdev_general.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:48.424 00:40:10 blockdev_general.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 118049' 00:13:48.424 00:40:10 blockdev_general.bdev_bounds -- common/autotest_common.sh@967 -- # kill 118049 00:13:48.424 00:40:10 blockdev_general.bdev_bounds -- common/autotest_common.sh@972 -- # wait 118049 00:13:50.955 00:40:13 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:50.955 00:13:50.955 real 0m5.265s 00:13:50.955 user 0m13.064s 00:13:50.955 sys 0m0.782s 00:13:50.955 00:40:13 blockdev_general.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:50.955 00:40:13 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:50.955 ************************************ 00:13:50.955 END TEST bdev_bounds 00:13:50.955 ************************************ 00:13:50.955 00:40:13 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:13:50.955 00:40:13 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:50.955 00:40:13 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:50.955 00:40:13 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:50.955 ************************************ 00:13:50.955 START TEST bdev_nbd 00:13:50.955 ************************************ 00:13:50.955 00:40:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:13:50.955 00:40:13 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:50.955 00:40:13 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:13:50.955 00:40:13 blockdev_general.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:50.955 00:40:13 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:50.955 00:40:13 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:13:50.955 00:40:13 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:50.955 00:40:13 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=16 00:13:50.955 00:40:13 blockdev_general.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:50.955 00:40:13 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:50.955 00:40:13 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:50.955 00:40:13 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=16 00:13:50.955 00:40:13 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:50.955 00:40:13 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:50.955 00:40:13 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:13:50.955 00:40:13 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:50.955 00:40:13 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=118157 00:13:50.955 00:40:13 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:50.955 00:40:13 blockdev_general.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:50.955 00:40:13 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 118157 /var/tmp/spdk-nbd.sock 00:13:50.955 00:40:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 118157 ']' 00:13:50.955 00:40:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:50.955 00:40:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:50.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:50.955 00:40:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:50.955 00:40:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:50.955 00:40:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:50.955 [2024-07-25 00:40:13.412913] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:13:50.955 [2024-07-25 00:40:13.413072] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.955 [2024-07-25 00:40:13.577575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.213 [2024-07-25 00:40:13.848576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.780 [2024-07-25 00:40:14.320380] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:51.780 [2024-07-25 00:40:14.320486] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:51.780 [2024-07-25 00:40:14.328309] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:51.780 [2024-07-25 00:40:14.328361] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:51.780 [2024-07-25 00:40:14.336353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:51.780 [2024-07-25 00:40:14.336427] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:51.780 [2024-07-25 00:40:14.336463] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:52.039 [2024-07-25 00:40:14.570853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:52.039 [2024-07-25 00:40:14.570971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.039 [2024-07-25 00:40:14.571045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:52.039 [2024-07-25 00:40:14.571098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.039 [2024-07-25 00:40:14.573870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.039 [2024-07-25 00:40:14.573941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:52.606 00:40:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:52.606 00:40:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:13:52.606 00:40:14 blockdev_general.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:13:52.606 00:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:52.606 00:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:13:52.606 00:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:52.606 00:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:13:52.606 00:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:52.606 00:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # 
bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:13:52.606 00:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:52.606 00:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:52.606 00:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:52.606 00:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:52.606 00:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:52.606 00:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:13:52.606 00:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:52.606 00:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:52.606 00:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:52.606 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:13:52.606 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:52.606 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:52.606 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:52.606 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:13:52.606 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:52.606 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:52.606 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:52.606 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:52.606 1+0 records in 00:13:52.606 1+0 records out 00:13:52.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038923 s, 10.5 MB/s 00:13:52.606 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:52.865 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:52.865 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:52.865 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:52.865 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:52.865 00:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:52.865 00:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:52.865 00:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:13:53.124 00:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:53.124 00:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:53.124 00:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:53.124 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 
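Every device in this loop repeats the cycle just completed for /dev/nbd0: attach the bdev to an NBD node through the dedicated /var/tmp/spdk-nbd.sock RPC socket, wait for the node to appear in /proc/partitions, then read a single 4 KiB block with dd and check the copied size. Condensed from the shell trace, the per-bdev steps look roughly like:

  # attach the bdev; rpc.py prints the NBD node it allocated (captured here as nbd_device)
  nbd_device=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
      nbd_start_disk Malloc0)
  # wait for the kernel to expose the node (the harness retries this up to 20 times)
  grep -q -w "$(basename "$nbd_device")" /proc/partitions
  # read one direct-I/O block through the node and verify that 4096 bytes landed
  dd if="$nbd_device" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
  stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest   # expected: 4096
  rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest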
00:13:53.124 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:53.124 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:53.124 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:53.124 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:13:53.124 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:53.124 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:53.124 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:53.124 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:53.124 1+0 records in 00:13:53.124 1+0 records out 00:13:53.124 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406534 s, 10.1 MB/s 00:13:53.125 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.125 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:53.125 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.125 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:53.125 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:53.125 00:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:53.125 00:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:53.125 00:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:13:53.406 00:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:53.406 00:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:53.406 00:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:53.406 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:13:53.406 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:53.406 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:53.406 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:53.406 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:13:53.406 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:53.406 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:53.406 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:53.406 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:53.406 1+0 records in 00:13:53.406 1+0 records out 00:13:53.406 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000592112 s, 6.9 MB/s 00:13:53.406 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.406 00:40:15 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@884 -- # size=4096 00:13:53.406 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.406 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:53.406 00:40:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:53.406 00:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:53.406 00:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:53.406 00:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:13:53.670 00:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:53.670 00:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:53.670 00:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:53.670 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:13:53.670 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:53.670 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:53.670 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:53.670 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:13:53.670 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:53.670 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:53.670 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:53.670 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:53.670 1+0 records in 00:13:53.670 1+0 records out 00:13:53.670 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454615 s, 9.0 MB/s 00:13:53.670 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.670 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:53.670 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.670 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:53.670 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:53.670 00:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:53.670 00:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:53.670 00:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:13:53.949 00:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:53.949 00:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:53.949 00:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:53.949 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:13:53.949 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 
-- # local i 00:13:53.949 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:53.949 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:53.949 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:13:53.949 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:53.949 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:53.949 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:53.949 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:53.949 1+0 records in 00:13:53.949 1+0 records out 00:13:53.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385898 s, 10.6 MB/s 00:13:53.949 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.949 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:53.949 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.949 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:53.950 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:53.950 00:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:53.950 00:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:53.950 00:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:13:54.209 00:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:54.209 00:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:54.209 00:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:54.209 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:13:54.209 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:54.209 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:54.209 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:54.209 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:13:54.209 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:54.209 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:54.209 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:54.209 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:54.209 1+0 records in 00:13:54.209 1+0 records out 00:13:54.209 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469418 s, 8.7 MB/s 00:13:54.209 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.209 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:54.209 00:40:16 
blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.209 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:54.209 00:40:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:54.209 00:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:54.209 00:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:54.209 00:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:13:54.467 00:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:13:54.467 00:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:13:54.467 00:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:13:54.467 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:13:54.467 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:54.467 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:54.467 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:54.467 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:13:54.467 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:54.467 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:54.467 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:54.467 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:54.467 1+0 records in 00:13:54.467 1+0 records out 00:13:54.467 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440883 s, 9.3 MB/s 00:13:54.467 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.467 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:54.467 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.467 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:54.467 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:54.467 00:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:54.467 00:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:54.468 00:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:13:54.725 00:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:13:54.725 00:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:13:54.725 00:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:13:54.725 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:13:54.725 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:54.725 00:40:17 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:54.725 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:54.725 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:13:54.725 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:54.726 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:54.726 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:54.726 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:54.726 1+0 records in 00:13:54.726 1+0 records out 00:13:54.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000544567 s, 7.5 MB/s 00:13:54.726 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.726 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:54.726 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.726 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:54.726 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:54.726 00:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:54.726 00:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:54.726 00:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:13:54.985 00:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:13:54.985 00:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:13:54.985 00:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:13:54.985 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:13:54.985 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:54.985 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:54.985 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:54.985 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:13:54.985 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:54.985 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:54.985 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:54.985 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:54.985 1+0 records in 00:13:54.985 1+0 records out 00:13:54.985 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000689097 s, 5.9 MB/s 00:13:54.985 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.985 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:54.985 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.985 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:54.985 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:54.985 00:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:54.985 00:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:54.985 00:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:13:55.551 00:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:13:55.551 00:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:13:55.551 00:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:13:55.551 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:13:55.551 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:55.551 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:55.551 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:55.551 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:13:55.551 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:55.551 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:55.551 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:55.551 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:55.551 1+0 records in 00:13:55.551 1+0 records out 00:13:55.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618244 s, 6.6 MB/s 00:13:55.551 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.551 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:55.551 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.551 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:55.551 00:40:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:55.551 00:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:55.551 00:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:55.551 00:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:13:55.810 00:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:13:55.810 00:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:13:55.810 00:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:13:55.810 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:13:55.810 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:55.810 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:55.810 00:40:18 
blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:55.810 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:13:55.810 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:55.810 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:55.810 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:55.810 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:55.810 1+0 records in 00:13:55.810 1+0 records out 00:13:55.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000620719 s, 6.6 MB/s 00:13:55.810 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.810 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:55.810 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.810 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:55.810 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:55.810 00:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:55.811 00:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:55.811 00:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:13:56.070 00:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:13:56.070 00:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:13:56.070 00:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:13:56.070 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:13:56.070 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:56.070 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:56.070 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:56.070 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:13:56.070 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:56.070 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:56.070 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:56.070 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:56.070 1+0 records in 00:13:56.070 1+0 records out 00:13:56.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000585857 s, 7.0 MB/s 00:13:56.070 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.070 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:56.070 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.070 00:40:18 
blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:56.070 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:56.070 00:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:56.070 00:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:56.070 00:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:13:56.330 00:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:13:56.330 00:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:13:56.330 00:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:13:56.330 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:13:56.330 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:56.330 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:56.330 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:56.330 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:13:56.330 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:56.330 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:56.330 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:56.330 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:56.330 1+0 records in 00:13:56.330 1+0 records out 00:13:56.330 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000753487 s, 5.4 MB/s 00:13:56.330 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.330 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:56.330 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.330 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:56.330 00:40:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:56.330 00:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:56.330 00:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:56.330 00:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:13:56.590 00:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:13:56.590 00:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:13:56.590 00:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:13:56.590 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:13:56.590 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:56.590 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:56.590 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i 
<= 20 )) 00:13:56.590 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:13:56.590 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:56.590 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:56.590 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:56.590 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:56.590 1+0 records in 00:13:56.590 1+0 records out 00:13:56.590 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00183706 s, 2.2 MB/s 00:13:56.590 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.590 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:56.590 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.590 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:56.590 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:56.590 00:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:56.590 00:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:56.590 00:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:13:56.849 00:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:13:56.850 00:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:13:56.850 00:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:13:56.850 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:13:56.850 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:56.850 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:56.850 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:56.850 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:13:56.850 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:56.850 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:56.850 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:56.850 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:56.850 1+0 records in 00:13:56.850 1+0 records out 00:13:56.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00084 s, 4.9 MB/s 00:13:56.850 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.850 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:56.850 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.850 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 
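
Once the sixteenth bdev (AIO0 on /dev/nbd15, just below) is mapped and verified, the trace moves to teardown: nbd_get_disks returns the JSON device-to-bdev map, each /dev/nbdX is stopped and waited on with waitfornbd_exit, and nbd_get_count must then report zero devices before nbd_rpc_data_verify re-maps the same bdevs onto an explicitly ordered device list for the data pass. A condensed sketch of that teardown, pieced together from the nbd_common.sh@118-@127, @49-@55, @35-@45 and @61-@66 entries that follow; retry delays and error handling are assumptions, and rpc.py again stands in for the full scripts/rpc.py -s /var/tmp/spdk-nbd.sock call:

    # per the @35-@45 entries
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break   # device gone: stop waiting
            sleep 0.1                                          # assumption: retry delay is not logged
        done
    }

    # list the current mappings and extract the /dev/nbdX names (nbd_common.sh@118-@119)
    nbd_disks_json=$(rpc.py nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')

    # stop each device and wait until it leaves /proc/partitions (@53-@55)
    for nbd in $nbd_disks_name; do
        rpc.py nbd_stop_disk "$nbd"
        waitfornbd_exit "$(basename "$nbd")"
    done

    # nbd_get_count (@61-@66): a fresh nbd_get_disks must now yield no /dev/nbd entries
    count=$(rpc.py nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]
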
00:13:56.850 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:56.850 00:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:56.850 00:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:56.850 00:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:13:57.109 00:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:13:57.109 00:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:13:57.109 00:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:13:57.109 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:13:57.109 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:57.109 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:57.109 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:57.109 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:13:57.109 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:57.109 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:57.109 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:57.109 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:57.109 1+0 records in 00:13:57.109 1+0 records out 00:13:57.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120248 s, 3.4 MB/s 00:13:57.109 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.109 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:57.109 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.109 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:57.109 00:40:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:57.109 00:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:57.109 00:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:57.109 00:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:57.369 00:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:57.369 { 00:13:57.369 "nbd_device": "/dev/nbd0", 00:13:57.369 "bdev_name": "Malloc0" 00:13:57.369 }, 00:13:57.369 { 00:13:57.369 "nbd_device": "/dev/nbd1", 00:13:57.369 "bdev_name": "Malloc1p0" 00:13:57.369 }, 00:13:57.369 { 00:13:57.369 "nbd_device": "/dev/nbd2", 00:13:57.369 "bdev_name": "Malloc1p1" 00:13:57.369 }, 00:13:57.369 { 00:13:57.369 "nbd_device": "/dev/nbd3", 00:13:57.369 "bdev_name": "Malloc2p0" 00:13:57.369 }, 00:13:57.369 { 00:13:57.369 "nbd_device": "/dev/nbd4", 00:13:57.369 "bdev_name": "Malloc2p1" 00:13:57.369 }, 00:13:57.369 { 00:13:57.369 "nbd_device": "/dev/nbd5", 00:13:57.369 "bdev_name": "Malloc2p2" 00:13:57.369 }, 00:13:57.369 { 00:13:57.369 
"nbd_device": "/dev/nbd6", 00:13:57.369 "bdev_name": "Malloc2p3" 00:13:57.369 }, 00:13:57.369 { 00:13:57.369 "nbd_device": "/dev/nbd7", 00:13:57.369 "bdev_name": "Malloc2p4" 00:13:57.369 }, 00:13:57.369 { 00:13:57.369 "nbd_device": "/dev/nbd8", 00:13:57.369 "bdev_name": "Malloc2p5" 00:13:57.369 }, 00:13:57.369 { 00:13:57.369 "nbd_device": "/dev/nbd9", 00:13:57.369 "bdev_name": "Malloc2p6" 00:13:57.369 }, 00:13:57.369 { 00:13:57.369 "nbd_device": "/dev/nbd10", 00:13:57.369 "bdev_name": "Malloc2p7" 00:13:57.369 }, 00:13:57.369 { 00:13:57.369 "nbd_device": "/dev/nbd11", 00:13:57.369 "bdev_name": "TestPT" 00:13:57.369 }, 00:13:57.369 { 00:13:57.369 "nbd_device": "/dev/nbd12", 00:13:57.369 "bdev_name": "raid0" 00:13:57.369 }, 00:13:57.369 { 00:13:57.369 "nbd_device": "/dev/nbd13", 00:13:57.369 "bdev_name": "concat0" 00:13:57.369 }, 00:13:57.369 { 00:13:57.369 "nbd_device": "/dev/nbd14", 00:13:57.369 "bdev_name": "raid1" 00:13:57.369 }, 00:13:57.369 { 00:13:57.369 "nbd_device": "/dev/nbd15", 00:13:57.369 "bdev_name": "AIO0" 00:13:57.369 } 00:13:57.369 ]' 00:13:57.369 00:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:57.369 00:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:57.369 { 00:13:57.369 "nbd_device": "/dev/nbd0", 00:13:57.369 "bdev_name": "Malloc0" 00:13:57.369 }, 00:13:57.369 { 00:13:57.369 "nbd_device": "/dev/nbd1", 00:13:57.369 "bdev_name": "Malloc1p0" 00:13:57.369 }, 00:13:57.369 { 00:13:57.369 "nbd_device": "/dev/nbd2", 00:13:57.369 "bdev_name": "Malloc1p1" 00:13:57.369 }, 00:13:57.369 { 00:13:57.369 "nbd_device": "/dev/nbd3", 00:13:57.369 "bdev_name": "Malloc2p0" 00:13:57.369 }, 00:13:57.369 { 00:13:57.369 "nbd_device": "/dev/nbd4", 00:13:57.369 "bdev_name": "Malloc2p1" 00:13:57.369 }, 00:13:57.369 { 00:13:57.369 "nbd_device": "/dev/nbd5", 00:13:57.369 "bdev_name": "Malloc2p2" 00:13:57.369 }, 00:13:57.369 { 00:13:57.369 "nbd_device": "/dev/nbd6", 00:13:57.369 "bdev_name": "Malloc2p3" 00:13:57.369 }, 00:13:57.369 { 00:13:57.369 "nbd_device": "/dev/nbd7", 00:13:57.369 "bdev_name": "Malloc2p4" 00:13:57.369 }, 00:13:57.369 { 00:13:57.369 "nbd_device": "/dev/nbd8", 00:13:57.369 "bdev_name": "Malloc2p5" 00:13:57.369 }, 00:13:57.369 { 00:13:57.369 "nbd_device": "/dev/nbd9", 00:13:57.369 "bdev_name": "Malloc2p6" 00:13:57.369 }, 00:13:57.369 { 00:13:57.369 "nbd_device": "/dev/nbd10", 00:13:57.369 "bdev_name": "Malloc2p7" 00:13:57.369 }, 00:13:57.369 { 00:13:57.369 "nbd_device": "/dev/nbd11", 00:13:57.369 "bdev_name": "TestPT" 00:13:57.369 }, 00:13:57.369 { 00:13:57.369 "nbd_device": "/dev/nbd12", 00:13:57.369 "bdev_name": "raid0" 00:13:57.369 }, 00:13:57.369 { 00:13:57.369 "nbd_device": "/dev/nbd13", 00:13:57.369 "bdev_name": "concat0" 00:13:57.369 }, 00:13:57.369 { 00:13:57.369 "nbd_device": "/dev/nbd14", 00:13:57.369 "bdev_name": "raid1" 00:13:57.369 }, 00:13:57.369 { 00:13:57.369 "nbd_device": "/dev/nbd15", 00:13:57.369 "bdev_name": "AIO0" 00:13:57.369 } 00:13:57.369 ]' 00:13:57.369 00:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:57.369 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:13:57.369 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:13:57.369 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:13:57.369 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:57.369 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:57.369 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.369 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:57.628 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:57.628 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:57.628 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:57.628 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:57.628 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:57.628 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:57.628 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:57.628 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:57.628 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.628 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:57.887 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:57.887 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:57.887 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:57.887 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:57.887 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:57.887 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:57.887 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:57.887 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:57.887 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.887 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:58.146 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:58.146 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:58.146 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:58.146 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:58.146 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:58.146 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:58.146 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:58.146 00:40:20 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:58.146 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:58.146 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:58.405 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:58.405 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:58.405 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:58.405 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:58.405 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:58.405 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:58.405 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:58.405 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:58.405 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:58.405 00:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:58.664 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:58.664 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:58.664 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:58.664 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:58.664 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:58.664 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:58.664 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:58.664 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:58.664 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:58.664 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:58.931 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:58.931 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:58.931 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:58.931 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:58.931 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:58.931 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:58.931 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:58.931 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:58.931 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:58.931 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:13:59.205 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:13:59.205 
00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:13:59.205 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:13:59.205 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:59.205 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:59.205 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:13:59.205 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:59.205 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:59.205 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:59.205 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:13:59.465 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:13:59.465 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:13:59.465 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:13:59.465 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:59.465 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:59.465 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:13:59.465 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:59.465 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:59.465 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:59.465 00:40:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:13:59.724 00:40:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:13:59.724 00:40:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:13:59.724 00:40:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:13:59.724 00:40:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:59.724 00:40:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:59.724 00:40:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:13:59.724 00:40:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:59.724 00:40:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:59.724 00:40:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:59.724 00:40:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:13:59.983 00:40:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:13:59.983 00:40:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:13:59.983 00:40:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:13:59.983 00:40:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:59.983 00:40:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:59.983 00:40:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q 
-w nbd9 /proc/partitions 00:13:59.983 00:40:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:59.983 00:40:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:59.983 00:40:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:59.983 00:40:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:14:00.240 00:40:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:00.240 00:40:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:00.240 00:40:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:00.240 00:40:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:00.240 00:40:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:00.240 00:40:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:00.240 00:40:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:00.240 00:40:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:00.240 00:40:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:00.240 00:40:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:00.498 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:00.498 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:00.498 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:00.498 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:00.498 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:00.498 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:00.498 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:00.498 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:00.498 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:00.498 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:00.757 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:14:00.757 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:00.757 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:00.757 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:00.757 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:00.757 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:00.757 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:00.757 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:00.757 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:00.757 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:01.015 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:01.015 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:01.015 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:01.015 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.015 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.015 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:01.015 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:01.015 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.015 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.015 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:14:01.272 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:14:01.272 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:14:01.272 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:14:01.272 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.272 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.272 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:14:01.272 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:01.272 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.272 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.272 00:40:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:14:01.530 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:14:01.530 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:14:01.530 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:14:01.530 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.530 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.530 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:14:01.530 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:01.530 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.530 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:01.530 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:01.530 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:01.789 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:01.789 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:01.789 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r 
'.[] | .nbd_device' 00:14:01.789 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:01.789 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:01.789 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:01.789 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:01.789 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:01.789 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:01.789 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:14:01.789 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:14:01.789 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:14:01.789 00:40:24 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:14:01.789 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:01.789 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:14:01.789 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:01.789 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:01.789 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:01.789 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:14:01.789 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:01.789 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:14:01.789 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:01.789 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:01.789 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:01.789 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:14:01.789 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 
0 )) 00:14:01.789 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:01.789 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:14:02.048 /dev/nbd0 00:14:02.048 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:02.048 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:02.048 00:40:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:14:02.048 00:40:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:02.048 00:40:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:02.048 00:40:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:02.048 00:40:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:14:02.048 00:40:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:02.048 00:40:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:02.048 00:40:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:02.048 00:40:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:02.048 1+0 records in 00:14:02.048 1+0 records out 00:14:02.048 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430213 s, 9.5 MB/s 00:14:02.048 00:40:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.048 00:40:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:02.048 00:40:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.048 00:40:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:02.048 00:40:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:02.048 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:02.048 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:02.048 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:14:02.307 /dev/nbd1 00:14:02.307 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:02.307 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:02.307 00:40:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:14:02.307 00:40:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:02.307 00:40:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:02.307 00:40:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:02.307 00:40:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:14:02.307 00:40:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:02.307 00:40:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:02.307 00:40:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 
-- # (( i <= 20 )) 00:14:02.307 00:40:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:02.307 1+0 records in 00:14:02.307 1+0 records out 00:14:02.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472662 s, 8.7 MB/s 00:14:02.307 00:40:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.307 00:40:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:02.307 00:40:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.307 00:40:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:02.307 00:40:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:02.307 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:02.307 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:02.307 00:40:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:14:02.565 /dev/nbd10 00:14:02.565 00:40:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:02.824 1+0 records in 00:14:02.824 1+0 records out 00:14:02.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000903176 s, 4.5 MB/s 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 
/dev/nbd11 00:14:02.824 /dev/nbd11 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:02.824 1+0 records in 00:14:02.824 1+0 records out 00:14:02.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000652189 s, 6.3 MB/s 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:02.824 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.083 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:03.083 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:03.083 00:40:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:03.083 00:40:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:03.083 00:40:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:14:03.342 /dev/nbd12 00:14:03.342 00:40:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:14:03.342 00:40:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:14:03.342 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:14:03.342 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:03.342 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:03.342 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:03.342 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:14:03.342 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:03.342 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:03.342 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:03.342 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:03.342 1+0 records in 00:14:03.342 1+0 records 
out 00:14:03.342 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460197 s, 8.9 MB/s 00:14:03.342 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.342 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:03.342 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.342 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:03.342 00:40:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:03.342 00:40:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:03.342 00:40:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:03.342 00:40:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:14:03.601 /dev/nbd13 00:14:03.601 00:40:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:14:03.601 00:40:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:14:03.601 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:14:03.601 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:03.601 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:03.601 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:03.601 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:14:03.601 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:03.601 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:03.601 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:03.601 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:03.601 1+0 records in 00:14:03.601 1+0 records out 00:14:03.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051103 s, 8.0 MB/s 00:14:03.601 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.601 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:03.601 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.601 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:03.601 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:03.601 00:40:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:03.601 00:40:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:03.601 00:40:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:14:03.859 /dev/nbd14 00:14:03.859 00:40:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:14:03.859 00:40:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:14:03.859 00:40:26 
blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:14:03.859 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:03.859 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:03.859 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:03.859 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:14:03.859 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:03.859 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:03.859 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:03.859 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:03.859 1+0 records in 00:14:03.859 1+0 records out 00:14:03.859 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000717615 s, 5.7 MB/s 00:14:03.859 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.859 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:03.859 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.859 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:03.859 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:03.859 00:40:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:03.860 00:40:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:03.860 00:40:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:14:04.117 /dev/nbd15 00:14:04.117 00:40:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:14:04.117 00:40:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:14:04.117 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:14:04.117 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:04.117 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:04.117 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:04.117 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:14:04.117 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:04.117 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:04.117 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:04.117 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:04.117 1+0 records in 00:14:04.117 1+0 records out 00:14:04.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000777181 s, 5.3 MB/s 00:14:04.117 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.117 00:40:26 
blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:04.117 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.117 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:04.117 00:40:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:04.117 00:40:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:04.117 00:40:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:04.117 00:40:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:14:04.375 /dev/nbd2 00:14:04.633 00:40:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:14:04.633 00:40:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:14:04.633 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:14:04.633 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:04.633 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:04.633 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:04.633 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:14:04.633 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:04.633 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:04.633 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:04.633 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:04.633 1+0 records in 00:14:04.633 1+0 records out 00:14:04.633 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105395 s, 3.9 MB/s 00:14:04.633 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.633 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:04.633 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.633 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:04.633 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:04.633 00:40:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:04.633 00:40:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:04.633 00:40:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:14:04.891 /dev/nbd3 00:14:04.891 00:40:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:14:04.891 00:40:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:14:04.891 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:14:04.891 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:04.891 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i 
= 1 )) 00:14:04.891 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:04.891 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:14:04.891 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:04.891 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:04.891 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:04.891 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:04.891 1+0 records in 00:14:04.891 1+0 records out 00:14:04.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000779441 s, 5.3 MB/s 00:14:04.892 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.892 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:04.892 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.892 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:04.892 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:04.892 00:40:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:04.892 00:40:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:04.892 00:40:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:14:05.150 /dev/nbd4 00:14:05.150 00:40:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:14:05.150 00:40:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:14:05.150 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:14:05.150 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:05.150 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:05.150 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:05.150 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:14:05.150 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:05.150 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:05.150 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:05.150 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:05.150 1+0 records in 00:14:05.150 1+0 records out 00:14:05.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000496905 s, 8.2 MB/s 00:14:05.150 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.150 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:05.150 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.150 00:40:27 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:05.150 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:05.150 00:40:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:05.150 00:40:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:05.150 00:40:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:14:05.409 /dev/nbd5 00:14:05.409 00:40:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:14:05.409 00:40:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:14:05.409 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:14:05.409 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:05.409 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:05.409 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:05.409 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:14:05.409 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:05.409 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:05.409 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:05.409 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:05.409 1+0 records in 00:14:05.409 1+0 records out 00:14:05.409 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000681802 s, 6.0 MB/s 00:14:05.409 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.409 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:05.409 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.409 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:05.409 00:40:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:05.409 00:40:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:05.409 00:40:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:05.409 00:40:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:14:05.667 /dev/nbd6 00:14:05.667 00:40:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:14:05.667 00:40:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:14:05.667 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:14:05.667 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:05.667 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:05.667 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:05.667 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:14:05.667 00:40:28 
blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:05.667 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:05.667 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:05.667 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:05.667 1+0 records in 00:14:05.667 1+0 records out 00:14:05.667 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000552482 s, 7.4 MB/s 00:14:05.667 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.667 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:05.667 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.667 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:05.667 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:05.667 00:40:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:05.667 00:40:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:05.667 00:40:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:14:05.934 /dev/nbd7 00:14:05.934 00:40:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:14:05.934 00:40:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:14:05.934 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:14:05.934 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:05.934 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:05.934 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:05.934 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:14:05.934 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:05.934 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:05.934 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:05.934 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:05.934 1+0 records in 00:14:05.934 1+0 records out 00:14:05.934 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00067391 s, 6.1 MB/s 00:14:05.934 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.934 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:05.934 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.934 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:05.934 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:05.934 00:40:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:05.934 00:40:28 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:05.934 00:40:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:14:06.231 /dev/nbd8 00:14:06.231 00:40:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:14:06.231 00:40:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:14:06.231 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:14:06.231 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:06.231 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:06.231 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:06.231 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:14:06.231 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:06.231 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:06.231 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:06.231 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.231 1+0 records in 00:14:06.231 1+0 records out 00:14:06.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000660928 s, 6.2 MB/s 00:14:06.231 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.231 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:06.231 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.231 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:06.231 00:40:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:06.231 00:40:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:06.231 00:40:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:06.231 00:40:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:14:06.489 /dev/nbd9 00:14:06.489 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:14:06.489 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:14:06.489 00:40:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:14:06.489 00:40:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:06.489 00:40:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:06.489 00:40:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:06.489 00:40:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:14:06.489 00:40:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:06.489 00:40:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:06.489 00:40:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:06.489 
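Note on the repeated blocks above: nbd_common.sh is starting one NBD export per bdev. The nbd_start_disk RPC binds a bdev to a /dev/nbdX node, waitfornbd polls /proc/partitions until that node shows up, and a single 4 KiB direct read confirms the device answers I/O. A minimal stand-alone sketch of that pattern, reusing the rpc.py path, socket, and Malloc0 bdev name from the trace (the helper name, the retry pause, and writing to /dev/null instead of the nbdtest temp file are illustrative simplifications, not the script's actual code):

# Sketch: export a bdev over NBD and wait for the kernel device node.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

start_and_wait() {                      # illustrative helper, not from the trace
    local bdev=$1 dev=$2 name=${2#/dev/} i
    "$rpc" -s "$sock" nbd_start_disk "$bdev" "$dev"
    for (( i = 1; i <= 20; i++ )); do   # same 20-try bound as waitfornbd
        grep -q -w "$name" /proc/partitions && break
        sleep 0.1                       # assumed pause between polls
    done
    dd if="$dev" of=/dev/null bs=4096 count=1 iflag=direct   # 4 KiB sanity read
}

start_and_wait Malloc0 /dev/nbd0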
00:40:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.489 1+0 records in 00:14:06.489 1+0 records out 00:14:06.489 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00141065 s, 2.9 MB/s 00:14:06.489 00:40:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.489 00:40:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:06.489 00:40:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.489 00:40:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:06.489 00:40:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:06.489 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:06.489 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:06.489 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:06.489 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:06.489 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:07.057 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:07.057 { 00:14:07.057 "nbd_device": "/dev/nbd0", 00:14:07.057 "bdev_name": "Malloc0" 00:14:07.057 }, 00:14:07.057 { 00:14:07.057 "nbd_device": "/dev/nbd1", 00:14:07.057 "bdev_name": "Malloc1p0" 00:14:07.057 }, 00:14:07.057 { 00:14:07.057 "nbd_device": "/dev/nbd10", 00:14:07.057 "bdev_name": "Malloc1p1" 00:14:07.057 }, 00:14:07.057 { 00:14:07.057 "nbd_device": "/dev/nbd11", 00:14:07.057 "bdev_name": "Malloc2p0" 00:14:07.057 }, 00:14:07.057 { 00:14:07.057 "nbd_device": "/dev/nbd12", 00:14:07.057 "bdev_name": "Malloc2p1" 00:14:07.057 }, 00:14:07.057 { 00:14:07.057 "nbd_device": "/dev/nbd13", 00:14:07.057 "bdev_name": "Malloc2p2" 00:14:07.057 }, 00:14:07.057 { 00:14:07.057 "nbd_device": "/dev/nbd14", 00:14:07.058 "bdev_name": "Malloc2p3" 00:14:07.058 }, 00:14:07.058 { 00:14:07.058 "nbd_device": "/dev/nbd15", 00:14:07.058 "bdev_name": "Malloc2p4" 00:14:07.058 }, 00:14:07.058 { 00:14:07.058 "nbd_device": "/dev/nbd2", 00:14:07.058 "bdev_name": "Malloc2p5" 00:14:07.058 }, 00:14:07.058 { 00:14:07.058 "nbd_device": "/dev/nbd3", 00:14:07.058 "bdev_name": "Malloc2p6" 00:14:07.058 }, 00:14:07.058 { 00:14:07.058 "nbd_device": "/dev/nbd4", 00:14:07.058 "bdev_name": "Malloc2p7" 00:14:07.058 }, 00:14:07.058 { 00:14:07.058 "nbd_device": "/dev/nbd5", 00:14:07.058 "bdev_name": "TestPT" 00:14:07.058 }, 00:14:07.058 { 00:14:07.058 "nbd_device": "/dev/nbd6", 00:14:07.058 "bdev_name": "raid0" 00:14:07.058 }, 00:14:07.058 { 00:14:07.058 "nbd_device": "/dev/nbd7", 00:14:07.058 "bdev_name": "concat0" 00:14:07.058 }, 00:14:07.058 { 00:14:07.058 "nbd_device": "/dev/nbd8", 00:14:07.058 "bdev_name": "raid1" 00:14:07.058 }, 00:14:07.058 { 00:14:07.058 "nbd_device": "/dev/nbd9", 00:14:07.058 "bdev_name": "AIO0" 00:14:07.058 } 00:14:07.058 ]' 00:14:07.058 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:07.058 { 00:14:07.058 "nbd_device": "/dev/nbd0", 00:14:07.058 "bdev_name": "Malloc0" 00:14:07.058 }, 00:14:07.058 { 00:14:07.058 "nbd_device": "/dev/nbd1", 00:14:07.058 
"bdev_name": "Malloc1p0" 00:14:07.058 }, 00:14:07.058 { 00:14:07.058 "nbd_device": "/dev/nbd10", 00:14:07.058 "bdev_name": "Malloc1p1" 00:14:07.058 }, 00:14:07.058 { 00:14:07.058 "nbd_device": "/dev/nbd11", 00:14:07.058 "bdev_name": "Malloc2p0" 00:14:07.058 }, 00:14:07.058 { 00:14:07.058 "nbd_device": "/dev/nbd12", 00:14:07.058 "bdev_name": "Malloc2p1" 00:14:07.058 }, 00:14:07.058 { 00:14:07.058 "nbd_device": "/dev/nbd13", 00:14:07.058 "bdev_name": "Malloc2p2" 00:14:07.058 }, 00:14:07.058 { 00:14:07.058 "nbd_device": "/dev/nbd14", 00:14:07.058 "bdev_name": "Malloc2p3" 00:14:07.058 }, 00:14:07.058 { 00:14:07.058 "nbd_device": "/dev/nbd15", 00:14:07.058 "bdev_name": "Malloc2p4" 00:14:07.058 }, 00:14:07.058 { 00:14:07.058 "nbd_device": "/dev/nbd2", 00:14:07.058 "bdev_name": "Malloc2p5" 00:14:07.058 }, 00:14:07.058 { 00:14:07.058 "nbd_device": "/dev/nbd3", 00:14:07.058 "bdev_name": "Malloc2p6" 00:14:07.058 }, 00:14:07.058 { 00:14:07.058 "nbd_device": "/dev/nbd4", 00:14:07.058 "bdev_name": "Malloc2p7" 00:14:07.058 }, 00:14:07.058 { 00:14:07.058 "nbd_device": "/dev/nbd5", 00:14:07.058 "bdev_name": "TestPT" 00:14:07.058 }, 00:14:07.058 { 00:14:07.058 "nbd_device": "/dev/nbd6", 00:14:07.058 "bdev_name": "raid0" 00:14:07.058 }, 00:14:07.058 { 00:14:07.058 "nbd_device": "/dev/nbd7", 00:14:07.058 "bdev_name": "concat0" 00:14:07.058 }, 00:14:07.058 { 00:14:07.058 "nbd_device": "/dev/nbd8", 00:14:07.058 "bdev_name": "raid1" 00:14:07.058 }, 00:14:07.058 { 00:14:07.058 "nbd_device": "/dev/nbd9", 00:14:07.058 "bdev_name": "AIO0" 00:14:07.058 } 00:14:07.058 ]' 00:14:07.058 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:07.058 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:07.058 /dev/nbd1 00:14:07.058 /dev/nbd10 00:14:07.058 /dev/nbd11 00:14:07.058 /dev/nbd12 00:14:07.058 /dev/nbd13 00:14:07.058 /dev/nbd14 00:14:07.058 /dev/nbd15 00:14:07.058 /dev/nbd2 00:14:07.058 /dev/nbd3 00:14:07.058 /dev/nbd4 00:14:07.058 /dev/nbd5 00:14:07.058 /dev/nbd6 00:14:07.058 /dev/nbd7 00:14:07.058 /dev/nbd8 00:14:07.058 /dev/nbd9' 00:14:07.058 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:07.058 /dev/nbd1 00:14:07.058 /dev/nbd10 00:14:07.058 /dev/nbd11 00:14:07.058 /dev/nbd12 00:14:07.058 /dev/nbd13 00:14:07.058 /dev/nbd14 00:14:07.058 /dev/nbd15 00:14:07.058 /dev/nbd2 00:14:07.058 /dev/nbd3 00:14:07.058 /dev/nbd4 00:14:07.058 /dev/nbd5 00:14:07.058 /dev/nbd6 00:14:07.058 /dev/nbd7 00:14:07.058 /dev/nbd8 00:14:07.058 /dev/nbd9' 00:14:07.058 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:07.058 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16 00:14:07.058 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16 00:14:07.058 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16 00:14:07.058 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:14:07.058 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:14:07.058 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' 
'/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:07.058 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:07.058 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:07.058 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:07.058 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:07.058 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:07.058 256+0 records in 00:14:07.058 256+0 records out 00:14:07.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00706655 s, 148 MB/s 00:14:07.058 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:07.058 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:07.058 256+0 records in 00:14:07.058 256+0 records out 00:14:07.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153206 s, 6.8 MB/s 00:14:07.058 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:07.058 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:07.317 256+0 records in 00:14:07.317 256+0 records out 00:14:07.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158528 s, 6.6 MB/s 00:14:07.317 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:07.317 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:07.576 256+0 records in 00:14:07.576 256+0 records out 00:14:07.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156753 s, 6.7 MB/s 00:14:07.576 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:07.576 00:40:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:07.576 256+0 records in 00:14:07.576 256+0 records out 00:14:07.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162865 s, 6.4 MB/s 00:14:07.576 00:40:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:07.576 00:40:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:07.834 256+0 records in 00:14:07.834 256+0 records out 00:14:07.834 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158142 s, 6.6 MB/s 00:14:07.834 00:40:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:07.834 00:40:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:14:07.834 256+0 records in 00:14:07.834 256+0 records out 00:14:07.834 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156654 s, 6.7 MB/s 00:14:07.834 00:40:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:07.834 00:40:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 
count=256 oflag=direct 00:14:08.092 256+0 records in 00:14:08.092 256+0 records out 00:14:08.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157406 s, 6.7 MB/s 00:14:08.092 00:40:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:08.092 00:40:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:14:08.351 256+0 records in 00:14:08.351 256+0 records out 00:14:08.351 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156076 s, 6.7 MB/s 00:14:08.351 00:40:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:08.351 00:40:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:14:08.351 256+0 records in 00:14:08.351 256+0 records out 00:14:08.351 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158627 s, 6.6 MB/s 00:14:08.351 00:40:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:08.351 00:40:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:14:08.608 256+0 records in 00:14:08.608 256+0 records out 00:14:08.608 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156445 s, 6.7 MB/s 00:14:08.608 00:40:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:08.608 00:40:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:14:08.866 256+0 records in 00:14:08.866 256+0 records out 00:14:08.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15951 s, 6.6 MB/s 00:14:08.866 00:40:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:08.866 00:40:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:14:08.866 256+0 records in 00:14:08.866 256+0 records out 00:14:08.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156445 s, 6.7 MB/s 00:14:08.866 00:40:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:08.866 00:40:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:14:09.124 256+0 records in 00:14:09.124 256+0 records out 00:14:09.124 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157161 s, 6.7 MB/s 00:14:09.124 00:40:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:09.124 00:40:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:14:09.124 256+0 records in 00:14:09.124 256+0 records out 00:14:09.124 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.160242 s, 6.5 MB/s 00:14:09.382 00:40:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:09.382 00:40:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:14:09.382 256+0 records in 00:14:09.382 256+0 records out 00:14:09.382 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162787 s, 6.4 MB/s 00:14:09.382 00:40:31 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:09.382 00:40:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:14:09.640 256+0 records in 00:14:09.641 256+0 records out 00:14:09.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.234748 s, 4.5 MB/s 00:14:09.641 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:14:09.641 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:09.641 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:09.641 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:09.641 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:09.641 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:09.641 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:09.641 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:09.641 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:09.641 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:09.641 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:14:09.641 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:09.641 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:14:09.641 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:09.641 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:14:09.641 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:09.641 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:14:09.641 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:09.641 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:14:09.641 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:09.641 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:14:09.641 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:09.641 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:14:09.641 00:40:32 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:09.641 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:14:09.641 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:09.641 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:14:09.899 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:09.899 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:14:09.899 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:09.899 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:14:09.899 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:09.899 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:14:09.899 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:09.899 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:14:09.899 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:09.899 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:14:09.899 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:09.899 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:14:09.899 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:09.899 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:14:09.899 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:09.899 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:09.899 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:09.899 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:09.899 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:09.899 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:10.157 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:10.157 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:10.157 00:40:32 blockdev_general.bdev_nbd -- 
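Once all sixteen exports are up, nbd_get_disks returns the device-to-bdev mapping as JSON, jq extracts the .nbd_device fields, and grep -c /dev/nbd checks that the count is 16. The data pass above then fills a 1 MiB temp file from /dev/urandom, writes it onto every NBD device with oflag=direct, reads each device back with cmp -b -n 1M, and removes the temp file. A condensed sketch of that write/verify pass, with the file name, sizes, and device order taken from the trace:

# Sketch of the write/verify pass (device list and sizes as in the trace).
tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
devs=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15
      /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9)

dd if=/dev/urandom of="$tmp" bs=4096 count=256               # 256 * 4 KiB = 1 MiB of random data
for dev in "${devs[@]}"; do
    dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct    # write the pattern to each export
done
for dev in "${devs[@]}"; do
    cmp -b -n 1M "$tmp" "$dev"                               # read back and compare the first 1 MiB
done
rm "$tmp"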
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:10.157 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:10.157 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:10.157 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:10.157 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:10.158 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:10.158 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:10.158 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:10.416 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:10.416 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:10.416 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:10.416 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:10.416 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:10.416 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:10.416 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:10.416 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:10.416 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:10.416 00:40:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:14:10.674 00:40:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:10.674 00:40:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:10.674 00:40:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:10.674 00:40:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:10.674 00:40:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:10.674 00:40:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:10.674 00:40:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:10.674 00:40:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:10.674 00:40:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:10.674 00:40:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:10.932 00:40:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:10.932 00:40:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:10.932 00:40:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:10.932 00:40:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:10.932 00:40:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:10.932 00:40:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:10.932 00:40:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:10.932 
00:40:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:10.932 00:40:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:10.932 00:40:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:11.190 00:40:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:14:11.190 00:40:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:11.190 00:40:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:11.190 00:40:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:11.190 00:40:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:11.190 00:40:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:11.190 00:40:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:11.190 00:40:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:11.190 00:40:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:11.190 00:40:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:11.458 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:11.458 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:11.458 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:11.458 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:11.458 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:11.458 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:11.458 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:11.458 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:11.458 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:11.458 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:14:11.716 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:14:11.716 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:14:11.716 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:14:11.716 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:11.716 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:11.716 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:14:11.716 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:11.716 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:11.716 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:11.716 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:14:11.975 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd15 00:14:11.975 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:14:11.975 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:14:11.975 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:11.975 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:11.975 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:14:11.975 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:11.975 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:11.975 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:11.975 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:12.233 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:12.233 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:12.233 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:12.233 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.233 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.233 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:14:12.233 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:12.233 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.233 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.233 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:12.492 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:12.492 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:12.492 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:12.492 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.492 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.492 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:12.492 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:12.492 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.492 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.492 00:40:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:12.750 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:12.750 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:12.750 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:12.750 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.750 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.750 00:40:35 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:12.750 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:12.750 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.750 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.750 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:12.750 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:12.750 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:12.751 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:12.751 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.751 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.751 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:13.009 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:13.009 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:13.009 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:13.009 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:14:13.009 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:14:13.009 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:14:13.009 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:14:13.009 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:13.009 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:13.009 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:14:13.009 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:13.009 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:13.009 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:13.009 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:14:13.268 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:14:13.268 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:14:13.268 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:14:13.268 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:13.268 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:13.268 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:14:13.268 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:13.268 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:13.268 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:13.268 00:40:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:14:13.526 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:14:13.526 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:14:13.526 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:14:13.527 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:13.527 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:13.527 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:14:13.527 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:13.527 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:13.527 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:13.527 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:14:13.785 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:14:13.785 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:14:13.785 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:14:13.785 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:13.785 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:13.785 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:14:13.785 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:13.785 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:13.785 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:13.785 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:13.785 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:14.044 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:14.044 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:14.044 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:14.302 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:14.302 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:14.302 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:14.302 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:14.302 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:14.302 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:14.302 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:14:14.302 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:14.302 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:14:14.302 00:40:36 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 
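The nbd_common.sh traces repeated throughout this teardown all come from two small helpers. A minimal sketch of both, reconstructed from the script line numbers shown in the trace (nbd_common.sh@35-45 and @61-66); the retry sleep in the wait loop is an assumption, since this run never takes the retry path:
waitfornbd_exit() {
    # Poll /proc/partitions until the kernel no longer lists the nbd device.
    local nbd_name=$1
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            sleep 0.1   # assumed retry interval; not exercised in this log
        else
            break
        fi
    done
    return 0
}
nbd_get_count() {
    # Ask the SPDK app over its RPC socket which nbd devices are still exported;
    # an empty JSON array ('[]' in the trace) yields count=0.
    local rpc_server=$1
    local nbd_disks_json nbd_disks_name count
    nbd_disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
    echo "$count"
}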
/dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:14:14.302 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:14.302 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:14.302 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:14:14.302 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:14:14.302 00:40:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:14.561 malloc_lvol_verify 00:14:14.561 00:40:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:14.819 b050912c-b681-4dd9-9b50-cdd0ef894a35 00:14:14.819 00:40:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:15.078 00f006e2-f91b-4357-bea7-f3ba5f84e6ce 00:14:15.078 00:40:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:15.336 /dev/nbd0 00:14:15.336 00:40:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:14:15.336 mke2fs 1.46.5 (30-Dec-2021) 00:14:15.336 00:14:15.336 Filesystem too small for a journal 00:14:15.336 Discarding device blocks: 0/1024 done 00:14:15.336 Creating filesystem with 1024 4k blocks and 1024 inodes 00:14:15.336 00:14:15.336 Allocating group tables: 0/1 done 00:14:15.336 Writing inode tables: 0/1 done 00:14:15.336 Writing superblocks and filesystem accounting information: 0/1 done 00:14:15.336 00:14:15.336 00:40:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:14:15.336 00:40:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:15.336 00:40:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:15.336 00:40:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:15.336 00:40:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:15.336 00:40:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:15.336 00:40:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:15.336 00:40:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:15.594 00:40:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:15.594 00:40:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:15.594 00:40:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:15.594 00:40:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:15.594 00:40:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:15.594 00:40:38 blockdev_general.bdev_nbd 
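The lvol verification step traced above (nbd_common.sh@131-147) boils down to the following sequence; the sizes, bdev names and device node are exactly the ones shown in the trace, and the final status check mirrors the '[ 0 -ne 0 ]' test recorded right after it. A condensed sketch:
nbd_with_lvol_verify_sketch() {
    # Create a 16 MiB malloc bdev, put an lvstore and a 4 MiB lvol on it,
    # export the lvol as /dev/nbd0 and confirm that mkfs.ext4 succeeds on it.
    local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    local sock=/var/tmp/spdk-nbd.sock
    local mkfs_ret
    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0
    mkfs_ret=$?
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
    [ "$mkfs_ret" -ne 0 ] && return 1
    return 0
}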
-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:15.594 00:40:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:15.594 00:40:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:15.594 00:40:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:14:15.594 00:40:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:14:15.594 00:40:38 blockdev_general.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 118157 00:14:15.594 00:40:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 118157 ']' 00:14:15.594 00:40:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 118157 00:14:15.594 00:40:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:14:15.594 00:40:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:15.594 00:40:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 118157 00:14:15.594 00:40:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:15.594 00:40:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:15.594 00:40:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 118157' 00:14:15.594 killing process with pid 118157 00:14:15.594 00:40:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@967 -- # kill 118157 00:14:15.594 00:40:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@972 -- # wait 118157 00:14:18.877 ************************************ 00:14:18.877 END TEST bdev_nbd 00:14:18.877 ************************************ 00:14:18.877 00:40:41 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:14:18.877 00:14:18.877 real 0m27.801s 00:14:18.877 user 0m35.181s 00:14:18.877 sys 0m11.678s 00:14:18.877 00:40:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:18.877 00:40:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:18.877 00:40:41 blockdev_general -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:14:18.877 00:40:41 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = nvme ']' 00:14:18.877 00:40:41 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = gpt ']' 00:14:18.877 00:40:41 blockdev_general -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:14:18.877 00:40:41 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:18.877 00:40:41 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:18.877 00:40:41 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:18.877 ************************************ 00:14:18.877 START TEST bdev_fio 00:14:18.877 ************************************ 00:14:18.877 00:40:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:14:18.877 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:14:18.877 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:14:18.878 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:14:18.878 00:40:41 
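The killprocess trace above (autotest_common.sh@948-972) tears down the SPDK target once the nbd checks pass. A condensed sketch of the path taken in this run; the extra handling guarded by the sudo comparison at @958 is not exercised here and is left out:
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                 # no pid supplied
    kill -0 "$pid" || return 1                # assumed handling; this run's pid was alive
    local process_name
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK app
    fi
    # '[ "$process_name" = sudo ]' guards special-case handling that this run skips.
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}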
blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1278 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1279 -- # local workload=verify 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local bdev_type=AIO 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local env_context= 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local fio_dir=/usr/src/fio 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1289 -- # '[' -z verify ']' 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -n '' ']' 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1297 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # cat 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1311 -- # '[' verify == verify ']' 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1312 -- # cat 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1321 -- # '[' AIO == AIO ']' 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1322 -- # /usr/src/fio/fio --version 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1322 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # echo serialize_overlap=1 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc0]' 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc0 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p0]' 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p0 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p1]' 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p1 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p0]' 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p0 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo 
'[job_Malloc2p1]' 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p1 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p2]' 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p2 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p3]' 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p3 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p4]' 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p4 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p5]' 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p5 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p6]' 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p6 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p7]' 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p7 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_TestPT]' 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=TestPT 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid0]' 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid0 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_concat0]' 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=concat0 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid1]' 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid1 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_AIO0]' 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=AIO0 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@346 -- # local 
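The long run of '[job_*]' and 'filename=' echoes above comes from a single loop at blockdev.sh@340-342: one fio job stanza per bdev under test, appended to the generated bdev.fio. A minimal sketch; bdevs_name is the bdev list built earlier in the script, and the redirection target is an assumption, since the trace only records the echo commands:
for b in "${bdevs_name[@]}"; do
    {
        echo "[job_$b]"
        echo "filename=$b"
    } >> /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
done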
'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:18.878 00:40:41 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:18.878 ************************************ 00:14:18.878 START TEST bdev_fio_rw_verify 00:14:18.878 ************************************ 00:14:18.878 00:40:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:18.878 00:40:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:18.878 00:40:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:14:18.878 00:40:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:18.878 00:40:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local sanitizers 00:14:18.878 00:40:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1338 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:18.878 00:40:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # shift 00:14:18.878 00:40:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local asan_lib= 00:14:18.878 00:40:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:14:18.878 00:40:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:18.878 00:40:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # grep libasan 00:14:18.878 00:40:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:14:18.878 00:40:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:14:18.878 00:40:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:14:18.878 00:40:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # 
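The autotest_common.sh@1337-1350 trace above locates the ASan runtime that the spdk_bdev fio plugin links against (libasan.so.6, found via ldd) so that the sanitizer runtime is loaded ahead of the plugin itself, as the LD_PRELOAD line that follows shows. A condensed sketch of that lookup and the resulting fio invocation, with paths and flags taken verbatim from the trace:
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
for sanitizer in libasan libclang_rt.asan; do
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$asan_lib" ]] && break
done
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
    /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 \
    --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output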
break 00:14:18.878 00:40:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:18.878 00:40:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:19.137 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:19.137 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:19.137 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:19.137 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:19.137 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:19.137 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:19.137 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:19.138 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:19.138 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:19.138 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:19.138 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:19.138 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:19.138 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:19.138 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:19.138 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:19.138 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:19.138 fio-3.35 00:14:19.138 Starting 16 threads 00:14:31.389 00:14:31.389 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=119354: Thu Jul 25 00:40:53 2024 00:14:31.389 read: IOPS=69.3k, BW=271MiB/s (284MB/s)(2707MiB/10002msec) 00:14:31.389 slat (nsec): min=1910, max=65450k, avg=40606.64, stdev=483439.78 00:14:31.389 clat (usec): min=8, max=72187, avg=333.92, stdev=1447.91 00:14:31.389 lat (usec): min=23, max=72203, avg=374.53, stdev=1526.65 00:14:31.389 clat percentiles (usec): 00:14:31.389 | 50.000th=[ 194], 99.000th=[ 832], 99.900th=[16450], 99.990th=[28443], 00:14:31.389 | 99.999th=[67634] 00:14:31.389 write: IOPS=110k, BW=431MiB/s (452MB/s)(4261MiB/9890msec); 0 zone resets 00:14:31.389 slat (usec): min=7, max=67994, avg=72.88, stdev=748.29 00:14:31.389 clat (usec): min=9, max=68325, avg=430.55, stdev=1695.47 00:14:31.389 lat (usec): 
min=36, max=68344, avg=503.43, stdev=1852.95 00:14:31.389 clat percentiles (usec): 00:14:31.389 | 50.000th=[ 243], 99.000th=[ 8586], 99.900th=[22676], 99.990th=[39060], 00:14:31.389 | 99.999th=[53216] 00:14:31.389 bw ( KiB/s): min=269405, max=717672, per=98.97%, avg=436590.08, stdev=7629.45, samples=305 00:14:31.389 iops : min=67351, max=179418, avg=109147.20, stdev=1907.37, samples=305 00:14:31.389 lat (usec) : 10=0.01%, 20=0.01%, 50=0.52%, 100=8.61%, 250=51.06% 00:14:31.389 lat (usec) : 500=35.08%, 750=3.09%, 1000=0.36% 00:14:31.389 lat (msec) : 2=0.10%, 4=0.08%, 10=0.22%, 20=0.76%, 50=0.11% 00:14:31.389 lat (msec) : 100=0.01% 00:14:31.389 cpu : usr=55.45%, sys=2.08%, ctx=255207, majf=3, minf=75645 00:14:31.389 IO depths : 1=11.1%, 2=23.6%, 4=52.1%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:31.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:31.389 complete : 0=0.0%, 4=88.8%, 8=11.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:31.389 issued rwts: total=693092,1090737,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:31.389 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:31.389 00:14:31.389 Run status group 0 (all jobs): 00:14:31.389 READ: bw=271MiB/s (284MB/s), 271MiB/s-271MiB/s (284MB/s-284MB/s), io=2707MiB (2839MB), run=10002-10002msec 00:14:31.390 WRITE: bw=431MiB/s (452MB/s), 431MiB/s-431MiB/s (452MB/s-452MB/s), io=4261MiB (4468MB), run=9890-9890msec 00:14:33.944 ----------------------------------------------------- 00:14:33.944 Suppressions used: 00:14:33.944 count bytes template 00:14:33.945 16 140 /usr/src/fio/parse.c 00:14:33.945 12049 1156704 /usr/src/fio/iolog.c 00:14:33.945 1 904 libcrypto.so 00:14:33.945 ----------------------------------------------------- 00:14:33.945 00:14:33.945 00:14:33.945 real 0m15.163s 00:14:33.945 user 1m35.789s 00:14:33.945 sys 0m4.357s 00:14:33.945 00:40:56 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:33.945 00:40:56 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:14:33.945 ************************************ 00:14:33.945 END TEST bdev_fio_rw_verify 00:14:33.945 ************************************ 00:14:33.945 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:14:33.945 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:33.945 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:14:33.945 00:40:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1278 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:33.945 00:40:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1279 -- # local workload=trim 00:14:33.945 00:40:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local bdev_type= 00:14:33.945 00:40:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local env_context= 00:14:33.945 00:40:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local fio_dir=/usr/src/fio 00:14:33.945 00:40:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:33.945 00:40:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1289 -- # '[' -z trim ']' 00:14:33.945 00:40:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -n '' ']' 00:14:33.945 00:40:56 
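Once the verify pass finishes, fio_config_gen is invoked again for the trim pass that follows (workload=trim, traced at autotest_common.sh@1311-1327 below): where the verify pass cats a canned verify stanza into bdev.fio, the trim pass only appends rw=trimwrite to the regenerated file. A sketch of that branch; the verify stanza contents are not recoverable from this log, so only the trim side is shown:
workload=trim
config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
if [[ "$workload" == trim ]]; then
    # switch the generated jobs to a trim+write mix for the unmap-capable bdevs
    echo "rw=trimwrite" >> "$config_file"
fi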
blockdev_general.bdev_fio -- common/autotest_common.sh@1297 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:33.945 00:40:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # cat 00:14:33.945 00:40:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1311 -- # '[' trim == verify ']' 00:14:33.945 00:40:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1326 -- # '[' trim == trim ']' 00:14:33.945 00:40:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1327 -- # echo rw=trimwrite 00:14:33.945 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:14:33.946 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "99111a81-7ddf-4a3c-b43c-14cc76d2a669"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "99111a81-7ddf-4a3c-b43c-14cc76d2a669",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "8a914ad3-f0c5-57b6-92c9-b59e212e34ed"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "8a914ad3-f0c5-57b6-92c9-b59e212e34ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "d3d85ed9-95a5-585c-a9c9-74446b461d2d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "d3d85ed9-95a5-585c-a9c9-74446b461d2d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "2c9af94f-eb35-5243-be03-d03df185e573"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2c9af94f-eb35-5243-be03-d03df185e573",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "50641a2f-c3ed-5896-869c-1c582344f85b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "50641a2f-c3ed-5896-869c-1c582344f85b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "8eedd088-cf59-50a5-a5c9-456c62d94edd"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8eedd088-cf59-50a5-a5c9-456c62d94edd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "c4f40913-4490-574d-97c0-6ddbb4682087"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c4f40913-4490-574d-97c0-6ddbb4682087",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": 
false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "7a991fb7-d70e-501d-989e-9ea68f953ed6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7a991fb7-d70e-501d-989e-9ea68f953ed6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "339cc1ec-0b15-51fd-9ab1-3736793224cc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "339cc1ec-0b15-51fd-9ab1-3736793224cc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "a298d54d-c0c6-556e-a356-c56904c82bca"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a298d54d-c0c6-556e-a356-c56904c82bca",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' 
' "28ffd764-71b6-5844-8330-10b328e7ea73"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "28ffd764-71b6-5844-8330-10b328e7ea73",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "d6b7c723-30a8-51b5-997b-6f2e41a80b15"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d6b7c723-30a8-51b5-997b-6f2e41a80b15",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "656ed422-5158-4d11-9df2-fedbefd37638"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "656ed422-5158-4d11-9df2-fedbefd37638",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "656ed422-5158-4d11-9df2-fedbefd37638",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' 
"base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "eb959b9d-7d5a-4524-866e-376fceea1221",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "8a3ebfc3-5e6a-45ed-909b-85cca8a41453",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "0c99bb12-beb1-4319-8b92-8e190fe77031"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "0c99bb12-beb1-4319-8b92-8e190fe77031",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "0c99bb12-beb1-4319-8b92-8e190fe77031",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "7dabdf71-fec0-49a3-a9f0-56996efcbd05",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "6d8feb96-4fd9-4b9c-a15a-82352374e35e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "6791834a-bbcc-48b1-8047-42fb6ad87a55"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "6791834a-bbcc-48b1-8047-42fb6ad87a55",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "6791834a-bbcc-48b1-8047-42fb6ad87a55",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' 
"num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "972215ae-6c02-45dd-89f6-befabbef6398",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "3780acce-abd8-4db9-9595-58308e0ba922",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "95067689-d695-47e7-b140-512dcda8de2e"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "95067689-d695-47e7-b140-512dcda8de2e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:14:34.207 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n Malloc0 00:14:34.207 Malloc1p0 00:14:34.207 Malloc1p1 00:14:34.207 Malloc2p0 00:14:34.207 Malloc2p1 00:14:34.207 Malloc2p2 00:14:34.207 Malloc2p3 00:14:34.207 Malloc2p4 00:14:34.207 Malloc2p5 00:14:34.207 Malloc2p6 00:14:34.207 Malloc2p7 00:14:34.207 TestPT 00:14:34.207 raid0 00:14:34.207 concat0 ]] 00:14:34.207 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "99111a81-7ddf-4a3c-b43c-14cc76d2a669"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "99111a81-7ddf-4a3c-b43c-14cc76d2a669",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "8a914ad3-f0c5-57b6-92c9-b59e212e34ed"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "8a914ad3-f0c5-57b6-92c9-b59e212e34ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' 
"claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "d3d85ed9-95a5-585c-a9c9-74446b461d2d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "d3d85ed9-95a5-585c-a9c9-74446b461d2d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "2c9af94f-eb35-5243-be03-d03df185e573"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2c9af94f-eb35-5243-be03-d03df185e573",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "50641a2f-c3ed-5896-869c-1c582344f85b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "50641a2f-c3ed-5896-869c-1c582344f85b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": 
"Malloc2p2",' ' "aliases": [' ' "8eedd088-cf59-50a5-a5c9-456c62d94edd"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8eedd088-cf59-50a5-a5c9-456c62d94edd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "c4f40913-4490-574d-97c0-6ddbb4682087"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c4f40913-4490-574d-97c0-6ddbb4682087",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "7a991fb7-d70e-501d-989e-9ea68f953ed6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7a991fb7-d70e-501d-989e-9ea68f953ed6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "339cc1ec-0b15-51fd-9ab1-3736793224cc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "339cc1ec-0b15-51fd-9ab1-3736793224cc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' 
"get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "a298d54d-c0c6-556e-a356-c56904c82bca"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a298d54d-c0c6-556e-a356-c56904c82bca",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "28ffd764-71b6-5844-8330-10b328e7ea73"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "28ffd764-71b6-5844-8330-10b328e7ea73",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "d6b7c723-30a8-51b5-997b-6f2e41a80b15"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d6b7c723-30a8-51b5-997b-6f2e41a80b15",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "656ed422-5158-4d11-9df2-fedbefd37638"' ' ],' ' "product_name": "Raid 
Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "656ed422-5158-4d11-9df2-fedbefd37638",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "656ed422-5158-4d11-9df2-fedbefd37638",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "eb959b9d-7d5a-4524-866e-376fceea1221",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "8a3ebfc3-5e6a-45ed-909b-85cca8a41453",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "0c99bb12-beb1-4319-8b92-8e190fe77031"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "0c99bb12-beb1-4319-8b92-8e190fe77031",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "0c99bb12-beb1-4319-8b92-8e190fe77031",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "7dabdf71-fec0-49a3-a9f0-56996efcbd05",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "6d8feb96-4fd9-4b9c-a15a-82352374e35e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' 
"aliases": [' ' "6791834a-bbcc-48b1-8047-42fb6ad87a55"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "6791834a-bbcc-48b1-8047-42fb6ad87a55",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "6791834a-bbcc-48b1-8047-42fb6ad87a55",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "972215ae-6c02-45dd-89f6-befabbef6398",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "3780acce-abd8-4db9-9595-58308e0ba922",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "95067689-d695-47e7-b140-512dcda8de2e"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "95067689-d695-47e7-b140-512dcda8de2e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc0]' 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc0 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p0]' 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- 
bdev/blockdev.sh@357 -- # echo filename=Malloc1p0 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p1]' 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p1 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p0]' 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p0 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p1]' 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p1 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p2]' 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p2 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p3]' 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p3 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p4]' 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p4 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p5]' 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p5 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p6]' 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p6 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p7]' 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p7 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' 
"${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_TestPT]' 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=TestPT 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_raid0]' 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=raid0 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_concat0]' 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=concat0 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:34.209 00:40:56 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:34.209 ************************************ 00:14:34.209 START TEST bdev_fio_trim 00:14:34.209 ************************************ 00:14:34.209 00:40:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:34.209 00:40:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1354 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:34.209 00:40:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:14:34.209 00:40:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:34.209 00:40:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local sanitizers 00:14:34.209 00:40:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1338 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:34.209 00:40:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # shift 00:14:34.209 00:40:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # local asan_lib= 00:14:34.209 00:40:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:14:34.209 00:40:56 
blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:34.209 00:40:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # grep libasan 00:14:34.209 00:40:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:14:34.209 00:40:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:14:34.209 00:40:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:14:34.209 00:40:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # break 00:14:34.209 00:40:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1350 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:34.209 00:40:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:34.469 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:34.469 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:34.469 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:34.469 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:34.469 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:34.469 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:34.469 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:34.469 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:34.469 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:34.469 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:34.469 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:34.469 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:34.469 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:34.469 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:34.469 fio-3.35 00:14:34.469 Starting 14 threads 00:14:46.722 00:14:46.722 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=119586: Thu Jul 25 00:41:08 2024 00:14:46.722 write: IOPS=106k, BW=413MiB/s (433MB/s)(4132MiB/10001msec); 0 zone resets 00:14:46.722 slat (usec): min=2, max=28047, avg=47.38, stdev=458.63 00:14:46.722 clat (usec): min=14, max=28401, 
avg=328.47, stdev=1203.09 00:14:46.722 lat (usec): min=37, max=28436, avg=375.85, stdev=1286.95 00:14:46.722 clat percentiles (usec): 00:14:46.722 | 50.000th=[ 221], 99.000th=[ 498], 99.900th=[16319], 99.990th=[20317], 00:14:46.722 | 99.999th=[28181] 00:14:46.722 bw ( KiB/s): min=288760, max=640080, per=100.00%, avg=423105.26, stdev=7914.10, samples=266 00:14:46.722 iops : min=72190, max=160022, avg=105776.26, stdev=1978.53, samples=266 00:14:46.722 trim: IOPS=106k, BW=413MiB/s (433MB/s)(4132MiB/10001msec); 0 zone resets 00:14:46.722 slat (usec): min=4, max=28050, avg=32.76, stdev=373.08 00:14:46.722 clat (usec): min=3, max=28437, avg=371.43, stdev=1278.64 00:14:46.722 lat (usec): min=16, max=28463, avg=404.19, stdev=1331.57 00:14:46.722 clat percentiles (usec): 00:14:46.722 | 50.000th=[ 253], 99.000th=[ 603], 99.900th=[16319], 99.990th=[20579], 00:14:46.722 | 99.999th=[28181] 00:14:46.722 bw ( KiB/s): min=288760, max=640032, per=100.00%, avg=423105.68, stdev=7914.07, samples=266 00:14:46.722 iops : min=72190, max=160008, avg=105776.26, stdev=1978.51, samples=266 00:14:46.722 lat (usec) : 4=0.01%, 10=0.01%, 20=0.01%, 50=0.47%, 100=4.11% 00:14:46.722 lat (usec) : 250=50.57%, 500=43.52%, 750=0.49%, 1000=0.06% 00:14:46.722 lat (msec) : 2=0.02%, 4=0.01%, 10=0.05%, 20=0.68%, 50=0.02% 00:14:46.722 cpu : usr=68.55%, sys=0.45%, ctx=142884, majf=0, minf=772 00:14:46.722 IO depths : 1=12.4%, 2=24.9%, 4=50.1%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:46.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.722 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.722 issued rwts: total=0,1057792,1057797,0 short=0,0,0,0 dropped=0,0,0,0 00:14:46.722 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:46.722 00:14:46.722 Run status group 0 (all jobs): 00:14:46.722 WRITE: bw=413MiB/s (433MB/s), 413MiB/s-413MiB/s (433MB/s-433MB/s), io=4132MiB (4333MB), run=10001-10001msec 00:14:46.722 TRIM: bw=413MiB/s (433MB/s), 413MiB/s-413MiB/s (433MB/s-433MB/s), io=4132MiB (4333MB), run=10001-10001msec 00:14:49.258 ----------------------------------------------------- 00:14:49.258 Suppressions used: 00:14:49.258 count bytes template 00:14:49.258 14 129 /usr/src/fio/parse.c 00:14:49.258 1 904 libcrypto.so 00:14:49.258 ----------------------------------------------------- 00:14:49.258 00:14:49.258 00:14:49.258 real 0m14.837s 00:14:49.258 user 1m42.007s 00:14:49.258 sys 0m1.594s 00:14:49.258 00:41:11 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:49.258 ************************************ 00:14:49.258 END TEST bdev_fio_trim 00:14:49.258 ************************************ 00:14:49.258 00:41:11 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:14:49.258 00:41:11 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:14:49.258 00:41:11 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:49.258 /home/vagrant/spdk_repo/spdk 00:14:49.258 00:41:11 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:14:49.258 00:41:11 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:14:49.258 00:14:49.258 real 0m30.437s 00:14:49.258 user 3m17.990s 00:14:49.258 sys 0m6.165s 00:14:49.258 00:41:11 blockdev_general.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:49.258 ************************************ 00:14:49.258 END TEST bdev_fio 00:14:49.258 
************************************ 00:14:49.258 00:41:11 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:49.258 00:41:11 blockdev_general -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:49.258 00:41:11 blockdev_general -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:49.258 00:41:11 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:14:49.258 00:41:11 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:49.258 00:41:11 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:49.258 ************************************ 00:14:49.258 START TEST bdev_verify 00:14:49.258 ************************************ 00:14:49.258 00:41:11 blockdev_general.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:49.258 [2024-07-25 00:41:11.779310] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:14:49.258 [2024-07-25 00:41:11.779588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119779 ] 00:14:49.517 [2024-07-25 00:41:11.977589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:49.775 [2024-07-25 00:41:12.294911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.775 [2024-07-25 00:41:12.294913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.399 [2024-07-25 00:41:12.738917] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:50.399 [2024-07-25 00:41:12.739300] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:50.399 [2024-07-25 00:41:12.746847] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:50.399 [2024-07-25 00:41:12.747051] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:50.399 [2024-07-25 00:41:12.754862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:50.399 [2024-07-25 00:41:12.755134] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:14:50.400 [2024-07-25 00:41:12.755253] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:14:50.400 [2024-07-25 00:41:12.987590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:50.400 [2024-07-25 00:41:12.987715] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:50.400 [2024-07-25 00:41:12.987777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:50.400 [2024-07-25 00:41:12.987809] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:50.400 [2024-07-25 00:41:12.991015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:50.400 [2024-07-25 00:41:12.991084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:14:50.981 Running I/O for 5 seconds... 
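The per-bdev job stanzas echoed by bdev/blockdev.sh@355-357 in the bdev_fio trim stage above come from a small loop that filters the bdev list down to the bdevs advertising unmap support and emits one fio job per match. A minimal sketch of that loop, reconstructed from the trace (the fio_config name and the final redirection are assumptions; in the real script the stanzas end up in the generated bdev.fio):

    # keep only trim-capable bdevs and give each one its own fio job section
    for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name'); do
        echo "[job_${b}]"       # job section header, e.g. [job_Malloc0]
        echo "filename=${b}"    # the spdk_bdev ioengine resolves this to the bdev name
    done >> "$fio_config"

This is also why raid1 and AIO0 never appear in the trim job list: their JSON above reports "unmap": false, so the jq filter drops them.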
00:14:56.254 00:14:56.254 Latency(us) 00:14:56.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.254 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.254 Verification LBA range: start 0x0 length 0x1000 00:14:56.254 Malloc0 : 5.21 1006.34 3.93 0.00 0.00 126886.12 518.83 226692.14 00:14:56.254 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.254 Verification LBA range: start 0x1000 length 0x1000 00:14:56.254 Malloc0 : 5.20 1182.54 4.62 0.00 0.00 107983.45 667.06 365503.63 00:14:56.254 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.255 Verification LBA range: start 0x0 length 0x800 00:14:56.255 Malloc1p0 : 5.22 515.19 2.01 0.00 0.00 246922.53 3713.71 213709.78 00:14:56.255 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.255 Verification LBA range: start 0x800 length 0x800 00:14:56.255 Malloc1p0 : 5.20 615.63 2.40 0.00 0.00 206715.58 3183.18 183750.46 00:14:56.255 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.255 Verification LBA range: start 0x0 length 0x800 00:14:56.255 Malloc1p1 : 5.22 514.94 2.01 0.00 0.00 246375.90 2824.29 213709.78 00:14:56.255 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.255 Verification LBA range: start 0x800 length 0x800 00:14:56.255 Malloc1p1 : 5.20 615.36 2.40 0.00 0.00 206335.17 2527.82 179755.89 00:14:56.255 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.255 Verification LBA range: start 0x0 length 0x200 00:14:56.255 Malloc2p0 : 5.22 514.57 2.01 0.00 0.00 245926.08 3432.84 212711.13 00:14:56.255 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.255 Verification LBA range: start 0x200 length 0x200 00:14:56.255 Malloc2p0 : 5.20 615.10 2.40 0.00 0.00 206005.51 2465.40 178757.24 00:14:56.255 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.255 Verification LBA range: start 0x0 length 0x200 00:14:56.255 Malloc2p1 : 5.23 513.88 2.01 0.00 0.00 245605.85 5991.86 207717.91 00:14:56.255 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.255 Verification LBA range: start 0x200 length 0x200 00:14:56.255 Malloc2p1 : 5.20 614.81 2.40 0.00 0.00 205679.96 2886.70 178757.24 00:14:56.255 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.255 Verification LBA range: start 0x0 length 0x200 00:14:56.255 Malloc2p2 : 5.24 513.15 2.00 0.00 0.00 245027.57 3042.74 203723.34 00:14:56.255 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.255 Verification LBA range: start 0x200 length 0x200 00:14:56.255 Malloc2p2 : 5.21 614.55 2.40 0.00 0.00 205314.95 4369.07 174762.67 00:14:56.255 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.255 Verification LBA range: start 0x0 length 0x200 00:14:56.255 Malloc2p3 : 5.24 512.56 2.00 0.00 0.00 244667.22 4275.44 198730.12 00:14:56.255 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.255 Verification LBA range: start 0x200 length 0x200 00:14:56.255 Malloc2p3 : 5.21 614.28 2.40 0.00 0.00 204896.21 2512.21 173764.02 00:14:56.255 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.255 Verification LBA range: start 0x0 length 0x200 00:14:56.255 Malloc2p4 : 5.25 511.98 2.00 0.00 0.00 244279.44 
5024.43 194735.54 00:14:56.255 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.255 Verification LBA range: start 0x200 length 0x200 00:14:56.255 Malloc2p4 : 5.21 614.03 2.40 0.00 0.00 204510.18 2559.02 172765.38 00:14:56.255 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.255 Verification LBA range: start 0x0 length 0x200 00:14:56.255 Malloc2p5 : 5.25 511.82 2.00 0.00 0.00 243529.84 2371.78 194735.54 00:14:56.255 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.255 Verification LBA range: start 0x200 length 0x200 00:14:56.255 Malloc2p5 : 5.21 613.71 2.40 0.00 0.00 204174.24 3495.25 171766.74 00:14:56.255 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.255 Verification LBA range: start 0x0 length 0x200 00:14:56.255 Malloc2p6 : 5.25 511.65 2.00 0.00 0.00 243091.86 2793.08 193736.90 00:14:56.255 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.255 Verification LBA range: start 0x200 length 0x200 00:14:56.255 Malloc2p6 : 5.22 613.48 2.40 0.00 0.00 203785.00 4025.78 167772.16 00:14:56.255 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.255 Verification LBA range: start 0x0 length 0x200 00:14:56.255 Malloc2p7 : 5.26 511.49 2.00 0.00 0.00 242641.73 3136.37 186746.39 00:14:56.255 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.255 Verification LBA range: start 0x200 length 0x200 00:14:56.255 Malloc2p7 : 5.22 613.24 2.40 0.00 0.00 203361.39 2543.42 165774.87 00:14:56.255 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.255 Verification LBA range: start 0x0 length 0x1000 00:14:56.255 TestPT : 5.26 511.33 2.00 0.00 0.00 242150.86 3526.46 179755.89 00:14:56.255 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.255 Verification LBA range: start 0x1000 length 0x1000 00:14:56.255 TestPT : 5.24 610.21 2.38 0.00 0.00 203775.51 8800.55 165774.87 00:14:56.255 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.255 Verification LBA range: start 0x0 length 0x2000 00:14:56.255 raid0 : 5.26 511.17 2.00 0.00 0.00 241646.14 1880.26 177758.60 00:14:56.255 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.255 Verification LBA range: start 0x2000 length 0x2000 00:14:56.255 raid0 : 5.22 612.56 2.39 0.00 0.00 202738.13 2699.46 160781.65 00:14:56.255 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.255 Verification LBA range: start 0x0 length 0x2000 00:14:56.255 concat0 : 5.26 511.01 2.00 0.00 0.00 241372.31 1919.27 181753.17 00:14:56.255 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.255 Verification LBA range: start 0x2000 length 0x2000 00:14:56.255 concat0 : 5.23 611.77 2.39 0.00 0.00 202564.64 2543.42 160781.65 00:14:56.255 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.255 Verification LBA range: start 0x0 length 0x1000 00:14:56.255 raid1 : 5.26 510.84 2.00 0.00 0.00 241109.73 2402.99 184749.10 00:14:56.255 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.255 Verification LBA range: start 0x1000 length 0x1000 00:14:56.255 raid1 : 5.24 610.94 2.39 0.00 0.00 202380.45 3292.40 160781.65 00:14:56.255 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.255 Verification LBA range: 
start 0x0 length 0x4e2 00:14:56.255 AIO0 : 5.26 510.67 1.99 0.00 0.00 240392.27 690.47 203723.34 00:14:56.255 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.255 Verification LBA range: start 0x4e2 length 0x4e2 00:14:56.255 AIO0 : 5.25 609.61 2.38 0.00 0.00 201790.07 6647.22 173764.02 00:14:56.255 =================================================================================================================== 00:14:56.255 Total : 19074.42 74.51 0.00 0.00 210187.08 518.83 365503.63 00:14:59.545 00:14:59.545 real 0m9.821s 00:14:59.545 user 0m16.979s 00:14:59.545 sys 0m0.765s 00:14:59.545 ************************************ 00:14:59.545 END TEST bdev_verify 00:14:59.545 ************************************ 00:14:59.545 00:41:21 blockdev_general.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:59.545 00:41:21 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:14:59.545 00:41:21 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:59.545 00:41:21 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:14:59.545 00:41:21 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:59.545 00:41:21 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:59.545 ************************************ 00:14:59.545 START TEST bdev_verify_big_io 00:14:59.545 ************************************ 00:14:59.545 00:41:21 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:59.545 [2024-07-25 00:41:21.643889] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
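Condensed from the run_test line just above, the big-IO verification pass is a single bdevperf invocation; the flag annotations below reflect general bdevperf usage rather than anything stated in the log, so treat them as best-effort:

    # -q = queue depth per job, -o = IO size in bytes (64 KiB here, hence "big IO"),
    # -w = workload, -t = run time in seconds, -m = SPDK core mask; -C is passed as in the log
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3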
00:14:59.545 [2024-07-25 00:41:21.644072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119919 ] 00:14:59.545 [2024-07-25 00:41:21.816521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:59.545 [2024-07-25 00:41:22.081556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.545 [2024-07-25 00:41:22.081554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.112 [2024-07-25 00:41:22.530879] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:15:00.112 [2024-07-25 00:41:22.530987] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:15:00.112 [2024-07-25 00:41:22.538815] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:15:00.112 [2024-07-25 00:41:22.538885] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:15:00.112 [2024-07-25 00:41:22.546872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:15:00.112 [2024-07-25 00:41:22.547011] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:15:00.112 [2024-07-25 00:41:22.547045] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:15:00.370 [2024-07-25 00:41:22.776723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:15:00.370 [2024-07-25 00:41:22.776853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.370 [2024-07-25 00:41:22.776902] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:00.370 [2024-07-25 00:41:22.776931] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.370 [2024-07-25 00:41:22.780099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.370 [2024-07-25 00:41:22.780160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:15:00.629 [2024-07-25 00:41:23.195576] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:15:00.629 [2024-07-25 00:41:23.199781] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:15:00.629 [2024-07-25 00:41:23.204731] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:15:00.629 [2024-07-25 00:41:23.209652] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:15:00.629 [2024-07-25 00:41:23.213362] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:15:00.629 [2024-07-25 00:41:23.218104] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:15:00.629 [2024-07-25 00:41:23.222314] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:15:00.629 [2024-07-25 00:41:23.227045] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:15:00.629 [2024-07-25 00:41:23.231204] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:15:00.629 [2024-07-25 00:41:23.236114] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:15:00.629 [2024-07-25 00:41:23.240283] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:15:00.629 [2024-07-25 00:41:23.244813] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:15:00.629 [2024-07-25 00:41:23.248849] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:15:00.629 [2024-07-25 00:41:23.253946] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:15:00.629 [2024-07-25 00:41:23.258820] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:15:00.629 [2024-07-25 00:41:23.262897] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:15:00.889 [2024-07-25 00:41:23.367608] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:15:00.889 [2024-07-25 00:41:23.376049] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:15:00.889 Running I/O for 5 seconds... 00:15:09.026 00:15:09.026 Latency(us) 00:15:09.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.026 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x0 length 0x100 00:15:09.026 Malloc0 : 5.97 150.11 9.38 0.00 0.00 835350.42 752.88 2364788.54 00:15:09.026 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x100 length 0x100 00:15:09.026 Malloc0 : 6.00 149.38 9.34 0.00 0.00 837462.91 659.26 2508593.25 00:15:09.026 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x0 length 0x80 00:15:09.026 Malloc1p0 : 6.32 86.76 5.42 0.00 0.00 1351226.68 2683.86 2684354.56 00:15:09.026 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x80 length 0x80 00:15:09.026 Malloc1p0 : 7.00 34.29 2.14 0.00 0.00 3300739.61 1232.70 5560448.73 00:15:09.026 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x0 length 0x80 00:15:09.026 Malloc1p1 : 6.64 36.15 2.26 0.00 0.00 3124434.60 1458.96 5336752.52 00:15:09.026 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x80 length 0x80 00:15:09.026 Malloc1p1 : 7.00 34.28 2.14 0.00 0.00 3181731.50 1513.57 5336752.52 00:15:09.026 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x0 length 0x20 00:15:09.026 Malloc2p0 : 6.23 23.10 1.44 0.00 0.00 1206065.60 1068.86 1973320.17 00:15:09.026 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x20 length 0x20 00:15:09.026 Malloc2p0 : 6.24 23.06 1.44 0.00 0.00 1199074.68 604.65 2037233.37 00:15:09.026 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x0 length 0x20 00:15:09.026 Malloc2p1 : 6.32 25.32 1.58 0.00 0.00 1110371.78 756.78 1949352.72 00:15:09.026 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x20 length 0x20 00:15:09.026 Malloc2p1 : 6.33 25.27 1.58 0.00 0.00 1099845.60 577.34 2005276.77 00:15:09.026 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x0 length 0x20 00:15:09.026 Malloc2p2 : 6.32 25.32 1.58 0.00 0.00 1100604.14 725.58 1917396.11 00:15:09.026 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x20 length 0x20 00:15:09.026 Malloc2p2 : 6.33 25.27 1.58 0.00 0.00 1088810.39 639.76 1973320.17 00:15:09.026 Job: Malloc2p3 (Core Mask 0x1, workload: 
verify, depth: 32, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x0 length 0x20 00:15:09.026 Malloc2p3 : 6.32 25.31 1.58 0.00 0.00 1090533.63 748.98 1893428.66 00:15:09.026 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x20 length 0x20 00:15:09.026 Malloc2p3 : 6.33 25.27 1.58 0.00 0.00 1077314.16 604.65 1941363.57 00:15:09.026 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x0 length 0x20 00:15:09.026 Malloc2p4 : 6.32 25.31 1.58 0.00 0.00 1080567.60 729.48 1869461.21 00:15:09.026 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x20 length 0x20 00:15:09.026 Malloc2p4 : 6.33 25.26 1.58 0.00 0.00 1065959.58 592.94 1909406.96 00:15:09.026 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x0 length 0x20 00:15:09.026 Malloc2p5 : 6.32 25.30 1.58 0.00 0.00 1071333.73 725.58 1837504.61 00:15:09.026 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x20 length 0x20 00:15:09.026 Malloc2p5 : 6.34 25.26 1.58 0.00 0.00 1055456.02 592.94 1877450.36 00:15:09.026 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x0 length 0x20 00:15:09.026 Malloc2p6 : 6.33 25.29 1.58 0.00 0.00 1061797.95 717.78 1813537.16 00:15:09.026 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x20 length 0x20 00:15:09.026 Malloc2p6 : 6.34 25.25 1.58 0.00 0.00 1044080.38 573.44 1837504.61 00:15:09.026 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x0 length 0x20 00:15:09.026 Malloc2p7 : 6.33 25.29 1.58 0.00 0.00 1051920.84 811.40 1789569.71 00:15:09.026 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x20 length 0x20 00:15:09.026 Malloc2p7 : 6.34 25.25 1.58 0.00 0.00 1032710.69 592.94 1797558.86 00:15:09.026 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x0 length 0x100 00:15:09.026 TestPT : 6.76 35.82 2.24 0.00 0.00 2829046.49 106355.57 3978596.94 00:15:09.026 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x100 length 0x100 00:15:09.026 TestPT : 7.03 34.16 2.14 0.00 0.00 2882450.12 63913.20 3866748.83 00:15:09.026 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x0 length 0x200 00:15:09.026 raid0 : 6.95 41.45 2.59 0.00 0.00 2355719.73 1544.78 4729577.08 00:15:09.026 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x200 length 0x200 00:15:09.026 raid0 : 7.03 43.26 2.70 0.00 0.00 2263186.84 1341.93 4601750.67 00:15:09.026 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x0 length 0x200 00:15:09.026 concat0 : 6.76 47.35 2.96 0.00 0.00 2030051.12 1482.36 4537837.47 00:15:09.026 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x200 length 0x200 00:15:09.026 concat0 : 7.01 50.24 3.14 0.00 
0.00 1904737.70 3120.76 4378054.46 00:15:09.026 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x0 length 0x100 00:15:09.026 raid1 : 6.95 62.15 3.88 0.00 0.00 1526404.20 4244.24 4346097.86 00:15:09.026 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x100 length 0x100 00:15:09.026 raid1 : 7.03 82.37 5.15 0.00 0.00 1119585.76 1755.43 4218271.45 00:15:09.026 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x0 length 0x4e 00:15:09.026 AIO0 : 6.96 59.24 3.70 0.00 0.00 948720.09 1521.37 2748267.76 00:15:09.026 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:15:09.026 Verification LBA range: start 0x4e length 0x4e 00:15:09.026 AIO0 : 7.06 73.61 4.60 0.00 0.00 741115.67 940.13 2700332.86 00:15:09.026 =================================================================================================================== 00:15:09.026 Total : 1420.75 88.80 0.00 0.00 1449086.62 573.44 5560448.73 00:15:10.930 00:15:10.930 real 0m11.955s 00:15:10.930 user 0m21.917s 00:15:10.930 sys 0m0.669s 00:15:10.930 00:41:33 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:10.930 ************************************ 00:15:10.930 END TEST bdev_verify_big_io 00:15:10.930 ************************************ 00:15:10.931 00:41:33 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.931 00:41:33 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:10.931 00:41:33 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:15:10.931 00:41:33 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:10.931 00:41:33 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:15:11.189 ************************************ 00:15:11.189 START TEST bdev_write_zeroes 00:15:11.189 ************************************ 00:15:11.189 00:41:33 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:11.189 [2024-07-25 00:41:33.659185] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:15:11.189 [2024-07-25 00:41:33.659340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120080 ] 00:15:11.189 [2024-07-25 00:41:33.816417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.447 [2024-07-25 00:41:34.016960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.037 [2024-07-25 00:41:34.411984] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:15:12.037 [2024-07-25 00:41:34.412068] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:15:12.037 [2024-07-25 00:41:34.419936] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:15:12.037 [2024-07-25 00:41:34.419983] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:15:12.037 [2024-07-25 00:41:34.427954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:15:12.037 [2024-07-25 00:41:34.428022] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:15:12.037 [2024-07-25 00:41:34.428064] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:15:12.037 [2024-07-25 00:41:34.625951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:15:12.037 [2024-07-25 00:41:34.626048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.037 [2024-07-25 00:41:34.626079] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:12.037 [2024-07-25 00:41:34.626106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.037 [2024-07-25 00:41:34.628425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.037 [2024-07-25 00:41:34.628481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:15:12.603 Running I/O for 1 seconds... 
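For reference, the bdev_write_zeroes pass traced above reduces to a single bdevperf run against the generated JSON config; a minimal local reproduction, assuming the same spdk_repo checkout layout as this CI workspace, would look roughly like:

    # from the SPDK repo root: drive one 1-second write_zeroes pass over every
    # bdev declared in test/bdev/bdev.json (queue depth 128, 4 KiB I/O size)
    ./build/examples/bdevperf \
        --json ./test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1

The per-bdev Latency(us) table that follows is printed by bdevperf itself once the run completes.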
00:15:13.541 00:15:13.541 Latency(us) 00:15:13.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.541 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:13.541 Malloc0 : 1.03 6239.22 24.37 0.00 0.00 20505.47 522.73 33704.23 00:15:13.541 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:13.541 Malloc1p0 : 1.03 6232.12 24.34 0.00 0.00 20502.28 698.27 32955.25 00:15:13.541 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:13.541 Malloc1p1 : 1.03 6225.26 24.32 0.00 0.00 20482.08 709.97 32206.26 00:15:13.541 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:13.541 Malloc2p0 : 1.03 6218.58 24.29 0.00 0.00 20472.17 690.47 31582.11 00:15:13.541 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:13.541 Malloc2p1 : 1.03 6211.98 24.27 0.00 0.00 20457.39 698.27 30957.96 00:15:13.541 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:13.541 Malloc2p2 : 1.03 6205.27 24.24 0.00 0.00 20447.42 690.47 30333.81 00:15:13.541 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:13.541 Malloc2p3 : 1.03 6198.65 24.21 0.00 0.00 20434.69 702.17 29584.82 00:15:13.541 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:13.541 Malloc2p4 : 1.03 6192.09 24.19 0.00 0.00 20426.97 694.37 28960.67 00:15:13.541 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:13.541 Malloc2p5 : 1.05 6234.17 24.35 0.00 0.00 20254.13 690.47 28336.52 00:15:13.541 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:13.541 Malloc2p6 : 1.05 6227.59 24.33 0.00 0.00 20239.84 713.87 27587.54 00:15:13.541 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:13.541 Malloc2p7 : 1.05 6221.09 24.30 0.00 0.00 20227.69 694.37 26963.38 00:15:13.541 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:13.541 TestPT : 1.05 6214.53 24.28 0.00 0.00 20211.80 737.28 26214.40 00:15:13.541 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:13.541 raid0 : 1.05 6206.83 24.25 0.00 0.00 20194.22 1240.50 24966.10 00:15:13.541 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:13.541 concat0 : 1.05 6199.49 24.22 0.00 0.00 20155.30 1232.70 23717.79 00:15:13.541 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:13.541 raid1 : 1.05 6190.08 24.18 0.00 0.00 20116.63 1997.29 21720.50 00:15:13.541 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:13.541 AIO0 : 1.06 6156.93 24.05 0.00 0.00 20152.47 1302.92 21470.84 00:15:13.541 =================================================================================================================== 00:15:13.541 Total : 99373.89 388.18 0.00 0.00 20328.74 522.73 33704.23 00:15:16.078 00:15:16.078 real 0m4.820s 00:15:16.078 user 0m4.285s 00:15:16.078 sys 0m0.345s 00:15:16.078 00:41:38 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:16.078 00:41:38 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:15:16.078 ************************************ 00:15:16.078 END TEST bdev_write_zeroes 00:15:16.078 ************************************ 00:15:16.078 00:41:38 blockdev_general 
-- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:16.078 00:41:38 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:15:16.078 00:41:38 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:16.078 00:41:38 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:15:16.078 ************************************ 00:15:16.078 START TEST bdev_json_nonenclosed 00:15:16.078 ************************************ 00:15:16.078 00:41:38 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:16.078 [2024-07-25 00:41:38.582799] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:15:16.078 [2024-07-25 00:41:38.583062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120163 ] 00:15:16.337 [2024-07-25 00:41:38.765086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.337 [2024-07-25 00:41:38.966118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.337 [2024-07-25 00:41:38.966219] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:16.338 [2024-07-25 00:41:38.966284] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:16.338 [2024-07-25 00:41:38.966312] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:16.907 00:15:16.907 real 0m0.904s 00:15:16.907 user 0m0.664s 00:15:16.907 sys 0m0.140s 00:15:16.907 00:41:39 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:16.907 00:41:39 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:16.907 ************************************ 00:15:16.907 END TEST bdev_json_nonenclosed 00:15:16.907 ************************************ 00:15:16.907 00:41:39 blockdev_general -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:16.907 00:41:39 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:15:16.907 00:41:39 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:16.907 00:41:39 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:15:16.907 ************************************ 00:15:16.907 START TEST bdev_json_nonarray 00:15:16.907 ************************************ 00:15:16.907 00:41:39 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:16.907 [2024-07-25 00:41:39.537075] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:15:16.907 [2024-07-25 00:41:39.537237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120201 ] 00:15:17.166 [2024-07-25 00:41:39.695590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.424 [2024-07-25 00:41:39.891858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.424 [2024-07-25 00:41:39.891966] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:15:17.424 [2024-07-25 00:41:39.892019] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:17.424 [2024-07-25 00:41:39.892045] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:17.683 00:15:17.683 real 0m0.851s 00:15:17.683 user 0m0.610s 00:15:17.683 sys 0m0.141s 00:15:17.683 ************************************ 00:15:17.683 END TEST bdev_json_nonarray 00:15:17.683 ************************************ 00:15:17.683 00:41:40 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:17.683 00:41:40 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:17.941 00:41:40 blockdev_general -- bdev/blockdev.sh@786 -- # [[ bdev == bdev ]] 00:15:17.941 00:41:40 blockdev_general -- bdev/blockdev.sh@787 -- # run_test bdev_qos qos_test_suite '' 00:15:17.941 00:41:40 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:17.941 00:41:40 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:17.941 00:41:40 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:15:17.941 ************************************ 00:15:17.941 START TEST bdev_qos 00:15:17.941 ************************************ 00:15:17.942 00:41:40 blockdev_general.bdev_qos -- common/autotest_common.sh@1123 -- # qos_test_suite '' 00:15:17.942 00:41:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # QOS_PID=120239 00:15:17.942 Process qos testing pid: 120239 00:15:17.942 00:41:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # echo 'Process qos testing pid: 120239' 00:15:17.942 00:41:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:15:17.942 00:41:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@444 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:15:17.942 00:41:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # waitforlisten 120239 00:15:17.942 00:41:40 blockdev_general.bdev_qos -- common/autotest_common.sh@829 -- # '[' -z 120239 ']' 00:15:17.942 00:41:40 blockdev_general.bdev_qos -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.942 00:41:40 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:17.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.942 00:41:40 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:17.942 00:41:40 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:17.942 00:41:40 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:17.942 [2024-07-25 00:41:40.460114] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:15:17.942 [2024-07-25 00:41:40.460281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120239 ] 00:15:18.201 [2024-07-25 00:41:40.625319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.460 [2024-07-25 00:41:40.906993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@862 -- # return 0 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@450 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:19.028 Malloc_0 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # waitforbdev Malloc_0 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_0 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:19.028 [ 00:15:19.028 { 00:15:19.028 "name": "Malloc_0", 00:15:19.028 "aliases": [ 00:15:19.028 "04869963-81cc-47c3-b73a-470a3e81471c" 00:15:19.028 ], 00:15:19.028 "product_name": "Malloc disk", 00:15:19.028 "block_size": 512, 00:15:19.028 "num_blocks": 262144, 00:15:19.028 "uuid": "04869963-81cc-47c3-b73a-470a3e81471c", 00:15:19.028 "assigned_rate_limits": { 00:15:19.028 "rw_ios_per_sec": 0, 00:15:19.028 "rw_mbytes_per_sec": 0, 00:15:19.028 "r_mbytes_per_sec": 0, 00:15:19.028 "w_mbytes_per_sec": 0 00:15:19.028 }, 00:15:19.028 "claimed": false, 00:15:19.028 "zoned": false, 00:15:19.028 "supported_io_types": { 00:15:19.028 "read": true, 00:15:19.028 "write": true, 00:15:19.028 "unmap": true, 00:15:19.028 "flush": true, 00:15:19.028 
"reset": true, 00:15:19.028 "nvme_admin": false, 00:15:19.028 "nvme_io": false, 00:15:19.028 "nvme_io_md": false, 00:15:19.028 "write_zeroes": true, 00:15:19.028 "zcopy": true, 00:15:19.028 "get_zone_info": false, 00:15:19.028 "zone_management": false, 00:15:19.028 "zone_append": false, 00:15:19.028 "compare": false, 00:15:19.028 "compare_and_write": false, 00:15:19.028 "abort": true, 00:15:19.028 "seek_hole": false, 00:15:19.028 "seek_data": false, 00:15:19.028 "copy": true, 00:15:19.028 "nvme_iov_md": false 00:15:19.028 }, 00:15:19.028 "memory_domains": [ 00:15:19.028 { 00:15:19.028 "dma_device_id": "system", 00:15:19.028 "dma_device_type": 1 00:15:19.028 }, 00:15:19.028 { 00:15:19.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.028 "dma_device_type": 2 00:15:19.028 } 00:15:19.028 ], 00:15:19.028 "driver_specific": {} 00:15:19.028 } 00:15:19.028 ] 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # rpc_cmd bdev_null_create Null_1 128 512 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:19.028 Null_1 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # waitforbdev Null_1 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Null_1 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:19.028 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.029 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:15:19.029 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.029 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:19.029 [ 00:15:19.029 { 00:15:19.029 "name": "Null_1", 00:15:19.029 "aliases": [ 00:15:19.029 "8b52430e-51cb-4e07-afc9-02aff82d48fa" 00:15:19.029 ], 00:15:19.029 "product_name": "Null disk", 00:15:19.029 "block_size": 512, 00:15:19.029 "num_blocks": 262144, 00:15:19.029 "uuid": "8b52430e-51cb-4e07-afc9-02aff82d48fa", 00:15:19.029 "assigned_rate_limits": { 00:15:19.029 "rw_ios_per_sec": 0, 00:15:19.029 "rw_mbytes_per_sec": 0, 00:15:19.029 "r_mbytes_per_sec": 0, 00:15:19.029 "w_mbytes_per_sec": 0 00:15:19.029 }, 00:15:19.029 "claimed": false, 00:15:19.029 "zoned": false, 00:15:19.029 "supported_io_types": { 00:15:19.029 "read": true, 00:15:19.029 "write": true, 00:15:19.029 "unmap": false, 00:15:19.029 "flush": 
false, 00:15:19.029 "reset": true, 00:15:19.029 "nvme_admin": false, 00:15:19.029 "nvme_io": false, 00:15:19.029 "nvme_io_md": false, 00:15:19.029 "write_zeroes": true, 00:15:19.029 "zcopy": false, 00:15:19.029 "get_zone_info": false, 00:15:19.029 "zone_management": false, 00:15:19.029 "zone_append": false, 00:15:19.029 "compare": false, 00:15:19.029 "compare_and_write": false, 00:15:19.029 "abort": true, 00:15:19.029 "seek_hole": false, 00:15:19.029 "seek_data": false, 00:15:19.029 "copy": false, 00:15:19.029 "nvme_iov_md": false 00:15:19.029 }, 00:15:19.029 "driver_specific": {} 00:15:19.029 } 00:15:19.029 ] 00:15:19.029 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.029 00:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:15:19.029 00:41:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # qos_function_test 00:15:19.029 00:41:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@409 -- # local qos_lower_iops_limit=1000 00:15:19.029 00:41:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_bw_limit=2 00:15:19.029 00:41:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local io_result=0 00:15:19.029 00:41:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local iops_limit=0 00:15:19.029 00:41:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local bw_limit=0 00:15:19.029 00:41:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@455 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:19.029 00:41:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # get_io_result IOPS Malloc_0 00:15:19.029 00:41:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:15:19.029 00:41:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:15:19.029 00:41:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:15:19.288 00:41:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:15:19.288 00:41:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:15:19.288 00:41:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:15:19.288 Running I/O for 60 seconds... 
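The qos_function_test sequence that follows is easier to read in condensed form: the suite first measures the unthrottled IOPS of Malloc_0 with scripts/iostat.py, derives an IOPS cap from that baseline (81179 IOPS measured, 20000 chosen in this run), applies the cap with bdev_set_qos_limit, then re-measures and requires the throttled rate to land within about ±10% of the cap (18000-22000 below). A sketch of that logic, with the quarter-of-baseline derivation inferred from the numbers in this trace rather than quoted from blockdev.sh, is:

    # baseline: column 2 of the iostat.py output line for Malloc_0 (IOPS)
    io_result=81179
    # cap at roughly a quarter of the baseline, rounded down to a multiple of 1000
    iops_limit=$(( io_result / 4 / 1000 * 1000 ))             # -> 20000
    rpc_cmd bdev_set_qos_limit --rw_ios_per_sec "$iops_limit" Malloc_0
    # run_qos_test re-measures and accepts any result within ~10% of the cap
    lower_limit=$(( iops_limit * 90 / 100 ))                  # 18000
    upper_limit=$(( iops_limit * 110 / 100 ))                 # 22000

The same measure/derive/apply/verify pattern repeats afterwards for the bandwidth cap on Null_1 (--rw_mbytes_per_sec) and the read-only bandwidth cap on Malloc_0 (--r_mbytes_per_sec).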
00:15:24.570 00:41:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 81179.46 324717.83 0.00 0.00 327680.00 0.00 0.00 ' 00:15:24.570 00:41:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:15:24.570 00:41:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:15:24.570 00:41:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # iostat_result=81179.46 00:15:24.570 00:41:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 81179 00:15:24.570 00:41:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # io_result=81179 00:15:24.570 00:41:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@417 -- # iops_limit=20000 00:15:24.570 00:41:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # '[' 20000 -gt 1000 ']' 00:15:24.570 00:41:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@421 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 20000 Malloc_0 00:15:24.570 00:41:46 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.570 00:41:46 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:24.570 00:41:46 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.570 00:41:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # run_test bdev_qos_iops run_qos_test 20000 IOPS Malloc_0 00:15:24.570 00:41:46 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:24.570 00:41:46 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:24.570 00:41:46 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:24.570 ************************************ 00:15:24.570 START TEST bdev_qos_iops 00:15:24.570 ************************************ 00:15:24.570 00:41:46 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1123 -- # run_qos_test 20000 IOPS Malloc_0 00:15:24.570 00:41:46 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@388 -- # local qos_limit=20000 00:15:24.570 00:41:46 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_result=0 00:15:24.570 00:41:46 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # get_io_result IOPS Malloc_0 00:15:24.570 00:41:46 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:15:24.570 00:41:46 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:15:24.570 00:41:46 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local iostat_result 00:15:24.570 00:41:46 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # tail -1 00:15:24.570 00:41:46 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:15:24.570 00:41:46 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:15:29.843 00:41:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 20000.43 80001.73 0.00 0.00 81520.00 0.00 0.00 ' 00:15:29.843 00:41:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:15:29.843 00:41:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:15:29.843 00:41:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # iostat_result=20000.43 00:15:29.843 00:41:52 blockdev_general.bdev_qos.bdev_qos_iops -- 
bdev/blockdev.sh@384 -- # echo 20000 00:15:29.843 00:41:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # qos_result=20000 00:15:29.843 00:41:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # '[' IOPS = BANDWIDTH ']' 00:15:29.843 00:41:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@395 -- # lower_limit=18000 00:15:29.843 00:41:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # upper_limit=22000 00:15:29.843 00:41:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 20000 -lt 18000 ']' 00:15:29.843 00:41:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 20000 -gt 22000 ']' 00:15:29.843 00:15:29.843 real 0m5.225s 00:15:29.843 user 0m0.124s 00:15:29.843 sys 0m0.038s 00:15:29.843 ************************************ 00:15:29.843 END TEST bdev_qos_iops 00:15:29.843 ************************************ 00:15:29.843 00:41:52 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:29.843 00:41:52 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:15:29.843 00:41:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # get_io_result BANDWIDTH Null_1 00:15:29.843 00:41:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:15:29.843 00:41:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:15:29.843 00:41:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:15:29.843 00:41:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:15:29.843 00:41:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Null_1 00:15:29.843 00:41:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:15:35.119 00:41:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 30154.33 120617.33 0.00 0.00 122880.00 0.00 0.00 ' 00:15:35.119 00:41:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:15:35.119 00:41:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:15:35.119 00:41:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:15:35.119 00:41:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # iostat_result=122880.00 00:15:35.119 00:41:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 122880 00:15:35.119 00:41:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # bw_limit=122880 00:15:35.119 00:41:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=12 00:15:35.119 00:41:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # '[' 12 -lt 2 ']' 00:15:35.119 00:41:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@431 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 12 Null_1 00:15:35.119 00:41:57 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.119 00:41:57 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:35.119 00:41:57 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.119 00:41:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # run_test bdev_qos_bw run_qos_test 12 BANDWIDTH Null_1 00:15:35.119 00:41:57 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:35.119 00:41:57 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:15:35.119 00:41:57 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:35.119 ************************************ 00:15:35.119 START TEST bdev_qos_bw 00:15:35.119 ************************************ 00:15:35.119 00:41:57 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1123 -- # run_qos_test 12 BANDWIDTH Null_1 00:15:35.119 00:41:57 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@388 -- # local qos_limit=12 00:15:35.119 00:41:57 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:15:35.119 00:41:57 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Null_1 00:15:35.119 00:41:57 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:15:35.119 00:41:57 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:15:35.119 00:41:57 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:15:35.119 00:41:57 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:15:35.119 00:41:57 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # grep Null_1 00:15:35.119 00:41:57 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # tail -1 00:15:40.393 00:42:02 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 3071.17 12284.66 0.00 0.00 12536.00 0.00 0.00 ' 00:15:40.393 00:42:02 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:15:40.393 00:42:02 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:15:40.393 00:42:02 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:15:40.393 00:42:02 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # iostat_result=12536.00 00:15:40.393 00:42:02 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@384 -- # echo 12536 00:15:40.393 00:42:02 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # qos_result=12536 00:15:40.393 00:42:02 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:15:40.393 00:42:02 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # qos_limit=12288 00:15:40.393 00:42:02 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@395 -- # lower_limit=11059 00:15:40.393 00:42:02 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # upper_limit=13516 00:15:40.393 00:42:02 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 12536 -lt 11059 ']' 00:15:40.393 00:42:02 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 12536 -gt 13516 ']' 00:15:40.393 ************************************ 00:15:40.393 END TEST bdev_qos_bw 00:15:40.393 ************************************ 00:15:40.393 00:15:40.393 real 0m5.241s 00:15:40.393 user 0m0.127s 00:15:40.393 sys 0m0.030s 00:15:40.393 00:42:02 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:40.393 00:42:02 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:15:40.393 00:42:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@435 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:15:40.393 00:42:02 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 
00:15:40.393 00:42:02 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:40.393 00:42:02 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.393 00:42:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:15:40.393 00:42:02 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:40.393 00:42:02 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:40.393 00:42:02 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:40.393 ************************************ 00:15:40.393 START TEST bdev_qos_ro_bw 00:15:40.393 ************************************ 00:15:40.393 00:42:02 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1123 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:15:40.393 00:42:02 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@388 -- # local qos_limit=2 00:15:40.393 00:42:02 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:15:40.393 00:42:02 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Malloc_0 00:15:40.393 00:42:02 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:15:40.393 00:42:02 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:15:40.393 00:42:02 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:15:40.393 00:42:02 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:15:40.393 00:42:02 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:15:40.393 00:42:02 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # tail -1 00:15:45.670 00:42:07 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 510.98 2043.91 0.00 0.00 2064.00 0.00 0.00 ' 00:15:45.670 00:42:07 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:15:45.670 00:42:07 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:15:45.670 00:42:07 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:15:45.670 00:42:07 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # iostat_result=2064.00 00:15:45.670 00:42:07 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@384 -- # echo 2064 00:15:45.670 00:42:07 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # qos_result=2064 00:15:45.670 00:42:07 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:15:45.670 00:42:07 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # qos_limit=2048 00:15:45.670 00:42:07 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@395 -- # lower_limit=1843 00:15:45.670 00:42:07 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # upper_limit=2252 00:15:45.670 00:42:07 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2064 -lt 1843 ']' 00:15:45.670 ************************************ 00:15:45.670 END TEST bdev_qos_ro_bw 00:15:45.670 ************************************ 00:15:45.670 00:42:07 blockdev_general.bdev_qos.bdev_qos_ro_bw -- 
bdev/blockdev.sh@399 -- # '[' 2064 -gt 2252 ']' 00:15:45.670 00:15:45.670 real 0m5.182s 00:15:45.670 user 0m0.121s 00:15:45.670 sys 0m0.033s 00:15:45.670 00:42:07 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:45.670 00:42:07 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:15:45.670 00:42:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:15:45.670 00:42:08 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.670 00:42:08 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:46.238 00:42:08 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.238 00:42:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_null_delete Null_1 00:15:46.238 00:42:08 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.238 00:42:08 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:46.238 00:15:46.238 Latency(us) 00:15:46.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.238 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:15:46.238 Malloc_0 : 26.72 27392.05 107.00 0.00 0.00 9256.88 1739.82 503316.48 00:15:46.238 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:15:46.238 Null_1 : 26.95 28204.59 110.17 0.00 0.00 9058.00 592.94 210713.84 00:15:46.238 =================================================================================================================== 00:15:46.238 Total : 55596.64 217.17 0.00 0.00 9155.57 592.94 503316.48 00:15:46.238 0 00:15:46.238 00:42:08 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.238 00:42:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # killprocess 120239 00:15:46.238 00:42:08 blockdev_general.bdev_qos -- common/autotest_common.sh@948 -- # '[' -z 120239 ']' 00:15:46.238 00:42:08 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # kill -0 120239 00:15:46.238 00:42:08 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # uname 00:15:46.238 00:42:08 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:46.238 00:42:08 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 120239 00:15:46.238 killing process with pid 120239 00:15:46.238 Received shutdown signal, test time was about 26.985986 seconds 00:15:46.238 00:15:46.238 Latency(us) 00:15:46.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.238 =================================================================================================================== 00:15:46.238 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:46.238 00:42:08 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:46.238 00:42:08 blockdev_general.bdev_qos -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:46.238 00:42:08 blockdev_general.bdev_qos -- common/autotest_common.sh@966 -- # echo 'killing process with pid 120239' 00:15:46.238 00:42:08 blockdev_general.bdev_qos -- common/autotest_common.sh@967 -- # kill 120239 00:15:46.238 00:42:08 blockdev_general.bdev_qos -- common/autotest_common.sh@972 -- # wait 120239 00:15:47.613 ************************************ 00:15:47.613 END TEST bdev_qos 00:15:47.613 
************************************ 00:15:47.613 00:42:10 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # trap - SIGINT SIGTERM EXIT 00:15:47.613 00:15:47.613 real 0m29.863s 00:15:47.613 user 0m30.648s 00:15:47.613 sys 0m0.786s 00:15:47.613 00:42:10 blockdev_general.bdev_qos -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:47.613 00:42:10 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:47.872 00:42:10 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:15:47.872 00:42:10 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:47.872 00:42:10 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:47.872 00:42:10 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:15:47.872 ************************************ 00:15:47.872 START TEST bdev_qd_sampling 00:15:47.872 ************************************ 00:15:47.872 00:42:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1123 -- # qd_sampling_test_suite '' 00:15:47.872 00:42:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@537 -- # QD_DEV=Malloc_QD 00:15:47.872 00:42:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # QD_PID=120716 00:15:47.872 00:42:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@539 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:15:47.872 00:42:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # echo 'Process bdev QD sampling period testing pid: 120716' 00:15:47.872 Process bdev QD sampling period testing pid: 120716 00:15:47.872 00:42:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:15:47.872 00:42:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # waitforlisten 120716 00:15:47.872 00:42:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@829 -- # '[' -z 120716 ']' 00:15:47.872 00:42:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.872 00:42:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:47.872 00:42:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.872 00:42:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:47.872 00:42:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:47.872 [2024-07-25 00:42:10.386546] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:15:47.872 [2024-07-25 00:42:10.386920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120716 ] 00:15:48.136 [2024-07-25 00:42:10.550947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:48.137 [2024-07-25 00:42:10.748758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.137 [2024-07-25 00:42:10.748761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.706 00:42:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:48.706 00:42:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@862 -- # return 0 00:15:48.706 00:42:11 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@545 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:15:48.706 00:42:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.706 00:42:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:48.964 Malloc_QD 00:15:48.965 00:42:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.965 00:42:11 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # waitforbdev Malloc_QD 00:15:48.965 00:42:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_QD 00:15:48.965 00:42:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:48.965 00:42:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local i 00:15:48.965 00:42:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:48.965 00:42:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:48.965 00:42:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:15:48.965 00:42:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.965 00:42:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:48.965 00:42:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.965 00:42:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:15:48.965 00:42:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.965 00:42:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:48.965 [ 00:15:48.965 { 00:15:48.965 "name": "Malloc_QD", 00:15:48.965 "aliases": [ 00:15:48.965 "4f11842d-8fbd-49c9-a49d-ba8f2ddf6830" 00:15:48.965 ], 00:15:48.965 "product_name": "Malloc disk", 00:15:48.965 "block_size": 512, 00:15:48.965 "num_blocks": 262144, 00:15:48.965 "uuid": "4f11842d-8fbd-49c9-a49d-ba8f2ddf6830", 00:15:48.965 "assigned_rate_limits": { 00:15:48.965 "rw_ios_per_sec": 0, 00:15:48.965 "rw_mbytes_per_sec": 0, 00:15:48.965 "r_mbytes_per_sec": 0, 00:15:48.965 "w_mbytes_per_sec": 0 00:15:48.965 }, 00:15:48.965 "claimed": false, 00:15:48.965 "zoned": false, 00:15:48.965 "supported_io_types": { 00:15:48.965 "read": true, 00:15:48.965 "write": true, 00:15:48.965 "unmap": true, 00:15:48.965 "flush": true, 00:15:48.965 "reset": true, 00:15:48.965 "nvme_admin": 
false, 00:15:48.965 "nvme_io": false, 00:15:48.965 "nvme_io_md": false, 00:15:48.965 "write_zeroes": true, 00:15:48.965 "zcopy": true, 00:15:48.965 "get_zone_info": false, 00:15:48.965 "zone_management": false, 00:15:48.965 "zone_append": false, 00:15:48.965 "compare": false, 00:15:48.965 "compare_and_write": false, 00:15:48.965 "abort": true, 00:15:48.965 "seek_hole": false, 00:15:48.965 "seek_data": false, 00:15:48.965 "copy": true, 00:15:48.965 "nvme_iov_md": false 00:15:48.965 }, 00:15:48.965 "memory_domains": [ 00:15:48.965 { 00:15:48.965 "dma_device_id": "system", 00:15:48.965 "dma_device_type": 1 00:15:48.965 }, 00:15:48.965 { 00:15:48.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.965 "dma_device_type": 2 00:15:48.965 } 00:15:48.965 ], 00:15:48.965 "driver_specific": {} 00:15:48.965 } 00:15:48.965 ] 00:15:48.965 00:42:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.965 00:42:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@905 -- # return 0 00:15:48.965 00:42:11 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # sleep 2 00:15:48.965 00:42:11 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@548 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:48.965 Running I/O for 5 seconds... 00:15:50.879 00:42:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # qd_sampling_function_test Malloc_QD 00:15:50.879 00:42:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@518 -- # local bdev_name=Malloc_QD 00:15:50.879 00:42:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local sampling_period=10 00:15:50.879 00:42:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local iostats 00:15:50.879 00:42:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@522 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:15:50.879 00:42:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.879 00:42:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:50.879 00:42:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.879 00:42:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:15:50.879 00:42:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.879 00:42:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:50.879 00:42:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.879 00:42:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # iostats='{ 00:15:50.879 "tick_rate": 2100000000, 00:15:50.879 "ticks": 1851247204130, 00:15:50.879 "bdevs": [ 00:15:50.879 { 00:15:50.879 "name": "Malloc_QD", 00:15:50.879 "bytes_read": 920687104, 00:15:50.879 "num_read_ops": 224771, 00:15:50.879 "bytes_written": 0, 00:15:50.879 "num_write_ops": 0, 00:15:50.879 "bytes_unmapped": 0, 00:15:50.879 "num_unmap_ops": 0, 00:15:50.879 "bytes_copied": 0, 00:15:50.879 "num_copy_ops": 0, 00:15:50.879 "read_latency_ticks": 2088361270134, 00:15:50.879 "max_read_latency_ticks": 10431480, 00:15:50.879 "min_read_latency_ticks": 320236, 00:15:50.879 "write_latency_ticks": 0, 00:15:50.879 "max_write_latency_ticks": 0, 00:15:50.879 "min_write_latency_ticks": 0, 00:15:50.879 "unmap_latency_ticks": 0, 00:15:50.879 "max_unmap_latency_ticks": 0, 00:15:50.879 
"min_unmap_latency_ticks": 0, 00:15:50.879 "copy_latency_ticks": 0, 00:15:50.879 "max_copy_latency_ticks": 0, 00:15:50.879 "min_copy_latency_ticks": 0, 00:15:50.879 "io_error": {}, 00:15:50.879 "queue_depth_polling_period": 10, 00:15:50.879 "queue_depth": 512, 00:15:50.879 "io_time": 30, 00:15:50.879 "weighted_io_time": 15360 00:15:50.879 } 00:15:50.879 ] 00:15:50.879 }' 00:15:50.879 00:42:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:15:50.879 00:42:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # qd_sampling_period=10 00:15:50.879 00:42:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 == null ']' 00:15:50.879 00:42:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 -ne 10 ']' 00:15:50.879 00:42:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@552 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:15:50.879 00:42:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.879 00:42:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:50.879 00:15:50.879 Latency(us) 00:15:50.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.879 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:15:50.879 Malloc_QD : 2.02 57671.59 225.28 0.00 0.00 4428.41 1045.46 4993.22 00:15:50.879 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:15:50.879 Malloc_QD : 2.02 57891.70 226.14 0.00 0.00 4411.75 713.87 4868.39 00:15:50.879 =================================================================================================================== 00:15:50.879 Total : 115563.29 451.42 0.00 0.00 4420.06 713.87 4993.22 00:15:51.140 0 00:15:51.140 00:42:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.140 00:42:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # killprocess 120716 00:15:51.140 00:42:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@948 -- # '[' -z 120716 ']' 00:15:51.140 00:42:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # kill -0 120716 00:15:51.140 00:42:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # uname 00:15:51.140 00:42:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:51.140 00:42:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 120716 00:15:51.140 00:42:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:51.140 00:42:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:51.140 00:42:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@966 -- # echo 'killing process with pid 120716' 00:15:51.140 killing process with pid 120716 00:15:51.140 00:42:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@967 -- # kill 120716 00:15:51.140 Received shutdown signal, test time was about 2.189897 seconds 00:15:51.140 00:15:51.140 Latency(us) 00:15:51.140 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:51.140 =================================================================================================================== 00:15:51.140 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:51.140 00:42:13 
blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@972 -- # wait 120716 00:15:53.042 ************************************ 00:15:53.042 END TEST bdev_qd_sampling 00:15:53.042 ************************************ 00:15:53.043 00:42:15 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # trap - SIGINT SIGTERM EXIT 00:15:53.043 00:15:53.043 real 0m4.837s 00:15:53.043 user 0m8.929s 00:15:53.043 sys 0m0.375s 00:15:53.043 00:42:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:53.043 00:42:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:53.043 00:42:15 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_error error_test_suite '' 00:15:53.043 00:42:15 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:53.043 00:42:15 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:53.043 00:42:15 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:15:53.043 ************************************ 00:15:53.043 START TEST bdev_error 00:15:53.043 ************************************ 00:15:53.043 00:42:15 blockdev_general.bdev_error -- common/autotest_common.sh@1123 -- # error_test_suite '' 00:15:53.043 00:42:15 blockdev_general.bdev_error -- bdev/blockdev.sh@465 -- # DEV_1=Dev_1 00:15:53.043 00:42:15 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_2=Dev_2 00:15:53.043 00:42:15 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # ERR_DEV=EE_Dev_1 00:15:53.043 00:42:15 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # ERR_PID=120813 00:15:53.043 00:42:15 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # echo 'Process error testing pid: 120813' 00:15:53.043 00:42:15 blockdev_general.bdev_error -- bdev/blockdev.sh@470 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:15:53.043 Process error testing pid: 120813 00:15:53.043 00:42:15 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # waitforlisten 120813 00:15:53.043 00:42:15 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 120813 ']' 00:15:53.043 00:42:15 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.043 00:42:15 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:53.043 00:42:15 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.043 00:42:15 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:53.043 00:42:15 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:53.043 [2024-07-25 00:42:15.321738] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:15:53.043 [2024-07-25 00:42:15.321971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120813 ] 00:15:53.043 [2024-07-25 00:42:15.500628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.302 [2024-07-25 00:42:15.698111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.560 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:53.560 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:15:53.560 00:42:16 blockdev_general.bdev_error -- bdev/blockdev.sh@475 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:15:53.560 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.560 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:53.820 Dev_1 00:15:53.820 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.820 00:42:16 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # waitforbdev Dev_1 00:15:53.820 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:15:53.820 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:53.820 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:15:53.820 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:53.820 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:53.820 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:15:53.820 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.820 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:53.820 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.820 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:15:53.820 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.820 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:53.820 [ 00:15:53.820 { 00:15:53.820 "name": "Dev_1", 00:15:53.820 "aliases": [ 00:15:53.820 "a7515e5e-d6e4-47b4-bc36-aebe9c11f1ae" 00:15:53.820 ], 00:15:53.820 "product_name": "Malloc disk", 00:15:53.820 "block_size": 512, 00:15:53.820 "num_blocks": 262144, 00:15:53.820 "uuid": "a7515e5e-d6e4-47b4-bc36-aebe9c11f1ae", 00:15:53.820 "assigned_rate_limits": { 00:15:53.820 "rw_ios_per_sec": 0, 00:15:53.820 "rw_mbytes_per_sec": 0, 00:15:53.820 "r_mbytes_per_sec": 0, 00:15:53.820 "w_mbytes_per_sec": 0 00:15:53.820 }, 00:15:53.820 "claimed": false, 00:15:53.820 "zoned": false, 00:15:53.820 "supported_io_types": { 00:15:53.820 "read": true, 00:15:53.820 "write": true, 00:15:53.820 "unmap": true, 00:15:53.820 "flush": true, 00:15:53.820 "reset": true, 00:15:53.820 "nvme_admin": false, 00:15:53.820 "nvme_io": false, 00:15:53.820 "nvme_io_md": false, 00:15:53.820 "write_zeroes": true, 00:15:53.820 "zcopy": true, 00:15:53.820 "get_zone_info": false, 00:15:53.820 "zone_management": false, 00:15:53.820 "zone_append": false, 
00:15:53.820 "compare": false, 00:15:53.820 "compare_and_write": false, 00:15:53.820 "abort": true, 00:15:53.820 "seek_hole": false, 00:15:53.820 "seek_data": false, 00:15:53.820 "copy": true, 00:15:53.820 "nvme_iov_md": false 00:15:53.820 }, 00:15:53.820 "memory_domains": [ 00:15:53.820 { 00:15:53.820 "dma_device_id": "system", 00:15:53.820 "dma_device_type": 1 00:15:53.820 }, 00:15:53.820 { 00:15:53.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.820 "dma_device_type": 2 00:15:53.820 } 00:15:53.820 ], 00:15:53.820 "driver_specific": {} 00:15:53.820 } 00:15:53.820 ] 00:15:53.820 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.820 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:15:53.820 00:42:16 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_error_create Dev_1 00:15:53.820 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.820 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:53.820 true 00:15:53.820 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.820 00:42:16 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:15:53.820 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.820 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:54.079 Dev_2 00:15:54.079 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.079 00:42:16 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # waitforbdev Dev_2 00:15:54.079 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:15:54.079 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:54.079 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:15:54.079 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:54.079 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:54.079 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:15:54.079 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.079 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:54.079 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.079 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:15:54.079 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.079 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:54.079 [ 00:15:54.079 { 00:15:54.079 "name": "Dev_2", 00:15:54.079 "aliases": [ 00:15:54.079 "6d60f377-ec65-473b-aa45-fbcb684b67d3" 00:15:54.079 ], 00:15:54.079 "product_name": "Malloc disk", 00:15:54.079 "block_size": 512, 00:15:54.079 "num_blocks": 262144, 00:15:54.079 "uuid": "6d60f377-ec65-473b-aa45-fbcb684b67d3", 00:15:54.079 "assigned_rate_limits": { 00:15:54.079 "rw_ios_per_sec": 0, 00:15:54.079 "rw_mbytes_per_sec": 0, 00:15:54.079 "r_mbytes_per_sec": 0, 00:15:54.079 "w_mbytes_per_sec": 0 00:15:54.079 }, 00:15:54.079 "claimed": 
false, 00:15:54.079 "zoned": false, 00:15:54.079 "supported_io_types": { 00:15:54.079 "read": true, 00:15:54.079 "write": true, 00:15:54.079 "unmap": true, 00:15:54.079 "flush": true, 00:15:54.079 "reset": true, 00:15:54.079 "nvme_admin": false, 00:15:54.079 "nvme_io": false, 00:15:54.079 "nvme_io_md": false, 00:15:54.079 "write_zeroes": true, 00:15:54.079 "zcopy": true, 00:15:54.079 "get_zone_info": false, 00:15:54.079 "zone_management": false, 00:15:54.079 "zone_append": false, 00:15:54.079 "compare": false, 00:15:54.079 "compare_and_write": false, 00:15:54.079 "abort": true, 00:15:54.079 "seek_hole": false, 00:15:54.079 "seek_data": false, 00:15:54.079 "copy": true, 00:15:54.079 "nvme_iov_md": false 00:15:54.079 }, 00:15:54.079 "memory_domains": [ 00:15:54.079 { 00:15:54.079 "dma_device_id": "system", 00:15:54.079 "dma_device_type": 1 00:15:54.079 }, 00:15:54.079 { 00:15:54.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.079 "dma_device_type": 2 00:15:54.079 } 00:15:54.079 ], 00:15:54.079 "driver_specific": {} 00:15:54.079 } 00:15:54.079 ] 00:15:54.079 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.079 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:15:54.079 00:42:16 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:15:54.079 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.079 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:54.079 00:42:16 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.079 00:42:16 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # sleep 1 00:15:54.079 00:42:16 blockdev_general.bdev_error -- bdev/blockdev.sh@482 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:15:54.079 Running I/O for 5 seconds... 00:15:55.016 Process is existed as continue on error is set. Pid: 120813 00:15:55.016 00:42:17 blockdev_general.bdev_error -- bdev/blockdev.sh@486 -- # kill -0 120813 00:15:55.016 00:42:17 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # echo 'Process is existed as continue on error is set. 
Pid: 120813' 00:15:55.016 00:42:17 blockdev_general.bdev_error -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:15:55.016 00:42:17 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.016 00:42:17 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:55.016 00:42:17 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.016 00:42:17 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_malloc_delete Dev_1 00:15:55.016 00:42:17 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.016 00:42:17 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:55.274 Timeout while waiting for response: 00:15:55.274 00:15:55.274 00:15:55.533 00:42:17 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.533 00:42:17 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # sleep 5 00:15:59.719 00:15:59.719 Latency(us) 00:15:59.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.719 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:15:59.719 EE_Dev_1 : 0.90 48697.53 190.22 5.55 0.00 326.04 115.08 589.04 00:15:59.719 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:15:59.720 Dev_2 : 5.00 96998.36 378.90 0.00 0.00 162.47 51.93 347528.05 00:15:59.720 =================================================================================================================== 00:15:59.720 Total : 145695.89 569.12 5.55 0.00 176.05 51.93 347528.05 00:16:00.656 00:42:22 blockdev_general.bdev_error -- bdev/blockdev.sh@498 -- # killprocess 120813 00:16:00.656 00:42:22 blockdev_general.bdev_error -- common/autotest_common.sh@948 -- # '[' -z 120813 ']' 00:16:00.656 00:42:22 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # kill -0 120813 00:16:00.656 00:42:22 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # uname 00:16:00.656 00:42:22 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:00.656 00:42:22 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 120813 00:16:00.656 killing process with pid 120813 00:16:00.656 Received shutdown signal, test time was about 5.000000 seconds 00:16:00.656 00:16:00.656 Latency(us) 00:16:00.656 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:00.656 =================================================================================================================== 00:16:00.656 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:00.656 00:42:22 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:00.656 00:42:22 blockdev_general.bdev_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:00.656 00:42:22 blockdev_general.bdev_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 120813' 00:16:00.656 00:42:22 blockdev_general.bdev_error -- common/autotest_common.sh@967 -- # kill 120813 00:16:00.656 00:42:22 blockdev_general.bdev_error -- common/autotest_common.sh@972 -- # wait 120813 00:16:02.034 Process error testing pid: 120933 00:16:02.035 00:42:24 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # ERR_PID=120933 00:16:02.035 00:42:24 blockdev_general.bdev_error -- bdev/blockdev.sh@501 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w 
randread -t 5 '' 00:16:02.035 00:42:24 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # echo 'Process error testing pid: 120933' 00:16:02.035 00:42:24 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # waitforlisten 120933 00:16:02.035 00:42:24 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 120933 ']' 00:16:02.035 00:42:24 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.035 00:42:24 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:02.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.035 00:42:24 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.035 00:42:24 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:02.035 00:42:24 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:16:02.035 [2024-07-25 00:42:24.622549] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:16:02.035 [2024-07-25 00:42:24.622706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120933 ] 00:16:02.294 [2024-07-25 00:42:24.777754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.553 [2024-07-25 00:42:24.973576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.122 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:03.122 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:16:03.122 00:42:25 blockdev_general.bdev_error -- bdev/blockdev.sh@506 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:16:03.122 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.122 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:16:03.122 Dev_1 00:16:03.122 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.122 00:42:25 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # waitforbdev Dev_1 00:16:03.122 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:16:03.122 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:03.122 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:16:03.122 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:03.122 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:03.122 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:16:03.122 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.122 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:16:03.122 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.122 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:16:03.122 00:42:25 blockdev_general.bdev_error -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.122 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:16:03.122 [ 00:16:03.122 { 00:16:03.122 "name": "Dev_1", 00:16:03.122 "aliases": [ 00:16:03.122 "00f4df78-a375-40b3-b1a5-fe76b0a0e614" 00:16:03.122 ], 00:16:03.122 "product_name": "Malloc disk", 00:16:03.122 "block_size": 512, 00:16:03.122 "num_blocks": 262144, 00:16:03.122 "uuid": "00f4df78-a375-40b3-b1a5-fe76b0a0e614", 00:16:03.122 "assigned_rate_limits": { 00:16:03.122 "rw_ios_per_sec": 0, 00:16:03.122 "rw_mbytes_per_sec": 0, 00:16:03.122 "r_mbytes_per_sec": 0, 00:16:03.122 "w_mbytes_per_sec": 0 00:16:03.122 }, 00:16:03.122 "claimed": false, 00:16:03.122 "zoned": false, 00:16:03.122 "supported_io_types": { 00:16:03.122 "read": true, 00:16:03.122 "write": true, 00:16:03.122 "unmap": true, 00:16:03.122 "flush": true, 00:16:03.122 "reset": true, 00:16:03.122 "nvme_admin": false, 00:16:03.122 "nvme_io": false, 00:16:03.122 "nvme_io_md": false, 00:16:03.122 "write_zeroes": true, 00:16:03.122 "zcopy": true, 00:16:03.122 "get_zone_info": false, 00:16:03.122 "zone_management": false, 00:16:03.122 "zone_append": false, 00:16:03.122 "compare": false, 00:16:03.122 "compare_and_write": false, 00:16:03.122 "abort": true, 00:16:03.122 "seek_hole": false, 00:16:03.122 "seek_data": false, 00:16:03.122 "copy": true, 00:16:03.122 "nvme_iov_md": false 00:16:03.122 }, 00:16:03.122 "memory_domains": [ 00:16:03.122 { 00:16:03.122 "dma_device_id": "system", 00:16:03.122 "dma_device_type": 1 00:16:03.122 }, 00:16:03.122 { 00:16:03.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.122 "dma_device_type": 2 00:16:03.122 } 00:16:03.122 ], 00:16:03.122 "driver_specific": {} 00:16:03.122 } 00:16:03.122 ] 00:16:03.122 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.122 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:16:03.122 00:42:25 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_error_create Dev_1 00:16:03.122 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.122 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:16:03.122 true 00:16:03.122 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.122 00:42:25 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:16:03.122 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.122 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:16:03.382 Dev_2 00:16:03.382 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.382 00:42:25 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # waitforbdev Dev_2 00:16:03.382 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:16:03.382 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:03.382 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:16:03.382 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:03.382 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:03.382 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd 
bdev_wait_for_examine 00:16:03.382 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.382 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:16:03.382 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.382 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:16:03.382 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.382 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:16:03.382 [ 00:16:03.382 { 00:16:03.382 "name": "Dev_2", 00:16:03.382 "aliases": [ 00:16:03.382 "169366f5-ec9d-40c1-b684-13d9b0ac6c31" 00:16:03.382 ], 00:16:03.382 "product_name": "Malloc disk", 00:16:03.382 "block_size": 512, 00:16:03.382 "num_blocks": 262144, 00:16:03.382 "uuid": "169366f5-ec9d-40c1-b684-13d9b0ac6c31", 00:16:03.382 "assigned_rate_limits": { 00:16:03.382 "rw_ios_per_sec": 0, 00:16:03.382 "rw_mbytes_per_sec": 0, 00:16:03.382 "r_mbytes_per_sec": 0, 00:16:03.382 "w_mbytes_per_sec": 0 00:16:03.382 }, 00:16:03.382 "claimed": false, 00:16:03.382 "zoned": false, 00:16:03.382 "supported_io_types": { 00:16:03.382 "read": true, 00:16:03.382 "write": true, 00:16:03.382 "unmap": true, 00:16:03.382 "flush": true, 00:16:03.382 "reset": true, 00:16:03.382 "nvme_admin": false, 00:16:03.382 "nvme_io": false, 00:16:03.382 "nvme_io_md": false, 00:16:03.382 "write_zeroes": true, 00:16:03.382 "zcopy": true, 00:16:03.382 "get_zone_info": false, 00:16:03.382 "zone_management": false, 00:16:03.382 "zone_append": false, 00:16:03.382 "compare": false, 00:16:03.382 "compare_and_write": false, 00:16:03.382 "abort": true, 00:16:03.382 "seek_hole": false, 00:16:03.382 "seek_data": false, 00:16:03.382 "copy": true, 00:16:03.382 "nvme_iov_md": false 00:16:03.382 }, 00:16:03.382 "memory_domains": [ 00:16:03.382 { 00:16:03.382 "dma_device_id": "system", 00:16:03.382 "dma_device_type": 1 00:16:03.382 }, 00:16:03.382 { 00:16:03.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.382 "dma_device_type": 2 00:16:03.382 } 00:16:03.382 ], 00:16:03.382 "driver_specific": {} 00:16:03.382 } 00:16:03.382 ] 00:16:03.382 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.382 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:16:03.382 00:42:25 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:16:03.382 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.382 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:16:03.382 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.383 00:42:25 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # NOT wait 120933 00:16:03.383 00:42:25 blockdev_general.bdev_error -- bdev/blockdev.sh@513 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:16:03.383 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@648 -- # local es=0 00:16:03.383 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # valid_exec_arg wait 120933 00:16:03.383 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@636 -- # local arg=wait 00:16:03.383 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:16:03.383 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # type -t wait 00:16:03.383 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:03.383 00:42:25 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # wait 120933 00:16:03.383 Running I/O for 5 seconds... 00:16:03.383 task offset: 170968 on job bdev=EE_Dev_1 fails 00:16:03.383 00:16:03.383 Latency(us) 00:16:03.383 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:03.383 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:16:03.383 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:16:03.383 EE_Dev_1 : 0.00 34591.19 135.12 7861.64 0.00 301.64 114.10 546.13 00:16:03.383 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:16:03.383 Dev_2 : 0.00 23238.93 90.78 0.00 0.00 489.99 111.66 901.12 00:16:03.383 =================================================================================================================== 00:16:03.383 Total : 57830.12 225.90 7861.64 0.00 403.79 111.66 901.12 00:16:03.383 [2024-07-25 00:42:25.950479] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:03.383 request: 00:16:03.383 { 00:16:03.383 "method": "perform_tests", 00:16:03.383 "req_id": 1 00:16:03.383 } 00:16:03.383 Got JSON-RPC error response 00:16:03.383 response: 00:16:03.383 { 00:16:03.383 "code": -32603, 00:16:03.383 "message": "bdevperf failed with error Operation not permitted" 00:16:03.383 } 00:16:05.290 00:42:27 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # es=255 00:16:05.290 00:42:27 blockdev_general.bdev_error -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:05.290 00:42:27 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # es=127 00:16:05.290 00:42:27 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # case "$es" in 00:16:05.290 00:42:27 blockdev_general.bdev_error -- common/autotest_common.sh@668 -- # es=1 00:16:05.290 00:42:27 blockdev_general.bdev_error -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:05.290 00:16:05.290 real 0m12.640s 00:16:05.290 user 0m12.742s 00:16:05.290 sys 0m0.863s 00:16:05.290 00:42:27 blockdev_general.bdev_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:05.290 00:42:27 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:16:05.290 ************************************ 00:16:05.290 END TEST bdev_error 00:16:05.290 ************************************ 00:16:05.290 00:42:27 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_stat stat_test_suite '' 00:16:05.290 00:42:27 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:05.290 00:42:27 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:05.290 00:42:27 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:16:05.551 ************************************ 00:16:05.551 START TEST bdev_stat 00:16:05.551 ************************************ 00:16:05.551 00:42:27 blockdev_general.bdev_stat -- common/autotest_common.sh@1123 -- # stat_test_suite '' 00:16:05.551 00:42:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@591 -- # STAT_DEV=Malloc_STAT 00:16:05.551 00:42:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # STAT_PID=120998 00:16:05.551 Process Bdev IO statistics testing pid: 120998 00:16:05.551 00:42:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # echo 
'Process Bdev IO statistics testing pid: 120998' 00:16:05.551 00:42:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:16:05.552 00:42:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@594 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:16:05.552 00:42:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # waitforlisten 120998 00:16:05.552 00:42:27 blockdev_general.bdev_stat -- common/autotest_common.sh@829 -- # '[' -z 120998 ']' 00:16:05.552 00:42:27 blockdev_general.bdev_stat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.552 00:42:27 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:05.552 00:42:27 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.552 00:42:27 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:05.552 00:42:27 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:16:05.552 [2024-07-25 00:42:28.012402] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:16:05.552 [2024-07-25 00:42:28.012581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120998 ] 00:16:05.552 [2024-07-25 00:42:28.175396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:05.818 [2024-07-25 00:42:28.376542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.818 [2024-07-25 00:42:28.376545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.387 00:42:28 blockdev_general.bdev_stat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:06.387 00:42:28 blockdev_general.bdev_stat -- common/autotest_common.sh@862 -- # return 0 00:16:06.387 00:42:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@600 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:16:06.387 00:42:28 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.387 00:42:28 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:16:06.646 Malloc_STAT 00:16:06.646 00:42:29 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.646 00:42:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # waitforbdev Malloc_STAT 00:16:06.646 00:42:29 blockdev_general.bdev_stat -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_STAT 00:16:06.646 00:42:29 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:06.646 00:42:29 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local i 00:16:06.646 00:42:29 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:06.646 00:42:29 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:06.646 00:42:29 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:16:06.646 00:42:29 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.646 00:42:29 
blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:16:06.646 00:42:29 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.646 00:42:29 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:16:06.646 00:42:29 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.646 00:42:29 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:16:06.646 [ 00:16:06.646 { 00:16:06.646 "name": "Malloc_STAT", 00:16:06.646 "aliases": [ 00:16:06.646 "5fbe7799-87a1-4761-8071-f2212f06c79a" 00:16:06.646 ], 00:16:06.646 "product_name": "Malloc disk", 00:16:06.646 "block_size": 512, 00:16:06.646 "num_blocks": 262144, 00:16:06.646 "uuid": "5fbe7799-87a1-4761-8071-f2212f06c79a", 00:16:06.646 "assigned_rate_limits": { 00:16:06.646 "rw_ios_per_sec": 0, 00:16:06.646 "rw_mbytes_per_sec": 0, 00:16:06.646 "r_mbytes_per_sec": 0, 00:16:06.646 "w_mbytes_per_sec": 0 00:16:06.646 }, 00:16:06.646 "claimed": false, 00:16:06.646 "zoned": false, 00:16:06.646 "supported_io_types": { 00:16:06.646 "read": true, 00:16:06.646 "write": true, 00:16:06.646 "unmap": true, 00:16:06.646 "flush": true, 00:16:06.646 "reset": true, 00:16:06.646 "nvme_admin": false, 00:16:06.646 "nvme_io": false, 00:16:06.646 "nvme_io_md": false, 00:16:06.646 "write_zeroes": true, 00:16:06.646 "zcopy": true, 00:16:06.646 "get_zone_info": false, 00:16:06.646 "zone_management": false, 00:16:06.646 "zone_append": false, 00:16:06.646 "compare": false, 00:16:06.646 "compare_and_write": false, 00:16:06.646 "abort": true, 00:16:06.646 "seek_hole": false, 00:16:06.646 "seek_data": false, 00:16:06.646 "copy": true, 00:16:06.646 "nvme_iov_md": false 00:16:06.646 }, 00:16:06.646 "memory_domains": [ 00:16:06.646 { 00:16:06.646 "dma_device_id": "system", 00:16:06.646 "dma_device_type": 1 00:16:06.646 }, 00:16:06.646 { 00:16:06.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.646 "dma_device_type": 2 00:16:06.646 } 00:16:06.646 ], 00:16:06.646 "driver_specific": {} 00:16:06.646 } 00:16:06.646 ] 00:16:06.646 00:42:29 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.647 00:42:29 blockdev_general.bdev_stat -- common/autotest_common.sh@905 -- # return 0 00:16:06.647 00:42:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # sleep 2 00:16:06.647 00:42:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@603 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:06.647 Running I/O for 10 seconds... 
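While that 10-second run is in flight, stat_function_test samples Malloc_STAT twice with bdev_get_iostat and once per channel, then asserts that the summed per-channel read count lands between the two whole-bdev snapshots. A sketch of that consistency check, reusing the RPCs and jq filters shown in the trace; the RPC= shorthand and the final arithmetic test are the only additions:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  io_count1=$($RPC bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
  per_ch=$($RPC bdev_get_iostat -b Malloc_STAT -c)
  ch_sum=$(( $(jq -r '.channels[0].num_read_ops' <<< "$per_ch") + $(jq -r '.channels[1].num_read_ops' <<< "$per_ch") ))
  io_count2=$($RPC bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
  # the channel sum must not fall below the first snapshot nor exceed the second
  [ "$ch_sum" -ge "$io_count1" ] && [ "$ch_sum" -le "$io_count2" ]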
00:16:08.553 00:42:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # stat_function_test Malloc_STAT 00:16:08.553 00:42:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@558 -- # local bdev_name=Malloc_STAT 00:16:08.553 00:42:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local iostats 00:16:08.553 00:42:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local io_count1 00:16:08.553 00:42:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count2 00:16:08.553 00:42:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local iostats_per_channel 00:16:08.553 00:42:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local io_count_per_channel1 00:16:08.553 00:42:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel2 00:16:08.553 00:42:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel_all=0 00:16:08.553 00:42:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:16:08.553 00:42:31 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.553 00:42:31 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:16:08.553 00:42:31 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.553 00:42:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # iostats='{ 00:16:08.553 "tick_rate": 2100000000, 00:16:08.553 "ticks": 1888381838598, 00:16:08.553 "bdevs": [ 00:16:08.553 { 00:16:08.553 "name": "Malloc_STAT", 00:16:08.553 "bytes_read": 911249920, 00:16:08.553 "num_read_ops": 222467, 00:16:08.553 "bytes_written": 0, 00:16:08.553 "num_write_ops": 0, 00:16:08.553 "bytes_unmapped": 0, 00:16:08.553 "num_unmap_ops": 0, 00:16:08.553 "bytes_copied": 0, 00:16:08.553 "num_copy_ops": 0, 00:16:08.553 "read_latency_ticks": 2072200376766, 00:16:08.553 "max_read_latency_ticks": 20629210, 00:16:08.553 "min_read_latency_ticks": 254980, 00:16:08.553 "write_latency_ticks": 0, 00:16:08.553 "max_write_latency_ticks": 0, 00:16:08.553 "min_write_latency_ticks": 0, 00:16:08.553 "unmap_latency_ticks": 0, 00:16:08.553 "max_unmap_latency_ticks": 0, 00:16:08.553 "min_unmap_latency_ticks": 0, 00:16:08.553 "copy_latency_ticks": 0, 00:16:08.553 "max_copy_latency_ticks": 0, 00:16:08.553 "min_copy_latency_ticks": 0, 00:16:08.554 "io_error": {} 00:16:08.554 } 00:16:08.554 ] 00:16:08.554 }' 00:16:08.554 00:42:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # jq -r '.bdevs[0].num_read_ops' 00:16:08.554 00:42:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # io_count1=222467 00:16:08.554 00:42:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:16:08.554 00:42:31 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.554 00:42:31 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:16:08.554 00:42:31 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.554 00:42:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # iostats_per_channel='{ 00:16:08.554 "tick_rate": 2100000000, 00:16:08.554 "ticks": 1888521591644, 00:16:08.554 "name": "Malloc_STAT", 00:16:08.554 "channels": [ 00:16:08.554 { 00:16:08.554 "thread_id": 2, 00:16:08.554 "bytes_read": 476053504, 00:16:08.554 "num_read_ops": 116224, 00:16:08.554 "bytes_written": 0, 00:16:08.554 "num_write_ops": 0, 00:16:08.554 "bytes_unmapped": 0, 00:16:08.554 "num_unmap_ops": 0, 
00:16:08.554 "bytes_copied": 0, 00:16:08.554 "num_copy_ops": 0, 00:16:08.554 "read_latency_ticks": 1071806811512, 00:16:08.554 "max_read_latency_ticks": 10416422, 00:16:08.554 "min_read_latency_ticks": 6634418, 00:16:08.554 "write_latency_ticks": 0, 00:16:08.554 "max_write_latency_ticks": 0, 00:16:08.554 "min_write_latency_ticks": 0, 00:16:08.554 "unmap_latency_ticks": 0, 00:16:08.554 "max_unmap_latency_ticks": 0, 00:16:08.554 "min_unmap_latency_ticks": 0, 00:16:08.554 "copy_latency_ticks": 0, 00:16:08.554 "max_copy_latency_ticks": 0, 00:16:08.554 "min_copy_latency_ticks": 0 00:16:08.554 }, 00:16:08.554 { 00:16:08.554 "thread_id": 3, 00:16:08.554 "bytes_read": 467664896, 00:16:08.554 "num_read_ops": 114176, 00:16:08.554 "bytes_written": 0, 00:16:08.554 "num_write_ops": 0, 00:16:08.554 "bytes_unmapped": 0, 00:16:08.554 "num_unmap_ops": 0, 00:16:08.554 "bytes_copied": 0, 00:16:08.554 "num_copy_ops": 0, 00:16:08.554 "read_latency_ticks": 1073332784262, 00:16:08.554 "max_read_latency_ticks": 20629210, 00:16:08.554 "min_read_latency_ticks": 7765684, 00:16:08.554 "write_latency_ticks": 0, 00:16:08.554 "max_write_latency_ticks": 0, 00:16:08.554 "min_write_latency_ticks": 0, 00:16:08.554 "unmap_latency_ticks": 0, 00:16:08.554 "max_unmap_latency_ticks": 0, 00:16:08.554 "min_unmap_latency_ticks": 0, 00:16:08.554 "copy_latency_ticks": 0, 00:16:08.554 "max_copy_latency_ticks": 0, 00:16:08.554 "min_copy_latency_ticks": 0 00:16:08.554 } 00:16:08.554 ] 00:16:08.554 }' 00:16:08.813 00:42:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # jq -r '.channels[0].num_read_ops' 00:16:08.813 00:42:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # io_count_per_channel1=116224 00:16:08.813 00:42:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel_all=116224 00:16:08.813 00:42:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # jq -r '.channels[1].num_read_ops' 00:16:08.813 00:42:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel2=114176 00:16:08.814 00:42:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel_all=230400 00:16:08.814 00:42:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:16:08.814 00:42:31 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.814 00:42:31 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:16:08.814 00:42:31 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.814 00:42:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # iostats='{ 00:16:08.814 "tick_rate": 2100000000, 00:16:08.814 "ticks": 1888773040548, 00:16:08.814 "bdevs": [ 00:16:08.814 { 00:16:08.814 "name": "Malloc_STAT", 00:16:08.814 "bytes_read": 1000378880, 00:16:08.814 "num_read_ops": 244227, 00:16:08.814 "bytes_written": 0, 00:16:08.814 "num_write_ops": 0, 00:16:08.814 "bytes_unmapped": 0, 00:16:08.814 "num_unmap_ops": 0, 00:16:08.814 "bytes_copied": 0, 00:16:08.814 "num_copy_ops": 0, 00:16:08.814 "read_latency_ticks": 2272787006006, 00:16:08.814 "max_read_latency_ticks": 20629210, 00:16:08.814 "min_read_latency_ticks": 254980, 00:16:08.814 "write_latency_ticks": 0, 00:16:08.814 "max_write_latency_ticks": 0, 00:16:08.814 "min_write_latency_ticks": 0, 00:16:08.814 "unmap_latency_ticks": 0, 00:16:08.814 "max_unmap_latency_ticks": 0, 00:16:08.814 "min_unmap_latency_ticks": 0, 00:16:08.814 "copy_latency_ticks": 0, 00:16:08.814 "max_copy_latency_ticks": 0, 00:16:08.814 
"min_copy_latency_ticks": 0, 00:16:08.814 "io_error": {} 00:16:08.814 } 00:16:08.814 ] 00:16:08.814 }' 00:16:08.814 00:42:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # jq -r '.bdevs[0].num_read_ops' 00:16:08.814 00:42:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # io_count2=244227 00:16:08.814 00:42:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 230400 -lt 222467 ']' 00:16:08.814 00:42:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 230400 -gt 244227 ']' 00:16:08.814 00:42:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@607 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:16:08.814 00:42:31 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.814 00:42:31 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:16:08.814 00:16:08.814 Latency(us) 00:16:08.814 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:08.814 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:16:08.814 Malloc_STAT : 2.18 58151.47 227.15 0.00 0.00 4392.27 1006.45 4962.01 00:16:08.814 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:16:08.814 Malloc_STAT : 2.19 57279.00 223.75 0.00 0.00 4458.87 733.38 9861.61 00:16:08.814 =================================================================================================================== 00:16:08.814 Total : 115430.47 450.90 0.00 0.00 4425.33 733.38 9861.61 00:16:09.072 0 00:16:09.072 00:42:31 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.072 00:42:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # killprocess 120998 00:16:09.072 00:42:31 blockdev_general.bdev_stat -- common/autotest_common.sh@948 -- # '[' -z 120998 ']' 00:16:09.072 00:42:31 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # kill -0 120998 00:16:09.072 00:42:31 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # uname 00:16:09.072 00:42:31 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:09.072 00:42:31 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 120998 00:16:09.072 00:42:31 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:09.072 00:42:31 blockdev_general.bdev_stat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:09.072 killing process with pid 120998 00:16:09.072 00:42:31 blockdev_general.bdev_stat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 120998' 00:16:09.072 00:42:31 blockdev_general.bdev_stat -- common/autotest_common.sh@967 -- # kill 120998 00:16:09.072 Received shutdown signal, test time was about 2.340625 seconds 00:16:09.072 00:16:09.072 Latency(us) 00:16:09.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:09.072 =================================================================================================================== 00:16:09.072 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:09.072 00:42:31 blockdev_general.bdev_stat -- common/autotest_common.sh@972 -- # wait 120998 00:16:10.449 00:42:32 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # trap - SIGINT SIGTERM EXIT 00:16:10.449 00:16:10.449 real 0m5.051s 00:16:10.449 user 0m9.536s 00:16:10.449 sys 0m0.434s 00:16:10.449 00:42:32 blockdev_general.bdev_stat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:10.449 00:42:32 
blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:16:10.449 ************************************ 00:16:10.449 END TEST bdev_stat 00:16:10.449 ************************************ 00:16:10.449 00:42:33 blockdev_general -- bdev/blockdev.sh@793 -- # [[ bdev == gpt ]] 00:16:10.449 00:42:33 blockdev_general -- bdev/blockdev.sh@797 -- # [[ bdev == crypto_sw ]] 00:16:10.449 00:42:33 blockdev_general -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:16:10.449 00:42:33 blockdev_general -- bdev/blockdev.sh@810 -- # cleanup 00:16:10.449 00:42:33 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:10.449 00:42:33 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:10.449 00:42:33 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:16:10.449 00:42:33 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:16:10.449 00:42:33 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:16:10.449 00:42:33 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:16:10.449 00:16:10.449 real 2m37.061s 00:16:10.449 user 6m3.830s 00:16:10.449 sys 0m25.327s 00:16:10.449 00:42:33 blockdev_general -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:10.449 00:42:33 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:16:10.449 ************************************ 00:16:10.449 END TEST blockdev_general 00:16:10.449 ************************************ 00:16:10.708 00:42:33 -- spdk/autotest.sh@190 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:16:10.708 00:42:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:10.708 00:42:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:10.708 00:42:33 -- common/autotest_common.sh@10 -- # set +x 00:16:10.708 ************************************ 00:16:10.708 START TEST bdev_raid 00:16:10.708 ************************************ 00:16:10.708 00:42:33 bdev_raid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:16:10.708 * Looking for test storage... 
00:16:10.708 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:10.708 00:42:33 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:10.708 00:42:33 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:16:10.708 00:42:33 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:16:10.708 00:42:33 bdev_raid -- bdev/bdev_raid.sh@851 -- # mkdir -p /raidtest 00:16:10.708 00:42:33 bdev_raid -- bdev/bdev_raid.sh@852 -- # trap 'cleanup; exit 1' EXIT 00:16:10.708 00:42:33 bdev_raid -- bdev/bdev_raid.sh@854 -- # base_blocklen=512 00:16:10.708 00:42:33 bdev_raid -- bdev/bdev_raid.sh@856 -- # uname -s 00:16:10.708 00:42:33 bdev_raid -- bdev/bdev_raid.sh@856 -- # '[' Linux = Linux ']' 00:16:10.708 00:42:33 bdev_raid -- bdev/bdev_raid.sh@856 -- # modprobe -n nbd 00:16:10.708 00:42:33 bdev_raid -- bdev/bdev_raid.sh@857 -- # has_nbd=true 00:16:10.708 00:42:33 bdev_raid -- bdev/bdev_raid.sh@858 -- # modprobe nbd 00:16:10.708 00:42:33 bdev_raid -- bdev/bdev_raid.sh@859 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:16:10.708 00:42:33 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:10.708 00:42:33 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:10.708 00:42:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:10.708 ************************************ 00:16:10.708 START TEST raid_function_test_raid0 00:16:10.708 ************************************ 00:16:10.708 00:42:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1123 -- # raid_function_test raid0 00:16:10.708 00:42:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:16:10.708 00:42:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:16:10.708 00:42:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:16:10.708 00:42:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=121159 00:16:10.708 00:42:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 121159' 00:16:10.708 Process raid pid: 121159 00:16:10.708 00:42:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 121159 /var/tmp/spdk-raid.sock 00:16:10.708 00:42:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@829 -- # '[' -z 121159 ']' 00:16:10.708 00:42:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:10.708 00:42:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:10.708 00:42:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:10.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:10.708 00:42:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
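raid_function_test builds a two-disk raid0 bdev named "raid" (the construction RPCs are fed to rpc.py from a generated rpcs.txt, so they are not reproduced here), exports it through NBD, and checks it against a local reference file before the discard tests. A sketch of the NBD exposure and the first data pass, assembled only from commands the trace itself runs on the spdk-raid.sock socket:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC nbd_start_disk raid /dev/nbd0                                  # export the raid bdev as /dev/nbd0
  dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096      # 2 MiB reference pattern
  dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
  blockdev --flushbufs /dev/nbd0
  cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0                  # raid contents must match the reference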
00:16:10.708 00:42:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:10.708 00:42:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:16:10.708 [2024-07-25 00:42:33.360059] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:16:10.709 [2024-07-25 00:42:33.360270] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.967 [2024-07-25 00:42:33.537961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.225 [2024-07-25 00:42:33.740979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.483 [2024-07-25 00:42:33.948095] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:11.739 00:42:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:11.739 00:42:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # return 0 00:16:11.739 00:42:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:16:11.739 00:42:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_level=raid0 00:16:11.739 00:42:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:16:11.739 00:42:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # cat 00:16:11.739 00:42:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:16:11.996 [2024-07-25 00:42:34.580636] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:16:11.996 [2024-07-25 00:42:34.582580] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:16:11.996 [2024-07-25 00:42:34.582681] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:16:11.996 [2024-07-25 00:42:34.582694] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:11.996 [2024-07-25 00:42:34.582848] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:16:11.996 [2024-07-25 00:42:34.583177] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:16:11.996 [2024-07-25 00:42:34.583197] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000007280 00:16:11.996 [2024-07-25 00:42:34.583364] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.996 Base_1 00:16:11.996 Base_2 00:16:11.996 00:42:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:16:11.996 00:42:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:16:11.996 00:42:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:16:12.254 00:42:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:16:12.254 00:42:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:16:12.254 00:42:34 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:16:12.254 00:42:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:12.254 00:42:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:16:12.254 00:42:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:12.254 00:42:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:12.254 00:42:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:12.254 00:42:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:16:12.254 00:42:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:12.254 00:42:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:12.254 00:42:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:16:12.512 [2024-07-25 00:42:35.037701] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:16:12.512 /dev/nbd0 00:16:12.512 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:12.512 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:12.512 00:42:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:16:12.512 00:42:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@867 -- # local i 00:16:12.512 00:42:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:16:12.512 00:42:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:16:12.512 00:42:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:16:12.512 00:42:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # break 00:16:12.512 00:42:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:16:12.512 00:42:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:16:12.512 00:42:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:12.512 1+0 records in 00:16:12.512 1+0 records out 00:16:12.512 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400829 s, 10.2 MB/s 00:16:12.512 00:42:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:12.512 00:42:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # size=4096 00:16:12.512 00:42:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:12.512 00:42:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:16:12.512 00:42:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # return 0 00:16:12.512 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:12.512 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:12.512 00:42:35 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:16:12.512 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:12.512 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:16:12.770 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:12.770 { 00:16:12.770 "nbd_device": "/dev/nbd0", 00:16:12.770 "bdev_name": "raid" 00:16:12.770 } 00:16:12.770 ]' 00:16:12.770 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:12.770 { 00:16:12.770 "nbd_device": "/dev/nbd0", 00:16:12.770 "bdev_name": "raid" 00:16:12.770 } 00:16:12.770 ]' 00:16:12.770 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:13.029 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:16:13.029 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:16:13.029 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:13.029 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:16:13.029 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:16:13.029 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:16:13.029 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:16:13.029 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:16:13.029 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:16:13.029 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:16:13.029 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:13.029 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:16:13.029 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:16:13.029 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:16:13.029 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:16:13.029 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:16:13.029 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:16:13.029 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:16:13.029 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:16:13.029 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:16:13.029 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:16:13.029 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:16:13.029 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:16:13.029 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:16:13.029 00:42:35 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:16:13.029 4096+0 records in 00:16:13.029 4096+0 records out 00:16:13.029 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0229205 s, 91.5 MB/s 00:16:13.029 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:16:13.288 4096+0 records in 00:16:13.288 4096+0 records out 00:16:13.288 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.274915 s, 7.6 MB/s 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:16:13.288 128+0 records in 00:16:13.288 128+0 records out 00:16:13.288 65536 bytes (66 kB, 64 KiB) copied, 0.000929965 s, 70.5 MB/s 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:16:13.288 2035+0 records in 00:16:13.288 2035+0 records out 00:16:13.288 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0101888 s, 102 MB/s 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:16:13.288 456+0 records in 00:16:13.288 
456+0 records out 00:16:13.288 233472 bytes (233 kB, 228 KiB) copied, 0.0028096 s, 83.1 MB/s 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:13.288 00:42:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:16:13.548 00:42:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:13.548 [2024-07-25 00:42:36.061201] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.548 00:42:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:13.548 00:42:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:13.548 00:42:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:13.548 00:42:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:13.548 00:42:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:13.548 00:42:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:16:13.548 00:42:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:16:13.548 00:42:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:16:13.548 00:42:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:13.548 00:42:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:16:13.807 00:42:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:13.807 00:42:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:13.807 00:42:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:13.807 00:42:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:13.807 00:42:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:13.807 00:42:36 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:16:13.807 00:42:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:16:13.807 00:42:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:16:13.807 00:42:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:16:13.807 00:42:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0 00:16:13.807 00:42:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:16:13.807 00:42:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 121159 00:16:13.807 00:42:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@948 -- # '[' -z 121159 ']' 00:16:13.807 00:42:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # kill -0 121159 00:16:13.807 00:42:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@953 -- # uname 00:16:13.807 00:42:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:13.807 00:42:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 121159 00:16:13.807 00:42:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:13.807 00:42:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:13.807 00:42:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 121159' 00:16:13.807 killing process with pid 121159 00:16:13.807 00:42:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@967 -- # kill 121159 00:16:13.807 [2024-07-25 00:42:36.400201] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:13.807 [2024-07-25 00:42:36.400288] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:13.807 00:42:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # wait 121159 00:16:13.807 [2024-07-25 00:42:36.400334] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:13.807 [2024-07-25 00:42:36.400344] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid, state offline 00:16:14.069 [2024-07-25 00:42:36.612998] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:15.447 00:42:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0 00:16:15.447 00:16:15.447 real 0m4.688s 00:16:15.447 user 0m5.772s 00:16:15.447 sys 0m1.003s 00:16:15.447 00:42:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:15.447 00:42:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:16:15.447 ************************************ 00:16:15.447 END TEST raid_function_test_raid0 00:16:15.447 ************************************ 00:16:15.447 00:42:38 bdev_raid -- bdev/bdev_raid.sh@860 -- # run_test raid_function_test_concat raid_function_test concat 00:16:15.447 00:42:38 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:15.447 00:42:38 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:15.447 00:42:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:15.447 ************************************ 00:16:15.447 START TEST raid_function_test_concat 00:16:15.447 ************************************ 
00:16:15.447 00:42:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1123 -- # raid_function_test concat 00:16:15.447 00:42:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat 00:16:15.447 00:42:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:16:15.447 00:42:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:16:15.447 00:42:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=121325 00:16:15.447 Process raid pid: 121325 00:16:15.447 00:42:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 121325' 00:16:15.447 00:42:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:15.447 00:42:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 121325 /var/tmp/spdk-raid.sock 00:16:15.447 00:42:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@829 -- # '[' -z 121325 ']' 00:16:15.447 00:42:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:15.447 00:42:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:15.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:15.447 00:42:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:15.447 00:42:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:15.447 00:42:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:16:15.710 [2024-07-25 00:42:38.110756] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
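A note on the harness flow visible above: each raid function test starts its own bdev_svc application on a private RPC socket and only proceeds once that socket answers RPCs. Below is a minimal sketch of that handshake; the bdev_svc path, socket name and flags are copied from the trace, while the polling loop is an illustrative stand-in for the waitforlisten helper rather than its exact implementation.

# Sketch: bring up the SPDK app used by a test case and wait for its RPC socket.
SPDK=/home/vagrant/spdk_repo/spdk            # repo checkout path, as in the trace
SOCK=/var/tmp/spdk-raid.sock                 # per-test RPC socket, as in the trace

"$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 -L bdev_raid &
app_pid=$!

# Poll until the app responds; rpc_get_methods is a standard SPDK RPC.
for _ in $(seq 1 100); do
    "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done

# ... issue the test's RPCs against $SOCK, then stop the app.
kill "$app_pid"
wait "$app_pid" || true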
00:16:15.710 [2024-07-25 00:42:38.111465] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.710 [2024-07-25 00:42:38.296882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.969 [2024-07-25 00:42:38.562871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.228 [2024-07-25 00:42:38.769994] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.487 00:42:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:16.487 00:42:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # return 0 00:16:16.487 00:42:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat 00:16:16.487 00:42:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_level=concat 00:16:16.487 00:42:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:16:16.487 00:42:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat 00:16:16.487 00:42:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:16:16.746 [2024-07-25 00:42:39.264810] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:16:16.746 [2024-07-25 00:42:39.266983] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:16:16.747 [2024-07-25 00:42:39.267162] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:16:16.747 [2024-07-25 00:42:39.267269] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:16.747 [2024-07-25 00:42:39.267481] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:16:16.747 [2024-07-25 00:42:39.267835] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:16:16.747 [2024-07-25 00:42:39.267955] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000007280 00:16:16.747 [2024-07-25 00:42:39.268200] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.747 Base_1 00:16:16.747 Base_2 00:16:16.747 00:42:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:16:16.747 00:42:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:16:16.747 00:42:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:16:17.006 00:42:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:16:17.006 00:42:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:16:17.006 00:42:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:16:17.006 00:42:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:17.006 00:42:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:16:17.006 
00:42:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:17.006 00:42:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:17.006 00:42:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:17.006 00:42:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:16:17.006 00:42:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:17.006 00:42:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:17.006 00:42:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:16:17.265 [2024-07-25 00:42:39.708941] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:16:17.265 /dev/nbd0 00:16:17.265 00:42:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:17.265 00:42:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:17.265 00:42:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:16:17.265 00:42:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@867 -- # local i 00:16:17.265 00:42:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:16:17.265 00:42:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:16:17.265 00:42:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:16:17.265 00:42:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # break 00:16:17.265 00:42:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:16:17.265 00:42:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:16:17.265 00:42:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:17.265 1+0 records in 00:16:17.265 1+0 records out 00:16:17.265 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245655 s, 16.7 MB/s 00:16:17.265 00:42:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:17.265 00:42:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # size=4096 00:16:17.265 00:42:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:17.265 00:42:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:16:17.265 00:42:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # return 0 00:16:17.265 00:42:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:17.265 00:42:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:17.265 00:42:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:16:17.265 00:42:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:17.265 00:42:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:16:17.525 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:17.525 { 00:16:17.525 "nbd_device": "/dev/nbd0", 00:16:17.525 "bdev_name": "raid" 00:16:17.525 } 00:16:17.525 ]' 00:16:17.525 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:17.525 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:17.525 { 00:16:17.525 "nbd_device": "/dev/nbd0", 00:16:17.525 "bdev_name": "raid" 00:16:17.525 } 00:16:17.525 ]' 00:16:17.525 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:16:17.525 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:16:17.525 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:17.525 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:16:17.525 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:16:17.525 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1 00:16:17.525 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:16:17.525 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:16:17.525 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:16:17.525 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:16:17.525 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:17.525 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize 00:16:17.525 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:16:17.525 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:16:17.525 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:16:17.525 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # blksize=512 00:16:17.525 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:16:17.525 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:16:17.525 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:16:17.525 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:16:17.525 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:16:17.525 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:16:17.525 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:16:17.525 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:16:17.525 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:16:17.525 4096+0 records in 00:16:17.525 4096+0 records out 00:16:17.525 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0194104 s, 108 MB/s 00:16:17.525 00:42:40 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:16:17.784 4096+0 records in 00:16:17.784 4096+0 records out 00:16:17.784 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.263659 s, 8.0 MB/s 00:16:17.784 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:16:17.784 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:16:17.785 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:16:17.785 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:16:17.785 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:16:17.785 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:16:17.785 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:16:17.785 128+0 records in 00:16:17.785 128+0 records out 00:16:17.785 65536 bytes (66 kB, 64 KiB) copied, 0.000739145 s, 88.7 MB/s 00:16:17.785 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:16:18.044 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:16:18.044 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:16:18.044 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:16:18.044 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:16:18.044 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:16:18.044 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:16:18.044 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:16:18.044 2035+0 records in 00:16:18.044 2035+0 records out 00:16:18.044 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00848092 s, 123 MB/s 00:16:18.044 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:16:18.044 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:16:18.044 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:16:18.044 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:16:18.044 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:16:18.044 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:16:18.044 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:16:18.044 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:16:18.044 456+0 records in 00:16:18.044 456+0 records out 00:16:18.044 233472 bytes (233 kB, 228 KiB) copied, 0.00152597 s, 153 MB/s 00:16:18.044 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 
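The repeated dd, blkdiscard, blockdev and cmp calls above are the unmap verification: for each (offset, length) pair the test zeroes that region in the local reference file, discards the same region on the exported /dev/nbd0, flushes the device cache, and byte-compares the whole 2 MiB range, relying on discarded blocks reading back as zeroes. A condensed sketch of that loop, with the paths, block size and offset/length tables taken from the trace and everything else (error handling, device setup) assumed:

#!/usr/bin/env bash
# Sketch of the unmap/discard verification loop driven by the test above.
set -euo pipefail

nbd=/dev/nbd0                         # raid bdev exported via NBD by the test
ref=/raidtest/raidrandtest            # local reference file, as in the trace
blksize=512
rw_blk_num=4096                       # 4096 * 512 = 2097152 bytes under test
unmap_blk_offs=(0 1028 321)           # block offsets exercised by the test
unmap_blk_nums=(128 2035 456)         # block counts exercised by the test

# Seed the reference file and the device with identical random data.
dd if=/dev/urandom of="$ref" bs=$blksize count=$rw_blk_num
dd if="$ref" of="$nbd" bs=$blksize count=$rw_blk_num oflag=direct
blockdev --flushbufs "$nbd"
cmp -b -n $((rw_blk_num * blksize)) "$ref" "$nbd"

for i in "${!unmap_blk_offs[@]}"; do
    off=$(( unmap_blk_offs[i] * blksize ))
    len=$(( unmap_blk_nums[i] * blksize ))
    # Zero the region in the reference file, since unmapped blocks should read as zero.
    dd if=/dev/zero of="$ref" bs=$blksize seek="${unmap_blk_offs[i]}" \
        count="${unmap_blk_nums[i]}" conv=notrunc
    # Discard the same region on the device, flush, and compare the full range.
    blkdiscard -o "$off" -l "$len" "$nbd"
    blockdev --flushbufs "$nbd"
    cmp -b -n $((rw_blk_num * blksize)) "$ref" "$nbd"
done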
00:16:18.044 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:16:18.044 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:16:18.044 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:16:18.044 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:16:18.044 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0 00:16:18.044 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:16:18.044 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:18.044 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:18.044 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:18.044 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:16:18.044 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:18.044 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:16:18.304 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:18.304 [2024-07-25 00:42:40.715731] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.304 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:18.304 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:18.304 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:18.304 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:18.304 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:18.304 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:16:18.304 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:16:18.304 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:16:18.304 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:18.304 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:16:18.304 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:18.304 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:18.304 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:18.563 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:18.563 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:16:18.563 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:18.564 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:16:18.564 00:42:40 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:16:18.564 00:42:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:16:18.564 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0 00:16:18.564 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:16:18.564 00:42:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 121325 00:16:18.564 00:42:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@948 -- # '[' -z 121325 ']' 00:16:18.564 00:42:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # kill -0 121325 00:16:18.564 00:42:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@953 -- # uname 00:16:18.564 00:42:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:18.564 00:42:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 121325 00:16:18.564 00:42:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:18.564 00:42:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:18.564 00:42:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 121325' 00:16:18.564 killing process with pid 121325 00:16:18.564 00:42:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@967 -- # kill 121325 00:16:18.564 [2024-07-25 00:42:40.996683] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:18.564 00:42:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # wait 121325 00:16:18.564 [2024-07-25 00:42:40.996967] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:18.564 [2024-07-25 00:42:40.997053] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:18.564 [2024-07-25 00:42:40.997161] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid, state offline 00:16:18.564 [2024-07-25 00:42:41.204604] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:20.466 00:42:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0 00:16:20.466 00:16:20.466 real 0m4.564s 00:16:20.466 user 0m5.471s 00:16:20.466 sys 0m1.069s 00:16:20.466 00:42:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:20.466 00:42:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:16:20.466 ************************************ 00:16:20.466 END TEST raid_function_test_concat 00:16:20.466 ************************************ 00:16:20.466 00:42:42 bdev_raid -- bdev/bdev_raid.sh@863 -- # run_test raid0_resize_test raid0_resize_test 00:16:20.466 00:42:42 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:20.466 00:42:42 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:20.466 00:42:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:20.466 ************************************ 00:16:20.466 START TEST raid0_resize_test 00:16:20.466 ************************************ 00:16:20.466 00:42:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1123 -- # raid0_resize_test 00:16:20.466 00:42:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local 
blksize=512 00:16:20.466 00:42:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local bdev_size_mb=32 00:16:20.466 00:42:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local new_bdev_size_mb=64 00:16:20.466 00:42:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local blkcnt 00:16:20.466 00:42:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local raid_size_mb 00:16:20.466 00:42:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local new_raid_size_mb 00:16:20.466 00:42:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # raid_pid=121489 00:16:20.466 Process raid pid: 121489 00:16:20.466 00:42:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # echo 'Process raid pid: 121489' 00:16:20.466 00:42:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # waitforlisten 121489 /var/tmp/spdk-raid.sock 00:16:20.466 00:42:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@829 -- # '[' -z 121489 ']' 00:16:20.467 00:42:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:20.467 00:42:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:20.467 00:42:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:20.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:20.467 00:42:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:20.467 00:42:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:20.467 00:42:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.467 [2024-07-25 00:42:42.743388] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
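The trace that follows is the resize test issuing RPCs against that socket. Condensed into a sketch for readability (rpc.py path, socket, bdev names and sizes are copied from the trace; the inline jq checks stand in for the test's own assertions):

# Sketch of the RPC sequence raid0_resize_test runs below.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

$RPC bdev_null_create Base_1 32 512        # 32 MiB null bdev, 512 B blocks
$RPC bdev_null_create Base_2 32 512
$RPC bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid    # raid0 over both bases

$RPC bdev_null_resize Base_1 64            # grow one base bdev to 64 MiB
$RPC bdev_get_bdevs -b Raid | jq '.[].num_blocks'    # still 131072; raid not grown yet

$RPC bdev_null_resize Base_2 64            # grow the second base bdev
$RPC bdev_get_bdevs -b Raid | jq '.[].num_blocks'    # now 262144 (raid0 grew to 128 MiB)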
00:16:20.467 [2024-07-25 00:42:42.743629] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:20.467 [2024-07-25 00:42:42.924572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.725 [2024-07-25 00:42:43.124688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.725 [2024-07-25 00:42:43.334510] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:21.291 00:42:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:21.291 00:42:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # return 0 00:16:21.291 00:42:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:16:21.550 Base_1 00:16:21.550 00:42:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:16:21.550 Base_2 00:16:21.550 00:42:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:16:21.808 [2024-07-25 00:42:44.322749] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:16:21.808 [2024-07-25 00:42:44.324650] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:16:21.808 [2024-07-25 00:42:44.324726] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:16:21.809 [2024-07-25 00:42:44.324736] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:21.809 [2024-07-25 00:42:44.324878] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:21.809 [2024-07-25 00:42:44.325173] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:16:21.809 [2024-07-25 00:42:44.325192] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000007280 00:16:21.809 [2024-07-25 00:42:44.325356] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.809 00:42:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:16:22.102 [2024-07-25 00:42:44.506736] bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:16:22.102 [2024-07-25 00:42:44.506772] bdev_raid.c:2301:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:16:22.102 true 00:16:22.102 00:42:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:16:22.102 00:42:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # jq '.[].num_blocks' 00:16:22.385 [2024-07-25 00:42:44.738886] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:22.385 00:42:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # blkcnt=131072 00:16:22.385 00:42:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # raid_size_mb=64 00:16:22.385 00:42:44 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # '[' 64 '!=' 64 ']' 00:16:22.385 00:42:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:16:22.385 [2024-07-25 00:42:44.918771] bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:16:22.385 [2024-07-25 00:42:44.918805] bdev_raid.c:2301:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:16:22.385 [2024-07-25 00:42:44.918859] bdev_raid.c:2315:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:16:22.385 true 00:16:22.385 00:42:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:16:22.385 00:42:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # jq '.[].num_blocks' 00:16:22.644 [2024-07-25 00:42:45.214954] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:22.644 00:42:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # blkcnt=262144 00:16:22.644 00:42:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # raid_size_mb=128 00:16:22.644 00:42:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 128 '!=' 128 ']' 00:16:22.644 00:42:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@386 -- # killprocess 121489 00:16:22.644 00:42:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@948 -- # '[' -z 121489 ']' 00:16:22.644 00:42:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # kill -0 121489 00:16:22.644 00:42:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # uname 00:16:22.644 00:42:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:22.644 00:42:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 121489 00:16:22.644 00:42:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:22.644 killing process with pid 121489 00:16:22.644 00:42:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:22.644 00:42:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 121489' 00:16:22.644 00:42:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@967 -- # kill 121489 00:16:22.644 [2024-07-25 00:42:45.268054] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:22.644 00:42:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # wait 121489 00:16:22.644 [2024-07-25 00:42:45.268125] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:22.644 [2024-07-25 00:42:45.268170] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:22.644 [2024-07-25 00:42:45.268179] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Raid, state offline 00:16:22.644 [2024-07-25 00:42:45.268679] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:24.020 00:42:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@388 -- # return 0 00:16:24.020 00:16:24.020 real 0m3.987s 00:16:24.020 user 0m5.403s 00:16:24.020 sys 0m0.604s 00:16:24.020 00:42:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1124 -- # xtrace_disable 
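For the block-count checks above, the arithmetic is: a raid0 over two 32 MiB null bdevs with 512-byte blocks exposes 131072 blocks (64 MiB), and once both base bdevs are resized to 64 MiB it grows to 262144 blocks (128 MiB). A small sketch of the conversion the test performs (the exact expression used by bdev_raid.sh may differ):

# blkcnt comes from: rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid | jq '.[].num_blocks'
blksize=512
blkcnt=262144
raid_size_mb=$(( blkcnt * blksize / 1024 / 1024 ))   # 262144 * 512 / 1048576 = 128
[ "$raid_size_mb" -eq 128 ] && echo 'Raid grew from 64 MiB to 128 MiB after both base bdevs were resized'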
00:16:24.020 ************************************ 00:16:24.020 END TEST raid0_resize_test 00:16:24.020 ************************************ 00:16:24.020 00:42:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.279 00:42:46 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:16:24.279 00:42:46 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:16:24.279 00:42:46 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:16:24.279 00:42:46 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:24.279 00:42:46 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:24.279 00:42:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:24.279 ************************************ 00:16:24.279 START TEST raid_state_function_test 00:16:24.279 ************************************ 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 false 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # 
superblock_create_arg= 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=121580 00:16:24.279 Process raid pid: 121580 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 121580' 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 121580 /var/tmp/spdk-raid.sock 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 121580 ']' 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:24.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:24.279 00:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.279 [2024-07-25 00:42:46.776337] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:16:24.280 [2024-07-25 00:42:46.776488] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.538 [2024-07-25 00:42:46.942070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.797 [2024-07-25 00:42:47.210416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.797 [2024-07-25 00:42:47.421035] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:25.055 00:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:25.055 00:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:16:25.055 00:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:25.314 [2024-07-25 00:42:47.823437] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:25.314 [2024-07-25 00:42:47.823534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:25.314 [2024-07-25 00:42:47.823547] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:25.314 [2024-07-25 00:42:47.823573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:25.314 00:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:25.314 00:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:25.314 00:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:16:25.314 00:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:25.314 00:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:25.314 00:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:25.314 00:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:25.314 00:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:25.314 00:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:25.314 00:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:25.314 00:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.314 00:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.573 00:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:25.573 "name": "Existed_Raid", 00:16:25.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.573 "strip_size_kb": 64, 00:16:25.573 "state": "configuring", 00:16:25.573 "raid_level": "raid0", 00:16:25.573 "superblock": false, 00:16:25.573 "num_base_bdevs": 2, 00:16:25.573 "num_base_bdevs_discovered": 0, 00:16:25.573 "num_base_bdevs_operational": 2, 00:16:25.573 "base_bdevs_list": [ 00:16:25.573 { 00:16:25.573 "name": "BaseBdev1", 00:16:25.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.573 "is_configured": false, 00:16:25.573 "data_offset": 0, 00:16:25.573 "data_size": 0 00:16:25.573 }, 00:16:25.573 { 00:16:25.573 "name": "BaseBdev2", 00:16:25.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.573 "is_configured": false, 00:16:25.573 "data_offset": 0, 00:16:25.573 "data_size": 0 00:16:25.573 } 00:16:25.573 ] 00:16:25.573 }' 00:16:25.573 00:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:25.573 00:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.141 00:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:26.400 [2024-07-25 00:42:48.807572] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:26.400 [2024-07-25 00:42:48.807616] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:26.400 00:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:26.401 [2024-07-25 00:42:49.007578] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:26.401 [2024-07-25 00:42:49.007648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:26.401 [2024-07-25 00:42:49.007658] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:26.401 [2024-07-25 00:42:49.007681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:26.401 00:42:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:26.659 [2024-07-25 00:42:49.232561] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:26.659 BaseBdev1 00:16:26.659 00:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:26.660 00:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:26.660 00:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:26.660 00:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:26.660 00:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:26.660 00:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:26.660 00:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:26.918 00:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:27.178 [ 00:16:27.178 { 00:16:27.178 "name": "BaseBdev1", 00:16:27.178 "aliases": [ 00:16:27.178 "4e38ddca-46b6-486a-9868-4dfee4aef4b2" 00:16:27.178 ], 00:16:27.178 "product_name": "Malloc disk", 00:16:27.178 "block_size": 512, 00:16:27.178 "num_blocks": 65536, 00:16:27.178 "uuid": "4e38ddca-46b6-486a-9868-4dfee4aef4b2", 00:16:27.178 "assigned_rate_limits": { 00:16:27.178 "rw_ios_per_sec": 0, 00:16:27.178 "rw_mbytes_per_sec": 0, 00:16:27.178 "r_mbytes_per_sec": 0, 00:16:27.178 "w_mbytes_per_sec": 0 00:16:27.178 }, 00:16:27.178 "claimed": true, 00:16:27.178 "claim_type": "exclusive_write", 00:16:27.178 "zoned": false, 00:16:27.178 "supported_io_types": { 00:16:27.178 "read": true, 00:16:27.178 "write": true, 00:16:27.178 "unmap": true, 00:16:27.178 "flush": true, 00:16:27.178 "reset": true, 00:16:27.178 "nvme_admin": false, 00:16:27.178 "nvme_io": false, 00:16:27.178 "nvme_io_md": false, 00:16:27.178 "write_zeroes": true, 00:16:27.178 "zcopy": true, 00:16:27.178 "get_zone_info": false, 00:16:27.178 "zone_management": false, 00:16:27.178 "zone_append": false, 00:16:27.178 "compare": false, 00:16:27.178 "compare_and_write": false, 00:16:27.178 "abort": true, 00:16:27.178 "seek_hole": false, 00:16:27.178 "seek_data": false, 00:16:27.178 "copy": true, 00:16:27.178 "nvme_iov_md": false 00:16:27.178 }, 00:16:27.178 "memory_domains": [ 00:16:27.178 { 00:16:27.178 "dma_device_id": "system", 00:16:27.178 "dma_device_type": 1 00:16:27.178 }, 00:16:27.178 { 00:16:27.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.178 "dma_device_type": 2 00:16:27.178 } 00:16:27.178 ], 00:16:27.178 "driver_specific": {} 00:16:27.178 } 00:16:27.178 ] 00:16:27.178 00:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:27.178 00:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:27.178 00:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:27.178 00:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:27.178 00:42:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:27.178 00:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:27.178 00:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:27.178 00:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:27.178 00:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:27.178 00:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:27.178 00:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:27.178 00:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.178 00:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.437 00:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:27.437 "name": "Existed_Raid", 00:16:27.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.437 "strip_size_kb": 64, 00:16:27.437 "state": "configuring", 00:16:27.437 "raid_level": "raid0", 00:16:27.438 "superblock": false, 00:16:27.438 "num_base_bdevs": 2, 00:16:27.438 "num_base_bdevs_discovered": 1, 00:16:27.438 "num_base_bdevs_operational": 2, 00:16:27.438 "base_bdevs_list": [ 00:16:27.438 { 00:16:27.438 "name": "BaseBdev1", 00:16:27.438 "uuid": "4e38ddca-46b6-486a-9868-4dfee4aef4b2", 00:16:27.438 "is_configured": true, 00:16:27.438 "data_offset": 0, 00:16:27.438 "data_size": 65536 00:16:27.438 }, 00:16:27.438 { 00:16:27.438 "name": "BaseBdev2", 00:16:27.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.438 "is_configured": false, 00:16:27.438 "data_offset": 0, 00:16:27.438 "data_size": 0 00:16:27.438 } 00:16:27.438 ] 00:16:27.438 }' 00:16:27.438 00:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:27.438 00:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.006 00:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:28.006 [2024-07-25 00:42:50.612848] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:28.006 [2024-07-25 00:42:50.612922] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:28.006 00:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:28.265 [2024-07-25 00:42:50.876929] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:28.265 [2024-07-25 00:42:50.878882] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:28.265 [2024-07-25 00:42:50.878941] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:28.265 00:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:28.265 00:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:28.265 
00:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:28.265 00:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:28.265 00:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:28.265 00:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:28.265 00:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:28.265 00:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:28.265 00:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:28.265 00:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:28.265 00:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:28.265 00:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:28.265 00:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.266 00:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.525 00:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:28.525 "name": "Existed_Raid", 00:16:28.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.525 "strip_size_kb": 64, 00:16:28.525 "state": "configuring", 00:16:28.525 "raid_level": "raid0", 00:16:28.525 "superblock": false, 00:16:28.525 "num_base_bdevs": 2, 00:16:28.525 "num_base_bdevs_discovered": 1, 00:16:28.525 "num_base_bdevs_operational": 2, 00:16:28.525 "base_bdevs_list": [ 00:16:28.525 { 00:16:28.525 "name": "BaseBdev1", 00:16:28.525 "uuid": "4e38ddca-46b6-486a-9868-4dfee4aef4b2", 00:16:28.525 "is_configured": true, 00:16:28.525 "data_offset": 0, 00:16:28.525 "data_size": 65536 00:16:28.525 }, 00:16:28.525 { 00:16:28.525 "name": "BaseBdev2", 00:16:28.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.525 "is_configured": false, 00:16:28.525 "data_offset": 0, 00:16:28.525 "data_size": 0 00:16:28.525 } 00:16:28.525 ] 00:16:28.525 }' 00:16:28.525 00:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:28.525 00:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.092 00:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:29.352 [2024-07-25 00:42:51.911618] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:29.352 [2024-07-25 00:42:51.911672] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:16:29.352 [2024-07-25 00:42:51.911696] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:29.352 [2024-07-25 00:42:51.911812] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:16:29.352 [2024-07-25 00:42:51.912110] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:16:29.352 [2024-07-25 00:42:51.912121] bdev_raid.c:1751:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:16:29.352 [2024-07-25 00:42:51.912365] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.352 BaseBdev2 00:16:29.352 00:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:29.352 00:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:29.352 00:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:29.352 00:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:29.352 00:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:29.352 00:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:29.352 00:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:29.611 00:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:29.870 [ 00:16:29.870 { 00:16:29.870 "name": "BaseBdev2", 00:16:29.870 "aliases": [ 00:16:29.870 "b70517dd-a6d0-4407-8a14-c4004dc9e723" 00:16:29.870 ], 00:16:29.870 "product_name": "Malloc disk", 00:16:29.870 "block_size": 512, 00:16:29.870 "num_blocks": 65536, 00:16:29.870 "uuid": "b70517dd-a6d0-4407-8a14-c4004dc9e723", 00:16:29.870 "assigned_rate_limits": { 00:16:29.870 "rw_ios_per_sec": 0, 00:16:29.870 "rw_mbytes_per_sec": 0, 00:16:29.870 "r_mbytes_per_sec": 0, 00:16:29.870 "w_mbytes_per_sec": 0 00:16:29.870 }, 00:16:29.870 "claimed": true, 00:16:29.870 "claim_type": "exclusive_write", 00:16:29.870 "zoned": false, 00:16:29.870 "supported_io_types": { 00:16:29.870 "read": true, 00:16:29.870 "write": true, 00:16:29.870 "unmap": true, 00:16:29.870 "flush": true, 00:16:29.870 "reset": true, 00:16:29.870 "nvme_admin": false, 00:16:29.870 "nvme_io": false, 00:16:29.870 "nvme_io_md": false, 00:16:29.870 "write_zeroes": true, 00:16:29.870 "zcopy": true, 00:16:29.870 "get_zone_info": false, 00:16:29.870 "zone_management": false, 00:16:29.870 "zone_append": false, 00:16:29.870 "compare": false, 00:16:29.870 "compare_and_write": false, 00:16:29.870 "abort": true, 00:16:29.870 "seek_hole": false, 00:16:29.870 "seek_data": false, 00:16:29.870 "copy": true, 00:16:29.870 "nvme_iov_md": false 00:16:29.870 }, 00:16:29.870 "memory_domains": [ 00:16:29.870 { 00:16:29.870 "dma_device_id": "system", 00:16:29.870 "dma_device_type": 1 00:16:29.870 }, 00:16:29.870 { 00:16:29.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.870 "dma_device_type": 2 00:16:29.870 } 00:16:29.870 ], 00:16:29.870 "driver_specific": {} 00:16:29.870 } 00:16:29.870 ] 00:16:29.870 00:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:29.870 00:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:29.870 00:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:29.870 00:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:16:29.870 00:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:29.870 00:42:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:29.870 00:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:29.870 00:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:29.870 00:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:29.870 00:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:29.870 00:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:29.870 00:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:29.870 00:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:29.870 00:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.870 00:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.129 00:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:30.129 "name": "Existed_Raid", 00:16:30.129 "uuid": "d001be3d-a341-466d-9e89-0abf9f1c41ea", 00:16:30.129 "strip_size_kb": 64, 00:16:30.129 "state": "online", 00:16:30.129 "raid_level": "raid0", 00:16:30.129 "superblock": false, 00:16:30.129 "num_base_bdevs": 2, 00:16:30.129 "num_base_bdevs_discovered": 2, 00:16:30.129 "num_base_bdevs_operational": 2, 00:16:30.129 "base_bdevs_list": [ 00:16:30.129 { 00:16:30.129 "name": "BaseBdev1", 00:16:30.129 "uuid": "4e38ddca-46b6-486a-9868-4dfee4aef4b2", 00:16:30.129 "is_configured": true, 00:16:30.129 "data_offset": 0, 00:16:30.129 "data_size": 65536 00:16:30.129 }, 00:16:30.129 { 00:16:30.129 "name": "BaseBdev2", 00:16:30.129 "uuid": "b70517dd-a6d0-4407-8a14-c4004dc9e723", 00:16:30.129 "is_configured": true, 00:16:30.129 "data_offset": 0, 00:16:30.129 "data_size": 65536 00:16:30.129 } 00:16:30.129 ] 00:16:30.129 }' 00:16:30.129 00:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:30.129 00:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.736 00:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:30.736 00:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:30.736 00:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:30.736 00:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:30.736 00:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:30.736 00:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:30.736 00:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:30.736 00:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:30.994 [2024-07-25 00:42:53.472206] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:30.994 00:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 
00:16:30.994 "name": "Existed_Raid", 00:16:30.994 "aliases": [ 00:16:30.994 "d001be3d-a341-466d-9e89-0abf9f1c41ea" 00:16:30.994 ], 00:16:30.994 "product_name": "Raid Volume", 00:16:30.994 "block_size": 512, 00:16:30.994 "num_blocks": 131072, 00:16:30.994 "uuid": "d001be3d-a341-466d-9e89-0abf9f1c41ea", 00:16:30.994 "assigned_rate_limits": { 00:16:30.994 "rw_ios_per_sec": 0, 00:16:30.994 "rw_mbytes_per_sec": 0, 00:16:30.994 "r_mbytes_per_sec": 0, 00:16:30.994 "w_mbytes_per_sec": 0 00:16:30.994 }, 00:16:30.994 "claimed": false, 00:16:30.994 "zoned": false, 00:16:30.994 "supported_io_types": { 00:16:30.994 "read": true, 00:16:30.994 "write": true, 00:16:30.994 "unmap": true, 00:16:30.994 "flush": true, 00:16:30.994 "reset": true, 00:16:30.994 "nvme_admin": false, 00:16:30.994 "nvme_io": false, 00:16:30.994 "nvme_io_md": false, 00:16:30.994 "write_zeroes": true, 00:16:30.994 "zcopy": false, 00:16:30.994 "get_zone_info": false, 00:16:30.994 "zone_management": false, 00:16:30.994 "zone_append": false, 00:16:30.994 "compare": false, 00:16:30.994 "compare_and_write": false, 00:16:30.994 "abort": false, 00:16:30.994 "seek_hole": false, 00:16:30.994 "seek_data": false, 00:16:30.994 "copy": false, 00:16:30.994 "nvme_iov_md": false 00:16:30.994 }, 00:16:30.994 "memory_domains": [ 00:16:30.994 { 00:16:30.994 "dma_device_id": "system", 00:16:30.994 "dma_device_type": 1 00:16:30.994 }, 00:16:30.994 { 00:16:30.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.994 "dma_device_type": 2 00:16:30.994 }, 00:16:30.994 { 00:16:30.994 "dma_device_id": "system", 00:16:30.994 "dma_device_type": 1 00:16:30.994 }, 00:16:30.994 { 00:16:30.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.994 "dma_device_type": 2 00:16:30.994 } 00:16:30.994 ], 00:16:30.994 "driver_specific": { 00:16:30.994 "raid": { 00:16:30.994 "uuid": "d001be3d-a341-466d-9e89-0abf9f1c41ea", 00:16:30.994 "strip_size_kb": 64, 00:16:30.994 "state": "online", 00:16:30.994 "raid_level": "raid0", 00:16:30.994 "superblock": false, 00:16:30.994 "num_base_bdevs": 2, 00:16:30.994 "num_base_bdevs_discovered": 2, 00:16:30.994 "num_base_bdevs_operational": 2, 00:16:30.994 "base_bdevs_list": [ 00:16:30.994 { 00:16:30.994 "name": "BaseBdev1", 00:16:30.994 "uuid": "4e38ddca-46b6-486a-9868-4dfee4aef4b2", 00:16:30.994 "is_configured": true, 00:16:30.994 "data_offset": 0, 00:16:30.994 "data_size": 65536 00:16:30.994 }, 00:16:30.994 { 00:16:30.994 "name": "BaseBdev2", 00:16:30.994 "uuid": "b70517dd-a6d0-4407-8a14-c4004dc9e723", 00:16:30.994 "is_configured": true, 00:16:30.994 "data_offset": 0, 00:16:30.994 "data_size": 65536 00:16:30.994 } 00:16:30.994 ] 00:16:30.994 } 00:16:30.994 } 00:16:30.994 }' 00:16:30.994 00:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:30.994 00:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:30.994 BaseBdev2' 00:16:30.994 00:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:30.994 00:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:30.994 00:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:31.253 00:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:31.253 "name": "BaseBdev1", 00:16:31.253 "aliases": [ 
00:16:31.253 "4e38ddca-46b6-486a-9868-4dfee4aef4b2" 00:16:31.253 ], 00:16:31.253 "product_name": "Malloc disk", 00:16:31.253 "block_size": 512, 00:16:31.253 "num_blocks": 65536, 00:16:31.253 "uuid": "4e38ddca-46b6-486a-9868-4dfee4aef4b2", 00:16:31.253 "assigned_rate_limits": { 00:16:31.253 "rw_ios_per_sec": 0, 00:16:31.253 "rw_mbytes_per_sec": 0, 00:16:31.253 "r_mbytes_per_sec": 0, 00:16:31.253 "w_mbytes_per_sec": 0 00:16:31.253 }, 00:16:31.253 "claimed": true, 00:16:31.253 "claim_type": "exclusive_write", 00:16:31.253 "zoned": false, 00:16:31.253 "supported_io_types": { 00:16:31.253 "read": true, 00:16:31.253 "write": true, 00:16:31.253 "unmap": true, 00:16:31.253 "flush": true, 00:16:31.253 "reset": true, 00:16:31.253 "nvme_admin": false, 00:16:31.253 "nvme_io": false, 00:16:31.253 "nvme_io_md": false, 00:16:31.253 "write_zeroes": true, 00:16:31.253 "zcopy": true, 00:16:31.253 "get_zone_info": false, 00:16:31.253 "zone_management": false, 00:16:31.253 "zone_append": false, 00:16:31.253 "compare": false, 00:16:31.253 "compare_and_write": false, 00:16:31.253 "abort": true, 00:16:31.253 "seek_hole": false, 00:16:31.253 "seek_data": false, 00:16:31.253 "copy": true, 00:16:31.253 "nvme_iov_md": false 00:16:31.253 }, 00:16:31.253 "memory_domains": [ 00:16:31.253 { 00:16:31.253 "dma_device_id": "system", 00:16:31.253 "dma_device_type": 1 00:16:31.253 }, 00:16:31.253 { 00:16:31.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.253 "dma_device_type": 2 00:16:31.253 } 00:16:31.253 ], 00:16:31.253 "driver_specific": {} 00:16:31.253 }' 00:16:31.253 00:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:31.253 00:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:31.253 00:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:31.253 00:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:31.253 00:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:31.253 00:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:31.253 00:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:31.512 00:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:31.513 00:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:31.513 00:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:31.513 00:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:31.513 00:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:31.513 00:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:31.513 00:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:31.513 00:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:31.772 00:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:31.772 "name": "BaseBdev2", 00:16:31.772 "aliases": [ 00:16:31.772 "b70517dd-a6d0-4407-8a14-c4004dc9e723" 00:16:31.772 ], 00:16:31.772 "product_name": "Malloc disk", 00:16:31.772 "block_size": 512, 00:16:31.772 "num_blocks": 65536, 00:16:31.772 "uuid": 
"b70517dd-a6d0-4407-8a14-c4004dc9e723", 00:16:31.772 "assigned_rate_limits": { 00:16:31.772 "rw_ios_per_sec": 0, 00:16:31.772 "rw_mbytes_per_sec": 0, 00:16:31.772 "r_mbytes_per_sec": 0, 00:16:31.772 "w_mbytes_per_sec": 0 00:16:31.772 }, 00:16:31.772 "claimed": true, 00:16:31.772 "claim_type": "exclusive_write", 00:16:31.772 "zoned": false, 00:16:31.772 "supported_io_types": { 00:16:31.772 "read": true, 00:16:31.772 "write": true, 00:16:31.772 "unmap": true, 00:16:31.772 "flush": true, 00:16:31.772 "reset": true, 00:16:31.772 "nvme_admin": false, 00:16:31.772 "nvme_io": false, 00:16:31.772 "nvme_io_md": false, 00:16:31.772 "write_zeroes": true, 00:16:31.772 "zcopy": true, 00:16:31.772 "get_zone_info": false, 00:16:31.772 "zone_management": false, 00:16:31.772 "zone_append": false, 00:16:31.772 "compare": false, 00:16:31.772 "compare_and_write": false, 00:16:31.772 "abort": true, 00:16:31.772 "seek_hole": false, 00:16:31.772 "seek_data": false, 00:16:31.772 "copy": true, 00:16:31.772 "nvme_iov_md": false 00:16:31.772 }, 00:16:31.772 "memory_domains": [ 00:16:31.772 { 00:16:31.772 "dma_device_id": "system", 00:16:31.772 "dma_device_type": 1 00:16:31.772 }, 00:16:31.772 { 00:16:31.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.772 "dma_device_type": 2 00:16:31.772 } 00:16:31.772 ], 00:16:31.772 "driver_specific": {} 00:16:31.772 }' 00:16:31.772 00:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:31.772 00:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.030 00:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:32.030 00:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.030 00:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.030 00:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:32.030 00:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.030 00:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.030 00:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:32.030 00:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.030 00:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.289 00:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:32.289 00:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:32.548 [2024-07-25 00:42:54.944268] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:32.548 [2024-07-25 00:42:54.944301] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.548 [2024-07-25 00:42:54.944350] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.548 00:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:32.548 00:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:16:32.548 00:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:32.548 00:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 
00:16:32.548 00:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:32.548 00:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:16:32.548 00:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:32.548 00:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:16:32.548 00:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:32.548 00:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:32.548 00:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:32.548 00:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:32.548 00:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:32.548 00:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:32.548 00:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:32.548 00:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.548 00:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.807 00:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:32.807 "name": "Existed_Raid", 00:16:32.807 "uuid": "d001be3d-a341-466d-9e89-0abf9f1c41ea", 00:16:32.807 "strip_size_kb": 64, 00:16:32.807 "state": "offline", 00:16:32.807 "raid_level": "raid0", 00:16:32.807 "superblock": false, 00:16:32.807 "num_base_bdevs": 2, 00:16:32.807 "num_base_bdevs_discovered": 1, 00:16:32.807 "num_base_bdevs_operational": 1, 00:16:32.807 "base_bdevs_list": [ 00:16:32.807 { 00:16:32.807 "name": null, 00:16:32.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.807 "is_configured": false, 00:16:32.807 "data_offset": 0, 00:16:32.807 "data_size": 65536 00:16:32.807 }, 00:16:32.807 { 00:16:32.807 "name": "BaseBdev2", 00:16:32.807 "uuid": "b70517dd-a6d0-4407-8a14-c4004dc9e723", 00:16:32.807 "is_configured": true, 00:16:32.807 "data_offset": 0, 00:16:32.808 "data_size": 65536 00:16:32.808 } 00:16:32.808 ] 00:16:32.808 }' 00:16:32.808 00:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:32.808 00:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.376 00:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:33.376 00:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:33.376 00:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.376 00:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:33.376 00:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:33.376 00:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:33.376 00:42:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:33.635 [2024-07-25 00:42:56.243719] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:33.635 [2024-07-25 00:42:56.243786] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:16:33.894 00:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:33.894 00:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:33.895 00:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.895 00:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:34.154 00:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:34.154 00:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:34.154 00:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:34.154 00:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 121580 00:16:34.154 00:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 121580 ']' 00:16:34.154 00:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 121580 00:16:34.154 00:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:16:34.154 00:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:34.154 00:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 121580 00:16:34.154 00:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:34.154 killing process with pid 121580 00:16:34.154 00:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:34.154 00:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 121580' 00:16:34.154 00:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 121580 00:16:34.154 [2024-07-25 00:42:56.664898] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:34.154 00:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 121580 00:16:34.154 [2024-07-25 00:42:56.665029] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:35.531 ************************************ 00:16:35.531 END TEST raid_state_function_test 00:16:35.531 ************************************ 00:16:35.531 00:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:16:35.531 00:16:35.531 real 0m11.334s 00:16:35.531 user 0m19.180s 00:16:35.531 sys 0m1.661s 00:16:35.531 00:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:35.531 00:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.531 00:42:58 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:16:35.531 00:42:58 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:35.531 00:42:58 bdev_raid -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:16:35.531 00:42:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:35.531 ************************************ 00:16:35.531 START TEST raid_state_function_test_sb 00:16:35.531 ************************************ 00:16:35.531 00:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 true 00:16:35.531 00:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:16:35.531 00:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:16:35.531 00:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:35.531 00:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:35.531 00:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:35.531 00:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:35.531 00:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:16:35.531 00:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:35.531 00:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:35.531 00:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:16:35.531 00:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:35.531 00:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:35.531 00:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:35.531 00:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:35.531 00:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:35.531 00:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:35.531 00:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:35.531 00:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:35.531 00:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:16:35.531 00:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:35.531 00:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:35.531 00:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:35.531 00:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:35.531 00:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=121956 00:16:35.532 00:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 121956' 00:16:35.532 Process raid pid: 121956 00:16:35.532 00:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 121956 /var/tmp/spdk-raid.sock 00:16:35.532 00:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r 
/var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:35.532 00:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 121956 ']' 00:16:35.532 00:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:35.532 00:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:35.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:35.532 00:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:35.532 00:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:35.532 00:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.790 [2024-07-25 00:42:58.199608] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:16:35.791 [2024-07-25 00:42:58.200417] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.791 [2024-07-25 00:42:58.380079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.050 [2024-07-25 00:42:58.588138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.309 [2024-07-25 00:42:58.795447] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:36.567 00:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:36.567 00:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:16:36.567 00:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:36.827 [2024-07-25 00:42:59.295308] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:36.827 [2024-07-25 00:42:59.295636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:36.827 [2024-07-25 00:42:59.295745] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:36.827 [2024-07-25 00:42:59.295856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:36.827 00:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:36.827 00:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:36.827 00:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:36.827 00:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:36.827 00:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:36.827 00:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:36.827 00:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:36.827 00:42:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:36.827 00:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:36.827 00:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:36.827 00:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.827 00:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.087 00:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:37.087 "name": "Existed_Raid", 00:16:37.087 "uuid": "5efa4849-5a71-4a71-995d-7bc849f77f75", 00:16:37.087 "strip_size_kb": 64, 00:16:37.087 "state": "configuring", 00:16:37.087 "raid_level": "raid0", 00:16:37.087 "superblock": true, 00:16:37.087 "num_base_bdevs": 2, 00:16:37.087 "num_base_bdevs_discovered": 0, 00:16:37.087 "num_base_bdevs_operational": 2, 00:16:37.087 "base_bdevs_list": [ 00:16:37.087 { 00:16:37.087 "name": "BaseBdev1", 00:16:37.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.087 "is_configured": false, 00:16:37.087 "data_offset": 0, 00:16:37.087 "data_size": 0 00:16:37.087 }, 00:16:37.087 { 00:16:37.087 "name": "BaseBdev2", 00:16:37.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.087 "is_configured": false, 00:16:37.087 "data_offset": 0, 00:16:37.087 "data_size": 0 00:16:37.087 } 00:16:37.087 ] 00:16:37.087 }' 00:16:37.087 00:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:37.087 00:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.656 00:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:37.656 [2024-07-25 00:43:00.279404] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:37.656 [2024-07-25 00:43:00.279598] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:37.656 00:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:37.915 [2024-07-25 00:43:00.535475] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:37.915 [2024-07-25 00:43:00.535732] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:37.915 [2024-07-25 00:43:00.535809] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:37.915 [2024-07-25 00:43:00.535921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:37.915 00:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:38.183 [2024-07-25 00:43:00.760777] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:38.183 BaseBdev1 00:16:38.183 00:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:38.183 00:43:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:38.183 00:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:38.183 00:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:38.183 00:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:38.183 00:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:38.183 00:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:38.442 00:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:38.701 [ 00:16:38.701 { 00:16:38.701 "name": "BaseBdev1", 00:16:38.701 "aliases": [ 00:16:38.701 "ab0f1442-7f70-4c09-b48b-31220a9818f7" 00:16:38.701 ], 00:16:38.701 "product_name": "Malloc disk", 00:16:38.701 "block_size": 512, 00:16:38.701 "num_blocks": 65536, 00:16:38.701 "uuid": "ab0f1442-7f70-4c09-b48b-31220a9818f7", 00:16:38.701 "assigned_rate_limits": { 00:16:38.701 "rw_ios_per_sec": 0, 00:16:38.701 "rw_mbytes_per_sec": 0, 00:16:38.701 "r_mbytes_per_sec": 0, 00:16:38.701 "w_mbytes_per_sec": 0 00:16:38.701 }, 00:16:38.701 "claimed": true, 00:16:38.701 "claim_type": "exclusive_write", 00:16:38.701 "zoned": false, 00:16:38.701 "supported_io_types": { 00:16:38.701 "read": true, 00:16:38.701 "write": true, 00:16:38.701 "unmap": true, 00:16:38.701 "flush": true, 00:16:38.701 "reset": true, 00:16:38.701 "nvme_admin": false, 00:16:38.701 "nvme_io": false, 00:16:38.701 "nvme_io_md": false, 00:16:38.702 "write_zeroes": true, 00:16:38.702 "zcopy": true, 00:16:38.702 "get_zone_info": false, 00:16:38.702 "zone_management": false, 00:16:38.702 "zone_append": false, 00:16:38.702 "compare": false, 00:16:38.702 "compare_and_write": false, 00:16:38.702 "abort": true, 00:16:38.702 "seek_hole": false, 00:16:38.702 "seek_data": false, 00:16:38.702 "copy": true, 00:16:38.702 "nvme_iov_md": false 00:16:38.702 }, 00:16:38.702 "memory_domains": [ 00:16:38.702 { 00:16:38.702 "dma_device_id": "system", 00:16:38.702 "dma_device_type": 1 00:16:38.702 }, 00:16:38.702 { 00:16:38.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.702 "dma_device_type": 2 00:16:38.702 } 00:16:38.702 ], 00:16:38.702 "driver_specific": {} 00:16:38.702 } 00:16:38.702 ] 00:16:38.702 00:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:38.702 00:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:38.702 00:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:38.702 00:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:38.702 00:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:38.702 00:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:38.702 00:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:38.702 00:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:16:38.702 00:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:38.702 00:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:38.702 00:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:38.702 00:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.702 00:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.961 00:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:38.961 "name": "Existed_Raid", 00:16:38.961 "uuid": "e39247ff-ff1a-4ed0-8f20-5dcc4edc793a", 00:16:38.961 "strip_size_kb": 64, 00:16:38.961 "state": "configuring", 00:16:38.961 "raid_level": "raid0", 00:16:38.961 "superblock": true, 00:16:38.961 "num_base_bdevs": 2, 00:16:38.961 "num_base_bdevs_discovered": 1, 00:16:38.961 "num_base_bdevs_operational": 2, 00:16:38.961 "base_bdevs_list": [ 00:16:38.961 { 00:16:38.961 "name": "BaseBdev1", 00:16:38.961 "uuid": "ab0f1442-7f70-4c09-b48b-31220a9818f7", 00:16:38.961 "is_configured": true, 00:16:38.961 "data_offset": 2048, 00:16:38.961 "data_size": 63488 00:16:38.961 }, 00:16:38.961 { 00:16:38.961 "name": "BaseBdev2", 00:16:38.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.961 "is_configured": false, 00:16:38.961 "data_offset": 0, 00:16:38.961 "data_size": 0 00:16:38.961 } 00:16:38.961 ] 00:16:38.961 }' 00:16:38.961 00:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:38.961 00:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.528 00:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:39.528 [2024-07-25 00:43:02.125068] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:39.528 [2024-07-25 00:43:02.125250] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:39.529 00:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:39.788 [2024-07-25 00:43:02.377150] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:39.788 [2024-07-25 00:43:02.379297] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:39.788 [2024-07-25 00:43:02.379459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:39.788 00:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:39.788 00:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:39.788 00:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:39.788 00:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:39.788 00:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:16:39.788 00:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:39.788 00:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:39.788 00:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:39.788 00:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:39.788 00:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:39.788 00:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:39.788 00:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:39.788 00:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.788 00:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.047 00:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:40.047 "name": "Existed_Raid", 00:16:40.047 "uuid": "dcb38936-9522-4a9c-8aef-14273d015478", 00:16:40.047 "strip_size_kb": 64, 00:16:40.047 "state": "configuring", 00:16:40.047 "raid_level": "raid0", 00:16:40.047 "superblock": true, 00:16:40.047 "num_base_bdevs": 2, 00:16:40.047 "num_base_bdevs_discovered": 1, 00:16:40.047 "num_base_bdevs_operational": 2, 00:16:40.047 "base_bdevs_list": [ 00:16:40.047 { 00:16:40.047 "name": "BaseBdev1", 00:16:40.047 "uuid": "ab0f1442-7f70-4c09-b48b-31220a9818f7", 00:16:40.047 "is_configured": true, 00:16:40.047 "data_offset": 2048, 00:16:40.047 "data_size": 63488 00:16:40.047 }, 00:16:40.047 { 00:16:40.047 "name": "BaseBdev2", 00:16:40.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.047 "is_configured": false, 00:16:40.047 "data_offset": 0, 00:16:40.047 "data_size": 0 00:16:40.047 } 00:16:40.047 ] 00:16:40.047 }' 00:16:40.047 00:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:40.047 00:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.615 00:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:40.873 [2024-07-25 00:43:03.306395] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:40.873 [2024-07-25 00:43:03.306859] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:16:40.873 [2024-07-25 00:43:03.306972] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:40.873 [2024-07-25 00:43:03.307129] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:16:40.873 [2024-07-25 00:43:03.307489] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:16:40.873 [2024-07-25 00:43:03.307528] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:16:40.873 [2024-07-25 00:43:03.307760] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.873 BaseBdev2 00:16:40.873 00:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # 
waitforbdev BaseBdev2 00:16:40.873 00:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:40.873 00:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:40.873 00:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:40.873 00:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:40.873 00:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:40.874 00:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:40.874 00:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:41.132 [ 00:16:41.132 { 00:16:41.132 "name": "BaseBdev2", 00:16:41.132 "aliases": [ 00:16:41.132 "5575b84d-3e9a-4e63-b21e-5aa71e0abdac" 00:16:41.132 ], 00:16:41.132 "product_name": "Malloc disk", 00:16:41.132 "block_size": 512, 00:16:41.132 "num_blocks": 65536, 00:16:41.132 "uuid": "5575b84d-3e9a-4e63-b21e-5aa71e0abdac", 00:16:41.132 "assigned_rate_limits": { 00:16:41.132 "rw_ios_per_sec": 0, 00:16:41.132 "rw_mbytes_per_sec": 0, 00:16:41.132 "r_mbytes_per_sec": 0, 00:16:41.132 "w_mbytes_per_sec": 0 00:16:41.132 }, 00:16:41.132 "claimed": true, 00:16:41.132 "claim_type": "exclusive_write", 00:16:41.132 "zoned": false, 00:16:41.132 "supported_io_types": { 00:16:41.132 "read": true, 00:16:41.132 "write": true, 00:16:41.132 "unmap": true, 00:16:41.132 "flush": true, 00:16:41.132 "reset": true, 00:16:41.132 "nvme_admin": false, 00:16:41.132 "nvme_io": false, 00:16:41.132 "nvme_io_md": false, 00:16:41.132 "write_zeroes": true, 00:16:41.132 "zcopy": true, 00:16:41.132 "get_zone_info": false, 00:16:41.132 "zone_management": false, 00:16:41.132 "zone_append": false, 00:16:41.132 "compare": false, 00:16:41.132 "compare_and_write": false, 00:16:41.132 "abort": true, 00:16:41.132 "seek_hole": false, 00:16:41.132 "seek_data": false, 00:16:41.132 "copy": true, 00:16:41.132 "nvme_iov_md": false 00:16:41.132 }, 00:16:41.132 "memory_domains": [ 00:16:41.132 { 00:16:41.132 "dma_device_id": "system", 00:16:41.132 "dma_device_type": 1 00:16:41.132 }, 00:16:41.132 { 00:16:41.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.132 "dma_device_type": 2 00:16:41.132 } 00:16:41.132 ], 00:16:41.132 "driver_specific": {} 00:16:41.132 } 00:16:41.132 ] 00:16:41.132 00:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:41.132 00:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:41.132 00:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:41.132 00:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:16:41.132 00:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:41.132 00:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:41.132 00:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:41.132 00:43:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:41.132 00:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:41.132 00:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:41.132 00:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:41.132 00:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:41.132 00:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:41.132 00:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.132 00:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.390 00:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:41.390 "name": "Existed_Raid", 00:16:41.390 "uuid": "dcb38936-9522-4a9c-8aef-14273d015478", 00:16:41.390 "strip_size_kb": 64, 00:16:41.390 "state": "online", 00:16:41.390 "raid_level": "raid0", 00:16:41.390 "superblock": true, 00:16:41.390 "num_base_bdevs": 2, 00:16:41.390 "num_base_bdevs_discovered": 2, 00:16:41.390 "num_base_bdevs_operational": 2, 00:16:41.390 "base_bdevs_list": [ 00:16:41.390 { 00:16:41.390 "name": "BaseBdev1", 00:16:41.390 "uuid": "ab0f1442-7f70-4c09-b48b-31220a9818f7", 00:16:41.390 "is_configured": true, 00:16:41.390 "data_offset": 2048, 00:16:41.390 "data_size": 63488 00:16:41.390 }, 00:16:41.390 { 00:16:41.390 "name": "BaseBdev2", 00:16:41.390 "uuid": "5575b84d-3e9a-4e63-b21e-5aa71e0abdac", 00:16:41.390 "is_configured": true, 00:16:41.390 "data_offset": 2048, 00:16:41.390 "data_size": 63488 00:16:41.390 } 00:16:41.390 ] 00:16:41.390 }' 00:16:41.390 00:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:41.390 00:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.957 00:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:41.957 00:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:41.957 00:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:41.957 00:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:41.957 00:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:41.957 00:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:41.957 00:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:41.957 00:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:42.215 [2024-07-25 00:43:04.655016] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:42.215 00:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:42.215 "name": "Existed_Raid", 00:16:42.215 "aliases": [ 00:16:42.215 "dcb38936-9522-4a9c-8aef-14273d015478" 00:16:42.215 ], 00:16:42.215 "product_name": "Raid Volume", 00:16:42.215 "block_size": 512, 
00:16:42.215 "num_blocks": 126976, 00:16:42.215 "uuid": "dcb38936-9522-4a9c-8aef-14273d015478", 00:16:42.215 "assigned_rate_limits": { 00:16:42.215 "rw_ios_per_sec": 0, 00:16:42.215 "rw_mbytes_per_sec": 0, 00:16:42.215 "r_mbytes_per_sec": 0, 00:16:42.215 "w_mbytes_per_sec": 0 00:16:42.215 }, 00:16:42.215 "claimed": false, 00:16:42.215 "zoned": false, 00:16:42.215 "supported_io_types": { 00:16:42.215 "read": true, 00:16:42.215 "write": true, 00:16:42.215 "unmap": true, 00:16:42.215 "flush": true, 00:16:42.215 "reset": true, 00:16:42.215 "nvme_admin": false, 00:16:42.215 "nvme_io": false, 00:16:42.215 "nvme_io_md": false, 00:16:42.215 "write_zeroes": true, 00:16:42.215 "zcopy": false, 00:16:42.215 "get_zone_info": false, 00:16:42.215 "zone_management": false, 00:16:42.215 "zone_append": false, 00:16:42.215 "compare": false, 00:16:42.215 "compare_and_write": false, 00:16:42.215 "abort": false, 00:16:42.215 "seek_hole": false, 00:16:42.215 "seek_data": false, 00:16:42.215 "copy": false, 00:16:42.215 "nvme_iov_md": false 00:16:42.215 }, 00:16:42.215 "memory_domains": [ 00:16:42.215 { 00:16:42.215 "dma_device_id": "system", 00:16:42.215 "dma_device_type": 1 00:16:42.215 }, 00:16:42.215 { 00:16:42.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.215 "dma_device_type": 2 00:16:42.215 }, 00:16:42.215 { 00:16:42.215 "dma_device_id": "system", 00:16:42.215 "dma_device_type": 1 00:16:42.215 }, 00:16:42.215 { 00:16:42.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.215 "dma_device_type": 2 00:16:42.215 } 00:16:42.215 ], 00:16:42.215 "driver_specific": { 00:16:42.215 "raid": { 00:16:42.215 "uuid": "dcb38936-9522-4a9c-8aef-14273d015478", 00:16:42.215 "strip_size_kb": 64, 00:16:42.215 "state": "online", 00:16:42.215 "raid_level": "raid0", 00:16:42.215 "superblock": true, 00:16:42.215 "num_base_bdevs": 2, 00:16:42.215 "num_base_bdevs_discovered": 2, 00:16:42.215 "num_base_bdevs_operational": 2, 00:16:42.215 "base_bdevs_list": [ 00:16:42.215 { 00:16:42.215 "name": "BaseBdev1", 00:16:42.215 "uuid": "ab0f1442-7f70-4c09-b48b-31220a9818f7", 00:16:42.215 "is_configured": true, 00:16:42.215 "data_offset": 2048, 00:16:42.215 "data_size": 63488 00:16:42.215 }, 00:16:42.215 { 00:16:42.215 "name": "BaseBdev2", 00:16:42.215 "uuid": "5575b84d-3e9a-4e63-b21e-5aa71e0abdac", 00:16:42.215 "is_configured": true, 00:16:42.215 "data_offset": 2048, 00:16:42.215 "data_size": 63488 00:16:42.215 } 00:16:42.215 ] 00:16:42.215 } 00:16:42.215 } 00:16:42.215 }' 00:16:42.215 00:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:42.215 00:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:42.216 BaseBdev2' 00:16:42.216 00:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:42.216 00:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:42.216 00:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:42.474 00:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:42.474 "name": "BaseBdev1", 00:16:42.474 "aliases": [ 00:16:42.474 "ab0f1442-7f70-4c09-b48b-31220a9818f7" 00:16:42.474 ], 00:16:42.474 "product_name": "Malloc disk", 00:16:42.474 "block_size": 512, 00:16:42.474 "num_blocks": 65536, 
00:16:42.474 "uuid": "ab0f1442-7f70-4c09-b48b-31220a9818f7", 00:16:42.474 "assigned_rate_limits": { 00:16:42.474 "rw_ios_per_sec": 0, 00:16:42.474 "rw_mbytes_per_sec": 0, 00:16:42.474 "r_mbytes_per_sec": 0, 00:16:42.474 "w_mbytes_per_sec": 0 00:16:42.474 }, 00:16:42.474 "claimed": true, 00:16:42.474 "claim_type": "exclusive_write", 00:16:42.474 "zoned": false, 00:16:42.474 "supported_io_types": { 00:16:42.474 "read": true, 00:16:42.474 "write": true, 00:16:42.474 "unmap": true, 00:16:42.474 "flush": true, 00:16:42.474 "reset": true, 00:16:42.474 "nvme_admin": false, 00:16:42.474 "nvme_io": false, 00:16:42.474 "nvme_io_md": false, 00:16:42.474 "write_zeroes": true, 00:16:42.474 "zcopy": true, 00:16:42.474 "get_zone_info": false, 00:16:42.474 "zone_management": false, 00:16:42.474 "zone_append": false, 00:16:42.474 "compare": false, 00:16:42.474 "compare_and_write": false, 00:16:42.474 "abort": true, 00:16:42.474 "seek_hole": false, 00:16:42.474 "seek_data": false, 00:16:42.474 "copy": true, 00:16:42.474 "nvme_iov_md": false 00:16:42.474 }, 00:16:42.474 "memory_domains": [ 00:16:42.474 { 00:16:42.474 "dma_device_id": "system", 00:16:42.474 "dma_device_type": 1 00:16:42.474 }, 00:16:42.474 { 00:16:42.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.474 "dma_device_type": 2 00:16:42.474 } 00:16:42.474 ], 00:16:42.474 "driver_specific": {} 00:16:42.474 }' 00:16:42.474 00:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:42.474 00:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:42.474 00:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:42.474 00:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:42.474 00:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:42.734 00:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:42.734 00:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:42.734 00:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:42.734 00:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:42.734 00:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:42.734 00:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:42.734 00:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:42.734 00:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:42.734 00:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:42.734 00:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:42.993 00:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:42.993 "name": "BaseBdev2", 00:16:42.993 "aliases": [ 00:16:42.993 "5575b84d-3e9a-4e63-b21e-5aa71e0abdac" 00:16:42.993 ], 00:16:42.993 "product_name": "Malloc disk", 00:16:42.993 "block_size": 512, 00:16:42.993 "num_blocks": 65536, 00:16:42.993 "uuid": "5575b84d-3e9a-4e63-b21e-5aa71e0abdac", 00:16:42.993 "assigned_rate_limits": { 00:16:42.993 "rw_ios_per_sec": 0, 00:16:42.993 
"rw_mbytes_per_sec": 0, 00:16:42.993 "r_mbytes_per_sec": 0, 00:16:42.993 "w_mbytes_per_sec": 0 00:16:42.993 }, 00:16:42.993 "claimed": true, 00:16:42.993 "claim_type": "exclusive_write", 00:16:42.993 "zoned": false, 00:16:42.993 "supported_io_types": { 00:16:42.993 "read": true, 00:16:42.993 "write": true, 00:16:42.993 "unmap": true, 00:16:42.993 "flush": true, 00:16:42.993 "reset": true, 00:16:42.993 "nvme_admin": false, 00:16:42.993 "nvme_io": false, 00:16:42.993 "nvme_io_md": false, 00:16:42.993 "write_zeroes": true, 00:16:42.993 "zcopy": true, 00:16:42.993 "get_zone_info": false, 00:16:42.993 "zone_management": false, 00:16:42.993 "zone_append": false, 00:16:42.993 "compare": false, 00:16:42.993 "compare_and_write": false, 00:16:42.993 "abort": true, 00:16:42.993 "seek_hole": false, 00:16:42.993 "seek_data": false, 00:16:42.993 "copy": true, 00:16:42.993 "nvme_iov_md": false 00:16:42.993 }, 00:16:42.993 "memory_domains": [ 00:16:42.993 { 00:16:42.993 "dma_device_id": "system", 00:16:42.993 "dma_device_type": 1 00:16:42.993 }, 00:16:42.993 { 00:16:42.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.993 "dma_device_type": 2 00:16:42.993 } 00:16:42.993 ], 00:16:42.993 "driver_specific": {} 00:16:42.993 }' 00:16:42.993 00:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:42.993 00:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:43.252 00:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:43.252 00:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:43.252 00:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:43.252 00:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:43.252 00:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:43.252 00:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:43.252 00:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:43.252 00:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:43.252 00:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:43.511 00:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:43.511 00:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:43.511 [2024-07-25 00:43:06.163167] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:43.511 [2024-07-25 00:43:06.163207] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:43.511 [2024-07-25 00:43:06.163258] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:43.772 00:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:43.772 00:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:16:43.772 00:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:43.772 00:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:16:43.772 00:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 
-- # expected_state=offline 00:16:43.772 00:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:16:43.772 00:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:43.772 00:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:16:43.772 00:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:43.772 00:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:43.772 00:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:43.772 00:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:43.772 00:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:43.772 00:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:43.772 00:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:43.772 00:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.772 00:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.030 00:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:44.030 "name": "Existed_Raid", 00:16:44.030 "uuid": "dcb38936-9522-4a9c-8aef-14273d015478", 00:16:44.030 "strip_size_kb": 64, 00:16:44.030 "state": "offline", 00:16:44.030 "raid_level": "raid0", 00:16:44.030 "superblock": true, 00:16:44.030 "num_base_bdevs": 2, 00:16:44.030 "num_base_bdevs_discovered": 1, 00:16:44.030 "num_base_bdevs_operational": 1, 00:16:44.030 "base_bdevs_list": [ 00:16:44.030 { 00:16:44.030 "name": null, 00:16:44.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.030 "is_configured": false, 00:16:44.030 "data_offset": 2048, 00:16:44.030 "data_size": 63488 00:16:44.030 }, 00:16:44.030 { 00:16:44.030 "name": "BaseBdev2", 00:16:44.030 "uuid": "5575b84d-3e9a-4e63-b21e-5aa71e0abdac", 00:16:44.030 "is_configured": true, 00:16:44.030 "data_offset": 2048, 00:16:44.030 "data_size": 63488 00:16:44.030 } 00:16:44.030 ] 00:16:44.030 }' 00:16:44.030 00:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:44.030 00:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.596 00:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:44.596 00:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:44.596 00:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.596 00:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:44.854 00:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:44.854 00:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:44.855 00:43:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:45.114 [2024-07-25 00:43:07.629943] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:45.114 [2024-07-25 00:43:07.630143] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:16:45.114 00:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:45.114 00:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:45.114 00:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.114 00:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:45.408 00:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:45.408 00:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:45.408 00:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:45.408 00:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 121956 00:16:45.408 00:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 121956 ']' 00:16:45.408 00:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 121956 00:16:45.408 00:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:16:45.408 00:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:45.408 00:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 121956 00:16:45.408 00:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:45.408 00:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:45.408 00:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 121956' 00:16:45.408 killing process with pid 121956 00:16:45.408 00:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 121956 00:16:45.408 [2024-07-25 00:43:08.029636] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:45.408 00:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 121956 00:16:45.408 [2024-07-25 00:43:08.029913] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:46.806 ************************************ 00:16:46.806 END TEST raid_state_function_test_sb 00:16:46.806 ************************************ 00:16:46.806 00:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:16:46.806 00:16:46.806 real 0m11.296s 00:16:46.806 user 0m19.130s 00:16:46.806 sys 0m1.631s 00:16:46.806 00:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:46.806 00:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.065 00:43:09 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:16:47.065 00:43:09 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:47.065 
00:43:09 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:47.065 00:43:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:47.065 ************************************ 00:16:47.065 START TEST raid_superblock_test 00:16:47.065 ************************************ 00:16:47.065 00:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 2 00:16:47.065 00:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:16:47.065 00:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:16:47.065 00:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:16:47.065 00:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:16:47.065 00:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:16:47.065 00:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:16:47.065 00:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:16:47.066 00:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:16:47.066 00:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:16:47.066 00:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:16:47.066 00:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:16:47.066 00:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:16:47.066 00:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:16:47.066 00:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:16:47.066 00:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:16:47.066 00:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:16:47.066 00:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=122331 00:16:47.066 00:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:47.066 00:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 122331 /var/tmp/spdk-raid.sock 00:16:47.066 00:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 122331 ']' 00:16:47.066 00:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:47.066 00:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:47.066 00:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:47.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:47.066 00:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:47.066 00:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.066 [2024-07-25 00:43:09.566569] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
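
For orientation, the block below is a condensed, hand-written sketch of the RPC sequence this raid_superblock_test run drives once the bdev_svc app above is listening on /var/tmp/spdk-raid.sock. Every rpc.py subcommand and argument appears verbatim in the trace; only the shell variables rpc and sock are introduced here for brevity, and the ordering is compressed.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# each RAID leg is a passthru bdev layered on a 32 MiB, 512-byte-block malloc bdev
$rpc -s $sock bdev_malloc_create 32 512 -b malloc1
$rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$rpc -s $sock bdev_malloc_create 32 512 -b malloc2
$rpc -s $sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
# assemble a raid0 volume with a 64 KiB strip size; the trailing -s is what makes this the superblock variant
$rpc -s $sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s
# the verify helpers then expect state "online" with 2 of 2 base bdevs discovered
$rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
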
00:16:47.066 [2024-07-25 00:43:09.567629] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122331 ] 00:16:47.325 [2024-07-25 00:43:09.748068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.325 [2024-07-25 00:43:09.946062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.583 [2024-07-25 00:43:10.146198] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:48.150 00:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:48.150 00:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:16:48.150 00:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:16:48.150 00:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:48.150 00:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:16:48.150 00:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:16:48.150 00:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:48.150 00:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:48.150 00:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:48.150 00:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:48.150 00:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:48.150 malloc1 00:16:48.150 00:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:48.409 [2024-07-25 00:43:10.959703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:48.409 [2024-07-25 00:43:10.960044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.409 [2024-07-25 00:43:10.960119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:16:48.409 [2024-07-25 00:43:10.960313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.409 [2024-07-25 00:43:10.962623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.409 [2024-07-25 00:43:10.962795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:48.409 pt1 00:16:48.409 00:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:48.409 00:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:48.409 00:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:16:48.409 00:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:16:48.409 00:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:48.409 00:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:16:48.409 00:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:48.409 00:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:48.409 00:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:48.667 malloc2 00:16:48.667 00:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:48.966 [2024-07-25 00:43:11.408764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:48.966 [2024-07-25 00:43:11.409072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.966 [2024-07-25 00:43:11.409142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:16:48.966 [2024-07-25 00:43:11.409248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.966 [2024-07-25 00:43:11.411505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.966 [2024-07-25 00:43:11.411656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:48.966 pt2 00:16:48.966 00:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:48.966 00:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:48.967 00:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:16:48.967 [2024-07-25 00:43:11.596835] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:48.967 [2024-07-25 00:43:11.599021] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:48.967 [2024-07-25 00:43:11.599304] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:16:48.967 [2024-07-25 00:43:11.599414] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:48.967 [2024-07-25 00:43:11.599571] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:16:48.967 [2024-07-25 00:43:11.599931] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:16:48.967 [2024-07-25 00:43:11.600037] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:16:48.967 [2024-07-25 00:43:11.600269] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.967 00:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:16:48.967 00:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:48.967 00:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:48.967 00:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:48.967 00:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:48.967 00:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:16:48.967 00:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:48.967 00:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:48.967 00:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:48.967 00:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:48.967 00:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.967 00:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.225 00:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:49.225 "name": "raid_bdev1", 00:16:49.225 "uuid": "b477d264-1f9c-4555-aaa0-3221f621a857", 00:16:49.225 "strip_size_kb": 64, 00:16:49.225 "state": "online", 00:16:49.225 "raid_level": "raid0", 00:16:49.226 "superblock": true, 00:16:49.226 "num_base_bdevs": 2, 00:16:49.226 "num_base_bdevs_discovered": 2, 00:16:49.226 "num_base_bdevs_operational": 2, 00:16:49.226 "base_bdevs_list": [ 00:16:49.226 { 00:16:49.226 "name": "pt1", 00:16:49.226 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:49.226 "is_configured": true, 00:16:49.226 "data_offset": 2048, 00:16:49.226 "data_size": 63488 00:16:49.226 }, 00:16:49.226 { 00:16:49.226 "name": "pt2", 00:16:49.226 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:49.226 "is_configured": true, 00:16:49.226 "data_offset": 2048, 00:16:49.226 "data_size": 63488 00:16:49.226 } 00:16:49.226 ] 00:16:49.226 }' 00:16:49.226 00:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:49.226 00:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.793 00:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:16:49.793 00:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:49.793 00:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:49.793 00:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:49.793 00:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:49.793 00:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:49.793 00:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:49.793 00:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:50.052 [2024-07-25 00:43:12.589217] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:50.052 00:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:50.052 "name": "raid_bdev1", 00:16:50.052 "aliases": [ 00:16:50.052 "b477d264-1f9c-4555-aaa0-3221f621a857" 00:16:50.052 ], 00:16:50.052 "product_name": "Raid Volume", 00:16:50.052 "block_size": 512, 00:16:50.052 "num_blocks": 126976, 00:16:50.052 "uuid": "b477d264-1f9c-4555-aaa0-3221f621a857", 00:16:50.052 "assigned_rate_limits": { 00:16:50.052 "rw_ios_per_sec": 0, 00:16:50.052 "rw_mbytes_per_sec": 0, 00:16:50.052 "r_mbytes_per_sec": 0, 00:16:50.052 "w_mbytes_per_sec": 0 00:16:50.052 }, 
00:16:50.052 "claimed": false, 00:16:50.052 "zoned": false, 00:16:50.052 "supported_io_types": { 00:16:50.052 "read": true, 00:16:50.052 "write": true, 00:16:50.052 "unmap": true, 00:16:50.052 "flush": true, 00:16:50.052 "reset": true, 00:16:50.052 "nvme_admin": false, 00:16:50.052 "nvme_io": false, 00:16:50.052 "nvme_io_md": false, 00:16:50.052 "write_zeroes": true, 00:16:50.052 "zcopy": false, 00:16:50.052 "get_zone_info": false, 00:16:50.052 "zone_management": false, 00:16:50.052 "zone_append": false, 00:16:50.052 "compare": false, 00:16:50.052 "compare_and_write": false, 00:16:50.052 "abort": false, 00:16:50.052 "seek_hole": false, 00:16:50.052 "seek_data": false, 00:16:50.052 "copy": false, 00:16:50.052 "nvme_iov_md": false 00:16:50.052 }, 00:16:50.052 "memory_domains": [ 00:16:50.052 { 00:16:50.052 "dma_device_id": "system", 00:16:50.052 "dma_device_type": 1 00:16:50.052 }, 00:16:50.052 { 00:16:50.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.052 "dma_device_type": 2 00:16:50.052 }, 00:16:50.052 { 00:16:50.052 "dma_device_id": "system", 00:16:50.052 "dma_device_type": 1 00:16:50.052 }, 00:16:50.052 { 00:16:50.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.052 "dma_device_type": 2 00:16:50.052 } 00:16:50.052 ], 00:16:50.052 "driver_specific": { 00:16:50.052 "raid": { 00:16:50.052 "uuid": "b477d264-1f9c-4555-aaa0-3221f621a857", 00:16:50.052 "strip_size_kb": 64, 00:16:50.052 "state": "online", 00:16:50.052 "raid_level": "raid0", 00:16:50.052 "superblock": true, 00:16:50.052 "num_base_bdevs": 2, 00:16:50.052 "num_base_bdevs_discovered": 2, 00:16:50.052 "num_base_bdevs_operational": 2, 00:16:50.052 "base_bdevs_list": [ 00:16:50.052 { 00:16:50.052 "name": "pt1", 00:16:50.052 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:50.052 "is_configured": true, 00:16:50.052 "data_offset": 2048, 00:16:50.052 "data_size": 63488 00:16:50.052 }, 00:16:50.052 { 00:16:50.052 "name": "pt2", 00:16:50.052 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.052 "is_configured": true, 00:16:50.052 "data_offset": 2048, 00:16:50.052 "data_size": 63488 00:16:50.052 } 00:16:50.052 ] 00:16:50.052 } 00:16:50.052 } 00:16:50.052 }' 00:16:50.052 00:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:50.052 00:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:50.052 pt2' 00:16:50.052 00:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:50.053 00:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:50.053 00:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:50.312 00:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:50.312 "name": "pt1", 00:16:50.312 "aliases": [ 00:16:50.312 "00000000-0000-0000-0000-000000000001" 00:16:50.312 ], 00:16:50.312 "product_name": "passthru", 00:16:50.312 "block_size": 512, 00:16:50.312 "num_blocks": 65536, 00:16:50.312 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:50.312 "assigned_rate_limits": { 00:16:50.312 "rw_ios_per_sec": 0, 00:16:50.312 "rw_mbytes_per_sec": 0, 00:16:50.312 "r_mbytes_per_sec": 0, 00:16:50.312 "w_mbytes_per_sec": 0 00:16:50.312 }, 00:16:50.312 "claimed": true, 00:16:50.312 "claim_type": "exclusive_write", 00:16:50.312 "zoned": false, 00:16:50.312 
"supported_io_types": { 00:16:50.312 "read": true, 00:16:50.312 "write": true, 00:16:50.312 "unmap": true, 00:16:50.312 "flush": true, 00:16:50.312 "reset": true, 00:16:50.312 "nvme_admin": false, 00:16:50.312 "nvme_io": false, 00:16:50.312 "nvme_io_md": false, 00:16:50.312 "write_zeroes": true, 00:16:50.312 "zcopy": true, 00:16:50.312 "get_zone_info": false, 00:16:50.312 "zone_management": false, 00:16:50.312 "zone_append": false, 00:16:50.312 "compare": false, 00:16:50.312 "compare_and_write": false, 00:16:50.312 "abort": true, 00:16:50.312 "seek_hole": false, 00:16:50.312 "seek_data": false, 00:16:50.312 "copy": true, 00:16:50.312 "nvme_iov_md": false 00:16:50.312 }, 00:16:50.312 "memory_domains": [ 00:16:50.312 { 00:16:50.312 "dma_device_id": "system", 00:16:50.312 "dma_device_type": 1 00:16:50.312 }, 00:16:50.312 { 00:16:50.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.312 "dma_device_type": 2 00:16:50.312 } 00:16:50.312 ], 00:16:50.312 "driver_specific": { 00:16:50.312 "passthru": { 00:16:50.312 "name": "pt1", 00:16:50.312 "base_bdev_name": "malloc1" 00:16:50.312 } 00:16:50.312 } 00:16:50.312 }' 00:16:50.312 00:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:50.312 00:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:50.312 00:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:50.312 00:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:50.312 00:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:50.571 00:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:50.571 00:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:50.571 00:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:50.571 00:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:50.571 00:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:50.571 00:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:50.571 00:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:50.571 00:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:50.571 00:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:50.571 00:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:50.830 00:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:50.830 "name": "pt2", 00:16:50.830 "aliases": [ 00:16:50.830 "00000000-0000-0000-0000-000000000002" 00:16:50.830 ], 00:16:50.830 "product_name": "passthru", 00:16:50.830 "block_size": 512, 00:16:50.830 "num_blocks": 65536, 00:16:50.830 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.830 "assigned_rate_limits": { 00:16:50.830 "rw_ios_per_sec": 0, 00:16:50.830 "rw_mbytes_per_sec": 0, 00:16:50.830 "r_mbytes_per_sec": 0, 00:16:50.830 "w_mbytes_per_sec": 0 00:16:50.830 }, 00:16:50.830 "claimed": true, 00:16:50.830 "claim_type": "exclusive_write", 00:16:50.830 "zoned": false, 00:16:50.830 "supported_io_types": { 00:16:50.830 "read": true, 00:16:50.830 "write": true, 00:16:50.830 "unmap": true, 00:16:50.830 "flush": true, 00:16:50.830 
"reset": true, 00:16:50.830 "nvme_admin": false, 00:16:50.830 "nvme_io": false, 00:16:50.830 "nvme_io_md": false, 00:16:50.830 "write_zeroes": true, 00:16:50.830 "zcopy": true, 00:16:50.830 "get_zone_info": false, 00:16:50.830 "zone_management": false, 00:16:50.830 "zone_append": false, 00:16:50.830 "compare": false, 00:16:50.830 "compare_and_write": false, 00:16:50.830 "abort": true, 00:16:50.830 "seek_hole": false, 00:16:50.830 "seek_data": false, 00:16:50.830 "copy": true, 00:16:50.830 "nvme_iov_md": false 00:16:50.830 }, 00:16:50.830 "memory_domains": [ 00:16:50.830 { 00:16:50.830 "dma_device_id": "system", 00:16:50.830 "dma_device_type": 1 00:16:50.830 }, 00:16:50.830 { 00:16:50.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.830 "dma_device_type": 2 00:16:50.830 } 00:16:50.830 ], 00:16:50.830 "driver_specific": { 00:16:50.830 "passthru": { 00:16:50.830 "name": "pt2", 00:16:50.830 "base_bdev_name": "malloc2" 00:16:50.830 } 00:16:50.830 } 00:16:50.830 }' 00:16:50.830 00:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:51.089 00:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:51.089 00:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:51.089 00:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:51.089 00:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:51.089 00:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:51.089 00:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:51.089 00:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:51.089 00:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:51.089 00:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:51.346 00:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:51.346 00:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:51.346 00:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:16:51.346 00:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:51.604 [2024-07-25 00:43:14.065483] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:51.604 00:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=b477d264-1f9c-4555-aaa0-3221f621a857 00:16:51.604 00:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z b477d264-1f9c-4555-aaa0-3221f621a857 ']' 00:16:51.604 00:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:51.604 [2024-07-25 00:43:14.249260] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:51.604 [2024-07-25 00:43:14.249519] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:51.604 [2024-07-25 00:43:14.249697] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:51.604 [2024-07-25 00:43:14.249786] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:16:51.604 [2024-07-25 00:43:14.249924] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:16:51.862 00:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.862 00:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:16:52.121 00:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:16:52.121 00:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:16:52.121 00:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:52.121 00:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:52.379 00:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:52.379 00:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:52.379 00:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:52.379 00:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:52.638 00:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:16:52.638 00:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:16:52.638 00:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:16:52.638 00:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:16:52.638 00:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:52.638 00:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:52.638 00:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:52.638 00:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:52.638 00:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:52.638 00:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:52.638 00:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:52.638 00:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:52.638 00:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:16:52.906 [2024-07-25 00:43:15.325471] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:52.906 [2024-07-25 00:43:15.327615] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:52.906 [2024-07-25 00:43:15.327803] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:52.906 [2024-07-25 00:43:15.327965] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:52.906 [2024-07-25 00:43:15.328063] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:52.906 [2024-07-25 00:43:15.328097] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:16:52.906 request: 00:16:52.906 { 00:16:52.906 "name": "raid_bdev1", 00:16:52.906 "raid_level": "raid0", 00:16:52.906 "base_bdevs": [ 00:16:52.906 "malloc1", 00:16:52.906 "malloc2" 00:16:52.906 ], 00:16:52.906 "strip_size_kb": 64, 00:16:52.906 "superblock": false, 00:16:52.906 "method": "bdev_raid_create", 00:16:52.906 "req_id": 1 00:16:52.906 } 00:16:52.906 Got JSON-RPC error response 00:16:52.906 response: 00:16:52.906 { 00:16:52.906 "code": -17, 00:16:52.906 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:52.906 } 00:16:52.906 00:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:16:52.906 00:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:52.906 00:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:52.906 00:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:52.906 00:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.906 00:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:16:52.906 00:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:16:52.906 00:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:16:52.906 00:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:53.180 [2024-07-25 00:43:15.797518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:53.180 [2024-07-25 00:43:15.797786] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.180 [2024-07-25 00:43:15.797846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:53.180 [2024-07-25 00:43:15.797948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.180 [2024-07-25 00:43:15.800226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.180 [2024-07-25 00:43:15.800392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:53.180 [2024-07-25 00:43:15.800577] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:53.180 [2024-07-25 00:43:15.800720] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:53.180 pt1 00:16:53.180 00:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:16:53.180 00:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:53.180 00:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:53.180 00:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:53.180 00:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:53.180 00:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:53.180 00:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:53.180 00:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:53.180 00:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:53.180 00:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:53.180 00:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.180 00:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.439 00:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:53.439 "name": "raid_bdev1", 00:16:53.439 "uuid": "b477d264-1f9c-4555-aaa0-3221f621a857", 00:16:53.439 "strip_size_kb": 64, 00:16:53.439 "state": "configuring", 00:16:53.439 "raid_level": "raid0", 00:16:53.439 "superblock": true, 00:16:53.439 "num_base_bdevs": 2, 00:16:53.439 "num_base_bdevs_discovered": 1, 00:16:53.439 "num_base_bdevs_operational": 2, 00:16:53.439 "base_bdevs_list": [ 00:16:53.439 { 00:16:53.439 "name": "pt1", 00:16:53.439 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:53.439 "is_configured": true, 00:16:53.439 "data_offset": 2048, 00:16:53.439 "data_size": 63488 00:16:53.439 }, 00:16:53.439 { 00:16:53.439 "name": null, 00:16:53.439 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:53.439 "is_configured": false, 00:16:53.439 "data_offset": 2048, 00:16:53.439 "data_size": 63488 00:16:53.439 } 00:16:53.439 ] 00:16:53.439 }' 00:16:53.439 00:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:53.439 00:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.007 00:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:16:54.007 00:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:16:54.007 00:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:54.008 00:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:54.267 [2024-07-25 00:43:16.765695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:54.267 [2024-07-25 00:43:16.765933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.267 [2024-07-25 00:43:16.765997] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:54.267 [2024-07-25 00:43:16.766088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.267 [2024-07-25 
00:43:16.766580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.267 [2024-07-25 00:43:16.766742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:54.267 [2024-07-25 00:43:16.766916] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:54.267 [2024-07-25 00:43:16.766965] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:54.267 [2024-07-25 00:43:16.767144] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:16:54.267 [2024-07-25 00:43:16.767181] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:54.267 [2024-07-25 00:43:16.767392] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:54.267 [2024-07-25 00:43:16.767747] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:16:54.267 [2024-07-25 00:43:16.767855] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:16:54.267 [2024-07-25 00:43:16.768045] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.267 pt2 00:16:54.267 00:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:54.267 00:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:54.267 00:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:16:54.267 00:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:54.267 00:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:54.267 00:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:54.267 00:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:54.267 00:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:54.267 00:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:54.267 00:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:54.267 00:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:54.267 00:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:54.267 00:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.267 00:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.526 00:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:54.526 "name": "raid_bdev1", 00:16:54.526 "uuid": "b477d264-1f9c-4555-aaa0-3221f621a857", 00:16:54.526 "strip_size_kb": 64, 00:16:54.526 "state": "online", 00:16:54.526 "raid_level": "raid0", 00:16:54.526 "superblock": true, 00:16:54.526 "num_base_bdevs": 2, 00:16:54.526 "num_base_bdevs_discovered": 2, 00:16:54.526 "num_base_bdevs_operational": 2, 00:16:54.526 "base_bdevs_list": [ 00:16:54.526 { 00:16:54.526 "name": "pt1", 00:16:54.526 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:54.526 "is_configured": true, 00:16:54.526 "data_offset": 2048, 00:16:54.526 
"data_size": 63488 00:16:54.526 }, 00:16:54.526 { 00:16:54.526 "name": "pt2", 00:16:54.526 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:54.526 "is_configured": true, 00:16:54.526 "data_offset": 2048, 00:16:54.526 "data_size": 63488 00:16:54.526 } 00:16:54.526 ] 00:16:54.526 }' 00:16:54.526 00:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:54.526 00:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.095 00:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:16:55.095 00:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:55.095 00:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:55.095 00:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:55.095 00:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:55.095 00:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:55.095 00:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:55.095 00:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:55.095 [2024-07-25 00:43:17.739173] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:55.355 00:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:55.355 "name": "raid_bdev1", 00:16:55.355 "aliases": [ 00:16:55.355 "b477d264-1f9c-4555-aaa0-3221f621a857" 00:16:55.355 ], 00:16:55.355 "product_name": "Raid Volume", 00:16:55.355 "block_size": 512, 00:16:55.355 "num_blocks": 126976, 00:16:55.355 "uuid": "b477d264-1f9c-4555-aaa0-3221f621a857", 00:16:55.355 "assigned_rate_limits": { 00:16:55.355 "rw_ios_per_sec": 0, 00:16:55.355 "rw_mbytes_per_sec": 0, 00:16:55.355 "r_mbytes_per_sec": 0, 00:16:55.355 "w_mbytes_per_sec": 0 00:16:55.355 }, 00:16:55.355 "claimed": false, 00:16:55.355 "zoned": false, 00:16:55.355 "supported_io_types": { 00:16:55.355 "read": true, 00:16:55.355 "write": true, 00:16:55.355 "unmap": true, 00:16:55.355 "flush": true, 00:16:55.355 "reset": true, 00:16:55.355 "nvme_admin": false, 00:16:55.355 "nvme_io": false, 00:16:55.355 "nvme_io_md": false, 00:16:55.355 "write_zeroes": true, 00:16:55.355 "zcopy": false, 00:16:55.355 "get_zone_info": false, 00:16:55.355 "zone_management": false, 00:16:55.355 "zone_append": false, 00:16:55.355 "compare": false, 00:16:55.355 "compare_and_write": false, 00:16:55.355 "abort": false, 00:16:55.355 "seek_hole": false, 00:16:55.355 "seek_data": false, 00:16:55.355 "copy": false, 00:16:55.355 "nvme_iov_md": false 00:16:55.355 }, 00:16:55.355 "memory_domains": [ 00:16:55.355 { 00:16:55.355 "dma_device_id": "system", 00:16:55.355 "dma_device_type": 1 00:16:55.355 }, 00:16:55.355 { 00:16:55.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.355 "dma_device_type": 2 00:16:55.355 }, 00:16:55.355 { 00:16:55.355 "dma_device_id": "system", 00:16:55.355 "dma_device_type": 1 00:16:55.355 }, 00:16:55.355 { 00:16:55.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.355 "dma_device_type": 2 00:16:55.355 } 00:16:55.355 ], 00:16:55.355 "driver_specific": { 00:16:55.355 "raid": { 00:16:55.355 "uuid": "b477d264-1f9c-4555-aaa0-3221f621a857", 00:16:55.355 "strip_size_kb": 64, 00:16:55.355 "state": 
"online", 00:16:55.355 "raid_level": "raid0", 00:16:55.355 "superblock": true, 00:16:55.355 "num_base_bdevs": 2, 00:16:55.355 "num_base_bdevs_discovered": 2, 00:16:55.355 "num_base_bdevs_operational": 2, 00:16:55.355 "base_bdevs_list": [ 00:16:55.355 { 00:16:55.355 "name": "pt1", 00:16:55.355 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:55.355 "is_configured": true, 00:16:55.355 "data_offset": 2048, 00:16:55.355 "data_size": 63488 00:16:55.355 }, 00:16:55.355 { 00:16:55.355 "name": "pt2", 00:16:55.355 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:55.355 "is_configured": true, 00:16:55.355 "data_offset": 2048, 00:16:55.355 "data_size": 63488 00:16:55.355 } 00:16:55.355 ] 00:16:55.355 } 00:16:55.355 } 00:16:55.355 }' 00:16:55.355 00:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:55.355 00:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:55.355 pt2' 00:16:55.355 00:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:55.355 00:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:55.355 00:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:55.615 00:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:55.615 "name": "pt1", 00:16:55.615 "aliases": [ 00:16:55.615 "00000000-0000-0000-0000-000000000001" 00:16:55.615 ], 00:16:55.615 "product_name": "passthru", 00:16:55.615 "block_size": 512, 00:16:55.615 "num_blocks": 65536, 00:16:55.615 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:55.615 "assigned_rate_limits": { 00:16:55.615 "rw_ios_per_sec": 0, 00:16:55.615 "rw_mbytes_per_sec": 0, 00:16:55.615 "r_mbytes_per_sec": 0, 00:16:55.615 "w_mbytes_per_sec": 0 00:16:55.615 }, 00:16:55.615 "claimed": true, 00:16:55.615 "claim_type": "exclusive_write", 00:16:55.615 "zoned": false, 00:16:55.615 "supported_io_types": { 00:16:55.615 "read": true, 00:16:55.615 "write": true, 00:16:55.615 "unmap": true, 00:16:55.615 "flush": true, 00:16:55.615 "reset": true, 00:16:55.615 "nvme_admin": false, 00:16:55.615 "nvme_io": false, 00:16:55.615 "nvme_io_md": false, 00:16:55.615 "write_zeroes": true, 00:16:55.615 "zcopy": true, 00:16:55.615 "get_zone_info": false, 00:16:55.615 "zone_management": false, 00:16:55.615 "zone_append": false, 00:16:55.615 "compare": false, 00:16:55.615 "compare_and_write": false, 00:16:55.615 "abort": true, 00:16:55.615 "seek_hole": false, 00:16:55.615 "seek_data": false, 00:16:55.615 "copy": true, 00:16:55.615 "nvme_iov_md": false 00:16:55.615 }, 00:16:55.615 "memory_domains": [ 00:16:55.615 { 00:16:55.615 "dma_device_id": "system", 00:16:55.615 "dma_device_type": 1 00:16:55.615 }, 00:16:55.615 { 00:16:55.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.615 "dma_device_type": 2 00:16:55.615 } 00:16:55.615 ], 00:16:55.615 "driver_specific": { 00:16:55.615 "passthru": { 00:16:55.615 "name": "pt1", 00:16:55.615 "base_bdev_name": "malloc1" 00:16:55.615 } 00:16:55.615 } 00:16:55.615 }' 00:16:55.615 00:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:55.615 00:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:55.615 00:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:16:55.615 00:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:55.615 00:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:55.875 00:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:55.875 00:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:55.875 00:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:55.875 00:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:55.875 00:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:55.875 00:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:55.875 00:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:55.875 00:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:55.875 00:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:55.875 00:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:56.134 00:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:56.134 "name": "pt2", 00:16:56.134 "aliases": [ 00:16:56.134 "00000000-0000-0000-0000-000000000002" 00:16:56.134 ], 00:16:56.134 "product_name": "passthru", 00:16:56.134 "block_size": 512, 00:16:56.134 "num_blocks": 65536, 00:16:56.134 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:56.134 "assigned_rate_limits": { 00:16:56.134 "rw_ios_per_sec": 0, 00:16:56.134 "rw_mbytes_per_sec": 0, 00:16:56.134 "r_mbytes_per_sec": 0, 00:16:56.134 "w_mbytes_per_sec": 0 00:16:56.134 }, 00:16:56.134 "claimed": true, 00:16:56.134 "claim_type": "exclusive_write", 00:16:56.134 "zoned": false, 00:16:56.134 "supported_io_types": { 00:16:56.134 "read": true, 00:16:56.134 "write": true, 00:16:56.134 "unmap": true, 00:16:56.134 "flush": true, 00:16:56.135 "reset": true, 00:16:56.135 "nvme_admin": false, 00:16:56.135 "nvme_io": false, 00:16:56.135 "nvme_io_md": false, 00:16:56.135 "write_zeroes": true, 00:16:56.135 "zcopy": true, 00:16:56.135 "get_zone_info": false, 00:16:56.135 "zone_management": false, 00:16:56.135 "zone_append": false, 00:16:56.135 "compare": false, 00:16:56.135 "compare_and_write": false, 00:16:56.135 "abort": true, 00:16:56.135 "seek_hole": false, 00:16:56.135 "seek_data": false, 00:16:56.135 "copy": true, 00:16:56.135 "nvme_iov_md": false 00:16:56.135 }, 00:16:56.135 "memory_domains": [ 00:16:56.135 { 00:16:56.135 "dma_device_id": "system", 00:16:56.135 "dma_device_type": 1 00:16:56.135 }, 00:16:56.135 { 00:16:56.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.135 "dma_device_type": 2 00:16:56.135 } 00:16:56.135 ], 00:16:56.135 "driver_specific": { 00:16:56.135 "passthru": { 00:16:56.135 "name": "pt2", 00:16:56.135 "base_bdev_name": "malloc2" 00:16:56.135 } 00:16:56.135 } 00:16:56.135 }' 00:16:56.135 00:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:56.135 00:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:56.135 00:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:56.135 00:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:56.135 00:43:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:56.394 00:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:56.394 00:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:56.395 00:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:56.395 00:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:56.395 00:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:56.395 00:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:56.395 00:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:56.395 00:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:56.395 00:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:16:56.655 [2024-07-25 00:43:19.243115] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:56.655 00:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' b477d264-1f9c-4555-aaa0-3221f621a857 '!=' b477d264-1f9c-4555-aaa0-3221f621a857 ']' 00:16:56.655 00:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:16:56.655 00:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:56.655 00:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:56.655 00:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 122331 00:16:56.655 00:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 122331 ']' 00:16:56.655 00:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 122331 00:16:56.655 00:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:16:56.655 00:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:56.655 00:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 122331 00:16:56.655 00:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:56.655 00:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:56.655 00:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 122331' 00:16:56.655 killing process with pid 122331 00:16:56.655 00:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 122331 00:16:56.655 [2024-07-25 00:43:19.302642] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:56.655 00:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 122331 00:16:56.655 [2024-07-25 00:43:19.302864] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:56.655 [2024-07-25 00:43:19.303007] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:56.655 [2024-07-25 00:43:19.303085] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:16:56.914 [2024-07-25 00:43:19.507947] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:58.294 00:43:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@564 -- # return 0 00:16:58.294 00:16:58.294 real 0m11.400s 00:16:58.294 user 0m19.239s 00:16:58.294 sys 0m1.708s 00:16:58.294 ************************************ 00:16:58.294 END TEST raid_superblock_test 00:16:58.294 ************************************ 00:16:58.294 00:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:58.294 00:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.294 00:43:20 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:16:58.294 00:43:20 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:58.294 00:43:20 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:58.294 00:43:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:58.554 ************************************ 00:16:58.554 START TEST raid_read_error_test 00:16:58.554 ************************************ 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 read 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.Lr1nlG26gX 00:16:58.554 00:43:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=122700 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 122700 /var/tmp/spdk-raid.sock 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 122700 ']' 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:58.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:58.554 00:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.554 [2024-07-25 00:43:21.055249] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:16:58.554 [2024-07-25 00:43:21.055754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122700 ] 00:16:58.813 [2024-07-25 00:43:21.239471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.813 [2024-07-25 00:43:21.432059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.073 [2024-07-25 00:43:21.622797] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:59.641 00:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:59.641 00:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:16:59.641 00:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:59.641 00:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:59.641 BaseBdev1_malloc 00:16:59.641 00:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:59.900 true 00:16:59.900 00:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:00.185 [2024-07-25 00:43:22.648537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:00.185 [2024-07-25 00:43:22.648797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.185 [2024-07-25 00:43:22.648938] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:17:00.185 [2024-07-25 00:43:22.649027] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.185 [2024-07-25 00:43:22.651382] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.185 [2024-07-25 00:43:22.651534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:00.185 BaseBdev1 00:17:00.185 00:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:00.185 00:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:00.455 BaseBdev2_malloc 00:17:00.455 00:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:00.714 true 00:17:00.714 00:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:00.973 [2024-07-25 00:43:23.402175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:00.973 [2024-07-25 00:43:23.402572] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.973 [2024-07-25 00:43:23.402656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:00.973 [2024-07-25 00:43:23.402854] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.973 [2024-07-25 00:43:23.405223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.973 [2024-07-25 00:43:23.405380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:00.973 BaseBdev2 00:17:00.973 00:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:17:00.973 [2024-07-25 00:43:23.582290] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:00.973 [2024-07-25 00:43:23.584450] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:00.973 [2024-07-25 00:43:23.584844] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:17:00.973 [2024-07-25 00:43:23.584958] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:00.973 [2024-07-25 00:43:23.585141] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:17:00.973 [2024-07-25 00:43:23.585538] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:17:00.973 [2024-07-25 00:43:23.585650] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:17:00.973 [2024-07-25 00:43:23.585870] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.973 00:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:17:00.973 00:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:00.973 00:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:00.973 00:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:00.973 00:43:23 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:00.973 00:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:00.973 00:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:00.973 00:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:00.973 00:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:00.973 00:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:00.973 00:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.973 00:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.232 00:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:01.232 "name": "raid_bdev1", 00:17:01.232 "uuid": "b0bbd4b2-7e18-49ad-a492-6e53a90374dd", 00:17:01.232 "strip_size_kb": 64, 00:17:01.232 "state": "online", 00:17:01.232 "raid_level": "raid0", 00:17:01.232 "superblock": true, 00:17:01.232 "num_base_bdevs": 2, 00:17:01.232 "num_base_bdevs_discovered": 2, 00:17:01.232 "num_base_bdevs_operational": 2, 00:17:01.232 "base_bdevs_list": [ 00:17:01.232 { 00:17:01.232 "name": "BaseBdev1", 00:17:01.232 "uuid": "c30eded1-7ed7-5175-9c44-877290d8ea8d", 00:17:01.232 "is_configured": true, 00:17:01.232 "data_offset": 2048, 00:17:01.232 "data_size": 63488 00:17:01.232 }, 00:17:01.232 { 00:17:01.232 "name": "BaseBdev2", 00:17:01.232 "uuid": "fe4b6aa8-8695-5a98-8c8d-0f714233fb68", 00:17:01.232 "is_configured": true, 00:17:01.232 "data_offset": 2048, 00:17:01.232 "data_size": 63488 00:17:01.232 } 00:17:01.232 ] 00:17:01.232 }' 00:17:01.232 00:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:01.232 00:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.800 00:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:01.800 00:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:01.800 [2024-07-25 00:43:24.383631] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:17:02.737 00:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:02.995 00:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:02.995 00:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:17:02.995 00:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:17:02.995 00:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:17:02.995 00:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:02.995 00:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:02.995 00:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:02.995 00:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # 
local strip_size=64 00:17:02.995 00:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:02.995 00:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:02.996 00:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:02.996 00:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:02.996 00:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:02.996 00:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.996 00:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.255 00:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:03.255 "name": "raid_bdev1", 00:17:03.255 "uuid": "b0bbd4b2-7e18-49ad-a492-6e53a90374dd", 00:17:03.255 "strip_size_kb": 64, 00:17:03.255 "state": "online", 00:17:03.255 "raid_level": "raid0", 00:17:03.255 "superblock": true, 00:17:03.255 "num_base_bdevs": 2, 00:17:03.255 "num_base_bdevs_discovered": 2, 00:17:03.255 "num_base_bdevs_operational": 2, 00:17:03.255 "base_bdevs_list": [ 00:17:03.255 { 00:17:03.255 "name": "BaseBdev1", 00:17:03.255 "uuid": "c30eded1-7ed7-5175-9c44-877290d8ea8d", 00:17:03.255 "is_configured": true, 00:17:03.255 "data_offset": 2048, 00:17:03.255 "data_size": 63488 00:17:03.255 }, 00:17:03.255 { 00:17:03.255 "name": "BaseBdev2", 00:17:03.255 "uuid": "fe4b6aa8-8695-5a98-8c8d-0f714233fb68", 00:17:03.255 "is_configured": true, 00:17:03.255 "data_offset": 2048, 00:17:03.255 "data_size": 63488 00:17:03.255 } 00:17:03.255 ] 00:17:03.255 }' 00:17:03.255 00:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:03.255 00:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.824 00:43:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:04.083 [2024-07-25 00:43:26.623593] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:04.083 [2024-07-25 00:43:26.623867] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:04.083 [2024-07-25 00:43:26.626544] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.083 [2024-07-25 00:43:26.626706] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.083 [2024-07-25 00:43:26.626769] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:04.083 [2024-07-25 00:43:26.626862] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:17:04.083 0 00:17:04.083 00:43:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 122700 00:17:04.083 00:43:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 122700 ']' 00:17:04.083 00:43:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 122700 00:17:04.083 00:43:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:17:04.083 00:43:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:04.083 
00:43:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 122700 00:17:04.083 00:43:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:04.083 00:43:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:04.083 00:43:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 122700' 00:17:04.083 killing process with pid 122700 00:17:04.083 00:43:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 122700 00:17:04.083 00:43:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 122700 00:17:04.083 [2024-07-25 00:43:26.673745] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:04.343 [2024-07-25 00:43:26.800586] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:05.721 00:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.Lr1nlG26gX 00:17:05.721 00:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:05.721 00:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:05.721 00:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:17:05.721 00:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:17:05.721 00:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:05.721 00:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:05.721 00:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:17:05.721 00:17:05.721 real 0m7.164s 00:17:05.721 user 0m10.401s 00:17:05.721 sys 0m0.916s 00:17:05.721 00:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:05.721 ************************************ 00:17:05.721 END TEST raid_read_error_test 00:17:05.721 ************************************ 00:17:05.721 00:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.721 00:43:28 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:17:05.721 00:43:28 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:05.721 00:43:28 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:05.721 00:43:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:05.721 ************************************ 00:17:05.721 START TEST raid_write_error_test 00:17:05.721 ************************************ 00:17:05.721 00:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 write 00:17:05.721 00:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:17:05.721 00:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:17:05.721 00:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:17:05.721 00:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:05.721 00:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:05.721 00:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:17:05.721 00:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ 
)) 00:17:05.721 00:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:05.721 00:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:17:05.721 00:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:05.721 00:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:05.721 00:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:05.721 00:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:05.721 00:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:05.721 00:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:05.721 00:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:05.721 00:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:05.721 00:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:05.721 00:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:17:05.721 00:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:05.721 00:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:17:05.721 00:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:05.721 00:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.u0DVgV9VJb 00:17:05.721 00:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=122892 00:17:05.721 00:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 122892 /var/tmp/spdk-raid.sock 00:17:05.721 00:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 122892 ']' 00:17:05.721 00:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:05.721 00:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:05.721 00:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:05.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:05.722 00:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:05.722 00:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:05.722 00:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.722 [2024-07-25 00:43:28.285888] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
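The startup lines that follow show this bdevperf instance coming up in wait mode; once its RPC socket is live, raid_write_error_test builds the same two-level stack the read test used above and then turns on write failures. Condensed into a standalone sketch (not part of the captured output): every rpc.py and bdevperf.py invocation below appears verbatim in the trace, while the loop, the backgrounding, and the final wait are assumptions about how the script sequences them; it presumes bdevperf was started with '-z -r /var/tmp/spdk-raid.sock' as shown here, with its output landing in /raidtest/tmp.u0DVgV9VJb, the log file the test greps at the end.

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Per base bdev: 32 MiB malloc bdev (512-byte blocks) -> error bdev wrapper
# (named EE_<malloc name>) -> passthru bdev that the raid volume will claim.
for b in BaseBdev1 BaseBdev2; do
    $rpc bdev_malloc_create 32 512 -b ${b}_malloc
    $rpc bdev_error_create ${b}_malloc
    $rpc bdev_passthru_create -b EE_${b}_malloc -p $b
done

# Assemble both passthru bdevs into the raid0 volume (64 KiB strip, with a
# superblock) that bdevperf targets via -T raid_bdev1.
$rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s

# Kick off the queued randrw workload, then make every write to the first
# base bdev fail; raid0 has no redundancy, so a non-zero failure rate must
# show up for raid_bdev1.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests &
sleep 1
$rpc bdev_error_inject_error EE_BaseBdev1_malloc write failure
wait  # (sketch) let the timed bdevperf run finish before reading its log

# Failure-rate check used further down in the trace: column 6 of the
# raid_bdev1 line in the bdevperf log must differ from 0.00.
grep -v Job /raidtest/tmp.u0DVgV9VJb | grep raid_bdev1 | awk '{print $6}'

The passthru layer is what lets the raid claim a bdev named BaseBdevN while failures are injected one level below it, in the EE_* error bdev wrapping the malloc disk.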
00:17:05.722 [2024-07-25 00:43:28.286404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122892 ] 00:17:05.981 [2024-07-25 00:43:28.463030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.239 [2024-07-25 00:43:28.660121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.239 [2024-07-25 00:43:28.856050] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.805 00:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:06.805 00:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:17:06.805 00:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:06.805 00:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:06.805 BaseBdev1_malloc 00:17:06.805 00:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:07.063 true 00:17:07.063 00:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:07.321 [2024-07-25 00:43:29.803549] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:07.321 [2024-07-25 00:43:29.803782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.321 [2024-07-25 00:43:29.803918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:17:07.321 [2024-07-25 00:43:29.804036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.321 [2024-07-25 00:43:29.806370] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.321 [2024-07-25 00:43:29.806518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:07.321 BaseBdev1 00:17:07.321 00:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:07.321 00:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:07.578 BaseBdev2_malloc 00:17:07.578 00:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:07.836 true 00:17:07.836 00:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:08.095 [2024-07-25 00:43:30.491278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:08.095 [2024-07-25 00:43:30.491667] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.095 [2024-07-25 00:43:30.491744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:08.095 [2024-07-25 
00:43:30.491850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.095 [2024-07-25 00:43:30.494173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.095 [2024-07-25 00:43:30.494371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:08.095 BaseBdev2 00:17:08.095 00:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:17:08.095 [2024-07-25 00:43:30.683377] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:08.095 [2024-07-25 00:43:30.685531] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:08.095 [2024-07-25 00:43:30.685875] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:17:08.095 [2024-07-25 00:43:30.685987] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:08.095 [2024-07-25 00:43:30.686137] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:17:08.095 [2024-07-25 00:43:30.686575] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:17:08.095 [2024-07-25 00:43:30.686692] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:17:08.095 [2024-07-25 00:43:30.686936] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.095 00:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:17:08.095 00:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:08.095 00:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:08.095 00:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:08.095 00:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:08.095 00:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:08.095 00:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:08.095 00:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:08.095 00:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:08.095 00:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:08.095 00:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.095 00:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.366 00:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:08.366 "name": "raid_bdev1", 00:17:08.366 "uuid": "af3cd9df-336d-489e-be33-172220263203", 00:17:08.366 "strip_size_kb": 64, 00:17:08.366 "state": "online", 00:17:08.366 "raid_level": "raid0", 00:17:08.366 "superblock": true, 00:17:08.366 "num_base_bdevs": 2, 00:17:08.366 "num_base_bdevs_discovered": 2, 00:17:08.366 "num_base_bdevs_operational": 2, 00:17:08.366 "base_bdevs_list": [ 00:17:08.366 { 00:17:08.366 
"name": "BaseBdev1", 00:17:08.366 "uuid": "c067723d-4043-52ca-9744-f5fd3848d259", 00:17:08.366 "is_configured": true, 00:17:08.366 "data_offset": 2048, 00:17:08.366 "data_size": 63488 00:17:08.366 }, 00:17:08.366 { 00:17:08.366 "name": "BaseBdev2", 00:17:08.366 "uuid": "dd8edd2c-dc22-5f47-9472-6268c62322db", 00:17:08.366 "is_configured": true, 00:17:08.366 "data_offset": 2048, 00:17:08.366 "data_size": 63488 00:17:08.366 } 00:17:08.366 ] 00:17:08.366 }' 00:17:08.366 00:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:08.366 00:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.959 00:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:08.959 00:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:08.959 [2024-07-25 00:43:31.528560] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:17:09.895 00:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:10.154 00:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:10.154 00:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:17:10.154 00:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:17:10.154 00:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:17:10.154 00:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:10.154 00:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:10.154 00:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:10.154 00:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:10.154 00:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:10.154 00:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:10.154 00:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:10.154 00:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:10.154 00:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:10.154 00:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.154 00:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.412 00:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:10.412 "name": "raid_bdev1", 00:17:10.412 "uuid": "af3cd9df-336d-489e-be33-172220263203", 00:17:10.412 "strip_size_kb": 64, 00:17:10.412 "state": "online", 00:17:10.412 "raid_level": "raid0", 00:17:10.412 "superblock": true, 00:17:10.412 "num_base_bdevs": 2, 00:17:10.412 "num_base_bdevs_discovered": 2, 00:17:10.412 "num_base_bdevs_operational": 2, 00:17:10.412 "base_bdevs_list": [ 00:17:10.412 { 00:17:10.412 
"name": "BaseBdev1", 00:17:10.412 "uuid": "c067723d-4043-52ca-9744-f5fd3848d259", 00:17:10.412 "is_configured": true, 00:17:10.412 "data_offset": 2048, 00:17:10.412 "data_size": 63488 00:17:10.412 }, 00:17:10.412 { 00:17:10.412 "name": "BaseBdev2", 00:17:10.412 "uuid": "dd8edd2c-dc22-5f47-9472-6268c62322db", 00:17:10.412 "is_configured": true, 00:17:10.412 "data_offset": 2048, 00:17:10.412 "data_size": 63488 00:17:10.412 } 00:17:10.412 ] 00:17:10.412 }' 00:17:10.412 00:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:10.412 00:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.978 00:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:11.237 [2024-07-25 00:43:33.719864] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:11.237 [2024-07-25 00:43:33.720138] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:11.237 [2024-07-25 00:43:33.722833] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:11.237 [2024-07-25 00:43:33.722977] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.237 [2024-07-25 00:43:33.723038] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:11.237 [2024-07-25 00:43:33.723108] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:17:11.237 0 00:17:11.237 00:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 122892 00:17:11.237 00:43:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 122892 ']' 00:17:11.237 00:43:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 122892 00:17:11.237 00:43:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:17:11.237 00:43:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:11.237 00:43:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 122892 00:17:11.237 00:43:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:11.237 00:43:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:11.237 00:43:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 122892' 00:17:11.237 killing process with pid 122892 00:17:11.237 00:43:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 122892 00:17:11.237 [2024-07-25 00:43:33.782202] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:11.237 00:43:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 122892 00:17:11.496 [2024-07-25 00:43:33.906054] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:12.872 00:43:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.u0DVgV9VJb 00:17:12.872 00:43:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:12.872 00:43:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:12.872 00:43:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.46 00:17:12.872 
00:43:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:17:12.872 00:43:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:12.872 00:43:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:12.873 00:43:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.46 != \0\.\0\0 ]] 00:17:12.873 00:17:12.873 real 0m7.033s 00:17:12.873 user 0m10.174s 00:17:12.873 sys 0m0.923s 00:17:12.873 00:43:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:12.873 ************************************ 00:17:12.873 END TEST raid_write_error_test 00:17:12.873 ************************************ 00:17:12.873 00:43:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.873 00:43:35 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:17:12.873 00:43:35 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:17:12.873 00:43:35 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:12.873 00:43:35 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:12.873 00:43:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:12.873 ************************************ 00:17:12.873 START TEST raid_state_function_test 00:17:12.873 ************************************ 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 false 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 
00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=123083 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 123083' 00:17:12.873 Process raid pid: 123083 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 123083 /var/tmp/spdk-raid.sock 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 123083 ']' 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:12.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:12.873 00:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.873 [2024-07-25 00:43:35.379435] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:17:12.873 [2024-07-25 00:43:35.379834] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:13.132 [2024-07-25 00:43:35.560315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.132 [2024-07-25 00:43:35.764849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.391 [2024-07-25 00:43:35.970558] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:13.649 00:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:13.649 00:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:17:13.649 00:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:13.908 [2024-07-25 00:43:36.493804] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:13.908 [2024-07-25 00:43:36.494110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:13.908 [2024-07-25 00:43:36.494202] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:13.908 [2024-07-25 00:43:36.494279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:13.908 00:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:13.908 00:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:13.908 00:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:13.908 00:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:13.908 00:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:13.908 00:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:13.908 00:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:13.908 00:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:13.908 00:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:13.908 00:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:13.908 00:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.908 00:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.167 00:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:14.167 "name": "Existed_Raid", 00:17:14.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.167 "strip_size_kb": 64, 00:17:14.167 "state": "configuring", 00:17:14.167 "raid_level": "concat", 00:17:14.167 "superblock": false, 00:17:14.167 "num_base_bdevs": 2, 00:17:14.167 "num_base_bdevs_discovered": 0, 00:17:14.167 "num_base_bdevs_operational": 2, 00:17:14.167 
"base_bdevs_list": [ 00:17:14.167 { 00:17:14.167 "name": "BaseBdev1", 00:17:14.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.167 "is_configured": false, 00:17:14.167 "data_offset": 0, 00:17:14.167 "data_size": 0 00:17:14.167 }, 00:17:14.167 { 00:17:14.167 "name": "BaseBdev2", 00:17:14.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.167 "is_configured": false, 00:17:14.167 "data_offset": 0, 00:17:14.167 "data_size": 0 00:17:14.167 } 00:17:14.167 ] 00:17:14.167 }' 00:17:14.167 00:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:14.167 00:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.734 00:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:14.994 [2024-07-25 00:43:37.465887] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:14.994 [2024-07-25 00:43:37.466143] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:14.994 00:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:14.994 [2024-07-25 00:43:37.641927] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:14.994 [2024-07-25 00:43:37.642193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:14.994 [2024-07-25 00:43:37.642296] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:14.994 [2024-07-25 00:43:37.642354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:15.253 00:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:15.513 [2024-07-25 00:43:37.914648] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:15.513 BaseBdev1 00:17:15.513 00:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:15.513 00:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:15.513 00:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:15.513 00:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:15.513 00:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:15.513 00:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:15.513 00:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:15.772 00:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:15.772 [ 00:17:15.772 { 00:17:15.772 "name": "BaseBdev1", 00:17:15.772 "aliases": [ 00:17:15.772 "721a280c-8587-4bfe-ac27-b56cf83ddf8b" 00:17:15.772 ], 00:17:15.772 "product_name": "Malloc disk", 00:17:15.772 "block_size": 512, 
00:17:15.772 "num_blocks": 65536, 00:17:15.772 "uuid": "721a280c-8587-4bfe-ac27-b56cf83ddf8b", 00:17:15.772 "assigned_rate_limits": { 00:17:15.772 "rw_ios_per_sec": 0, 00:17:15.772 "rw_mbytes_per_sec": 0, 00:17:15.772 "r_mbytes_per_sec": 0, 00:17:15.772 "w_mbytes_per_sec": 0 00:17:15.772 }, 00:17:15.772 "claimed": true, 00:17:15.772 "claim_type": "exclusive_write", 00:17:15.772 "zoned": false, 00:17:15.772 "supported_io_types": { 00:17:15.772 "read": true, 00:17:15.772 "write": true, 00:17:15.772 "unmap": true, 00:17:15.772 "flush": true, 00:17:15.772 "reset": true, 00:17:15.772 "nvme_admin": false, 00:17:15.772 "nvme_io": false, 00:17:15.772 "nvme_io_md": false, 00:17:15.772 "write_zeroes": true, 00:17:15.772 "zcopy": true, 00:17:15.772 "get_zone_info": false, 00:17:15.772 "zone_management": false, 00:17:15.772 "zone_append": false, 00:17:15.772 "compare": false, 00:17:15.772 "compare_and_write": false, 00:17:15.772 "abort": true, 00:17:15.772 "seek_hole": false, 00:17:15.772 "seek_data": false, 00:17:15.772 "copy": true, 00:17:15.772 "nvme_iov_md": false 00:17:15.772 }, 00:17:15.772 "memory_domains": [ 00:17:15.772 { 00:17:15.772 "dma_device_id": "system", 00:17:15.772 "dma_device_type": 1 00:17:15.772 }, 00:17:15.772 { 00:17:15.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.772 "dma_device_type": 2 00:17:15.772 } 00:17:15.772 ], 00:17:15.772 "driver_specific": {} 00:17:15.772 } 00:17:15.772 ] 00:17:15.772 00:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:15.772 00:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:15.772 00:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:15.772 00:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:15.772 00:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:15.772 00:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:15.772 00:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:15.772 00:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:15.772 00:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:15.772 00:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:15.772 00:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:15.772 00:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.772 00:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.032 00:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:16.032 "name": "Existed_Raid", 00:17:16.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.032 "strip_size_kb": 64, 00:17:16.032 "state": "configuring", 00:17:16.032 "raid_level": "concat", 00:17:16.032 "superblock": false, 00:17:16.032 "num_base_bdevs": 2, 00:17:16.032 "num_base_bdevs_discovered": 1, 00:17:16.032 "num_base_bdevs_operational": 2, 00:17:16.032 "base_bdevs_list": [ 00:17:16.032 { 00:17:16.032 "name": 
"BaseBdev1", 00:17:16.032 "uuid": "721a280c-8587-4bfe-ac27-b56cf83ddf8b", 00:17:16.032 "is_configured": true, 00:17:16.032 "data_offset": 0, 00:17:16.032 "data_size": 65536 00:17:16.032 }, 00:17:16.032 { 00:17:16.032 "name": "BaseBdev2", 00:17:16.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.032 "is_configured": false, 00:17:16.032 "data_offset": 0, 00:17:16.032 "data_size": 0 00:17:16.032 } 00:17:16.032 ] 00:17:16.032 }' 00:17:16.032 00:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:16.032 00:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.601 00:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:16.860 [2024-07-25 00:43:39.426979] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:16.860 [2024-07-25 00:43:39.427254] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:17:16.860 00:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:17.120 [2024-07-25 00:43:39.655031] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:17.120 [2024-07-25 00:43:39.657230] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:17.120 [2024-07-25 00:43:39.657406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:17.120 00:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:17.120 00:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:17.120 00:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:17.120 00:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:17.120 00:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:17.120 00:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:17.120 00:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:17.120 00:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:17.120 00:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:17.120 00:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:17.120 00:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:17.120 00:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:17.120 00:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.120 00:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:17.382 00:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:17.382 "name": "Existed_Raid", 
00:17:17.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.382 "strip_size_kb": 64, 00:17:17.382 "state": "configuring", 00:17:17.382 "raid_level": "concat", 00:17:17.382 "superblock": false, 00:17:17.382 "num_base_bdevs": 2, 00:17:17.382 "num_base_bdevs_discovered": 1, 00:17:17.382 "num_base_bdevs_operational": 2, 00:17:17.382 "base_bdevs_list": [ 00:17:17.382 { 00:17:17.382 "name": "BaseBdev1", 00:17:17.382 "uuid": "721a280c-8587-4bfe-ac27-b56cf83ddf8b", 00:17:17.382 "is_configured": true, 00:17:17.382 "data_offset": 0, 00:17:17.382 "data_size": 65536 00:17:17.382 }, 00:17:17.382 { 00:17:17.382 "name": "BaseBdev2", 00:17:17.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.382 "is_configured": false, 00:17:17.382 "data_offset": 0, 00:17:17.382 "data_size": 0 00:17:17.382 } 00:17:17.382 ] 00:17:17.382 }' 00:17:17.382 00:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:17.382 00:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.951 00:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:17.951 [2024-07-25 00:43:40.603270] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:18.209 [2024-07-25 00:43:40.603517] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:17:18.209 [2024-07-25 00:43:40.603558] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:18.209 [2024-07-25 00:43:40.603754] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:17:18.209 [2024-07-25 00:43:40.604151] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:17:18.209 [2024-07-25 00:43:40.604261] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:17:18.209 [2024-07-25 00:43:40.604603] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.209 BaseBdev2 00:17:18.209 00:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:18.209 00:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:18.209 00:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:18.209 00:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:18.209 00:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:18.209 00:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:18.209 00:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:18.209 00:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:18.468 [ 00:17:18.468 { 00:17:18.468 "name": "BaseBdev2", 00:17:18.468 "aliases": [ 00:17:18.468 "ad0fc357-2f17-47ea-821c-268797fa665b" 00:17:18.468 ], 00:17:18.468 "product_name": "Malloc disk", 00:17:18.468 "block_size": 512, 00:17:18.468 "num_blocks": 65536, 00:17:18.468 "uuid": "ad0fc357-2f17-47ea-821c-268797fa665b", 
00:17:18.468 "assigned_rate_limits": { 00:17:18.468 "rw_ios_per_sec": 0, 00:17:18.468 "rw_mbytes_per_sec": 0, 00:17:18.468 "r_mbytes_per_sec": 0, 00:17:18.468 "w_mbytes_per_sec": 0 00:17:18.468 }, 00:17:18.468 "claimed": true, 00:17:18.468 "claim_type": "exclusive_write", 00:17:18.468 "zoned": false, 00:17:18.468 "supported_io_types": { 00:17:18.468 "read": true, 00:17:18.468 "write": true, 00:17:18.468 "unmap": true, 00:17:18.468 "flush": true, 00:17:18.468 "reset": true, 00:17:18.468 "nvme_admin": false, 00:17:18.468 "nvme_io": false, 00:17:18.468 "nvme_io_md": false, 00:17:18.468 "write_zeroes": true, 00:17:18.468 "zcopy": true, 00:17:18.468 "get_zone_info": false, 00:17:18.468 "zone_management": false, 00:17:18.468 "zone_append": false, 00:17:18.468 "compare": false, 00:17:18.468 "compare_and_write": false, 00:17:18.468 "abort": true, 00:17:18.468 "seek_hole": false, 00:17:18.468 "seek_data": false, 00:17:18.468 "copy": true, 00:17:18.468 "nvme_iov_md": false 00:17:18.468 }, 00:17:18.468 "memory_domains": [ 00:17:18.468 { 00:17:18.468 "dma_device_id": "system", 00:17:18.468 "dma_device_type": 1 00:17:18.468 }, 00:17:18.468 { 00:17:18.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.468 "dma_device_type": 2 00:17:18.468 } 00:17:18.468 ], 00:17:18.468 "driver_specific": {} 00:17:18.468 } 00:17:18.468 ] 00:17:18.468 00:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:18.468 00:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:18.468 00:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:18.468 00:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:17:18.468 00:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:18.468 00:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:18.468 00:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:18.468 00:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:18.468 00:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:18.468 00:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:18.468 00:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:18.468 00:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:18.468 00:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:18.468 00:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.468 00:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:18.726 00:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:18.726 "name": "Existed_Raid", 00:17:18.726 "uuid": "489d0031-a09e-422c-924c-5fb53362fd8e", 00:17:18.726 "strip_size_kb": 64, 00:17:18.726 "state": "online", 00:17:18.726 "raid_level": "concat", 00:17:18.727 "superblock": false, 00:17:18.727 "num_base_bdevs": 2, 00:17:18.727 "num_base_bdevs_discovered": 2, 00:17:18.727 
"num_base_bdevs_operational": 2, 00:17:18.727 "base_bdevs_list": [ 00:17:18.727 { 00:17:18.727 "name": "BaseBdev1", 00:17:18.727 "uuid": "721a280c-8587-4bfe-ac27-b56cf83ddf8b", 00:17:18.727 "is_configured": true, 00:17:18.727 "data_offset": 0, 00:17:18.727 "data_size": 65536 00:17:18.727 }, 00:17:18.727 { 00:17:18.727 "name": "BaseBdev2", 00:17:18.727 "uuid": "ad0fc357-2f17-47ea-821c-268797fa665b", 00:17:18.727 "is_configured": true, 00:17:18.727 "data_offset": 0, 00:17:18.727 "data_size": 65536 00:17:18.727 } 00:17:18.727 ] 00:17:18.727 }' 00:17:18.727 00:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:18.727 00:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.293 00:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:19.293 00:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:19.293 00:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:19.293 00:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:19.293 00:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:19.293 00:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:19.293 00:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:19.293 00:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:19.551 [2024-07-25 00:43:41.951758] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:19.551 00:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:19.551 "name": "Existed_Raid", 00:17:19.551 "aliases": [ 00:17:19.551 "489d0031-a09e-422c-924c-5fb53362fd8e" 00:17:19.551 ], 00:17:19.551 "product_name": "Raid Volume", 00:17:19.551 "block_size": 512, 00:17:19.551 "num_blocks": 131072, 00:17:19.551 "uuid": "489d0031-a09e-422c-924c-5fb53362fd8e", 00:17:19.551 "assigned_rate_limits": { 00:17:19.551 "rw_ios_per_sec": 0, 00:17:19.551 "rw_mbytes_per_sec": 0, 00:17:19.551 "r_mbytes_per_sec": 0, 00:17:19.551 "w_mbytes_per_sec": 0 00:17:19.551 }, 00:17:19.551 "claimed": false, 00:17:19.551 "zoned": false, 00:17:19.551 "supported_io_types": { 00:17:19.551 "read": true, 00:17:19.551 "write": true, 00:17:19.551 "unmap": true, 00:17:19.551 "flush": true, 00:17:19.551 "reset": true, 00:17:19.551 "nvme_admin": false, 00:17:19.551 "nvme_io": false, 00:17:19.551 "nvme_io_md": false, 00:17:19.551 "write_zeroes": true, 00:17:19.551 "zcopy": false, 00:17:19.552 "get_zone_info": false, 00:17:19.552 "zone_management": false, 00:17:19.552 "zone_append": false, 00:17:19.552 "compare": false, 00:17:19.552 "compare_and_write": false, 00:17:19.552 "abort": false, 00:17:19.552 "seek_hole": false, 00:17:19.552 "seek_data": false, 00:17:19.552 "copy": false, 00:17:19.552 "nvme_iov_md": false 00:17:19.552 }, 00:17:19.552 "memory_domains": [ 00:17:19.552 { 00:17:19.552 "dma_device_id": "system", 00:17:19.552 "dma_device_type": 1 00:17:19.552 }, 00:17:19.552 { 00:17:19.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.552 "dma_device_type": 2 00:17:19.552 }, 00:17:19.552 { 00:17:19.552 "dma_device_id": "system", 00:17:19.552 "dma_device_type": 1 00:17:19.552 }, 
00:17:19.552 { 00:17:19.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.552 "dma_device_type": 2 00:17:19.552 } 00:17:19.552 ], 00:17:19.552 "driver_specific": { 00:17:19.552 "raid": { 00:17:19.552 "uuid": "489d0031-a09e-422c-924c-5fb53362fd8e", 00:17:19.552 "strip_size_kb": 64, 00:17:19.552 "state": "online", 00:17:19.552 "raid_level": "concat", 00:17:19.552 "superblock": false, 00:17:19.552 "num_base_bdevs": 2, 00:17:19.552 "num_base_bdevs_discovered": 2, 00:17:19.552 "num_base_bdevs_operational": 2, 00:17:19.552 "base_bdevs_list": [ 00:17:19.552 { 00:17:19.552 "name": "BaseBdev1", 00:17:19.552 "uuid": "721a280c-8587-4bfe-ac27-b56cf83ddf8b", 00:17:19.552 "is_configured": true, 00:17:19.552 "data_offset": 0, 00:17:19.552 "data_size": 65536 00:17:19.552 }, 00:17:19.552 { 00:17:19.552 "name": "BaseBdev2", 00:17:19.552 "uuid": "ad0fc357-2f17-47ea-821c-268797fa665b", 00:17:19.552 "is_configured": true, 00:17:19.552 "data_offset": 0, 00:17:19.552 "data_size": 65536 00:17:19.552 } 00:17:19.552 ] 00:17:19.552 } 00:17:19.552 } 00:17:19.552 }' 00:17:19.552 00:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:19.552 00:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:19.552 BaseBdev2' 00:17:19.552 00:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:19.552 00:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:19.552 00:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:19.810 00:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:19.811 "name": "BaseBdev1", 00:17:19.811 "aliases": [ 00:17:19.811 "721a280c-8587-4bfe-ac27-b56cf83ddf8b" 00:17:19.811 ], 00:17:19.811 "product_name": "Malloc disk", 00:17:19.811 "block_size": 512, 00:17:19.811 "num_blocks": 65536, 00:17:19.811 "uuid": "721a280c-8587-4bfe-ac27-b56cf83ddf8b", 00:17:19.811 "assigned_rate_limits": { 00:17:19.811 "rw_ios_per_sec": 0, 00:17:19.811 "rw_mbytes_per_sec": 0, 00:17:19.811 "r_mbytes_per_sec": 0, 00:17:19.811 "w_mbytes_per_sec": 0 00:17:19.811 }, 00:17:19.811 "claimed": true, 00:17:19.811 "claim_type": "exclusive_write", 00:17:19.811 "zoned": false, 00:17:19.811 "supported_io_types": { 00:17:19.811 "read": true, 00:17:19.811 "write": true, 00:17:19.811 "unmap": true, 00:17:19.811 "flush": true, 00:17:19.811 "reset": true, 00:17:19.811 "nvme_admin": false, 00:17:19.811 "nvme_io": false, 00:17:19.811 "nvme_io_md": false, 00:17:19.811 "write_zeroes": true, 00:17:19.811 "zcopy": true, 00:17:19.811 "get_zone_info": false, 00:17:19.811 "zone_management": false, 00:17:19.811 "zone_append": false, 00:17:19.811 "compare": false, 00:17:19.811 "compare_and_write": false, 00:17:19.811 "abort": true, 00:17:19.811 "seek_hole": false, 00:17:19.811 "seek_data": false, 00:17:19.811 "copy": true, 00:17:19.811 "nvme_iov_md": false 00:17:19.811 }, 00:17:19.811 "memory_domains": [ 00:17:19.811 { 00:17:19.811 "dma_device_id": "system", 00:17:19.811 "dma_device_type": 1 00:17:19.811 }, 00:17:19.811 { 00:17:19.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.811 "dma_device_type": 2 00:17:19.811 } 00:17:19.811 ], 00:17:19.811 "driver_specific": {} 00:17:19.811 }' 00:17:19.811 00:43:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:19.811 00:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:19.811 00:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:19.811 00:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:19.811 00:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:19.811 00:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:19.811 00:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:19.811 00:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:20.069 00:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:20.069 00:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:20.069 00:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:20.069 00:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:20.069 00:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:20.069 00:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:20.069 00:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:20.328 00:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:20.328 "name": "BaseBdev2", 00:17:20.328 "aliases": [ 00:17:20.328 "ad0fc357-2f17-47ea-821c-268797fa665b" 00:17:20.328 ], 00:17:20.328 "product_name": "Malloc disk", 00:17:20.328 "block_size": 512, 00:17:20.328 "num_blocks": 65536, 00:17:20.328 "uuid": "ad0fc357-2f17-47ea-821c-268797fa665b", 00:17:20.328 "assigned_rate_limits": { 00:17:20.328 "rw_ios_per_sec": 0, 00:17:20.328 "rw_mbytes_per_sec": 0, 00:17:20.328 "r_mbytes_per_sec": 0, 00:17:20.328 "w_mbytes_per_sec": 0 00:17:20.328 }, 00:17:20.328 "claimed": true, 00:17:20.328 "claim_type": "exclusive_write", 00:17:20.328 "zoned": false, 00:17:20.328 "supported_io_types": { 00:17:20.328 "read": true, 00:17:20.328 "write": true, 00:17:20.328 "unmap": true, 00:17:20.328 "flush": true, 00:17:20.328 "reset": true, 00:17:20.328 "nvme_admin": false, 00:17:20.328 "nvme_io": false, 00:17:20.328 "nvme_io_md": false, 00:17:20.328 "write_zeroes": true, 00:17:20.328 "zcopy": true, 00:17:20.328 "get_zone_info": false, 00:17:20.328 "zone_management": false, 00:17:20.328 "zone_append": false, 00:17:20.328 "compare": false, 00:17:20.328 "compare_and_write": false, 00:17:20.328 "abort": true, 00:17:20.328 "seek_hole": false, 00:17:20.328 "seek_data": false, 00:17:20.328 "copy": true, 00:17:20.328 "nvme_iov_md": false 00:17:20.328 }, 00:17:20.328 "memory_domains": [ 00:17:20.328 { 00:17:20.328 "dma_device_id": "system", 00:17:20.328 "dma_device_type": 1 00:17:20.328 }, 00:17:20.328 { 00:17:20.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.328 "dma_device_type": 2 00:17:20.328 } 00:17:20.328 ], 00:17:20.328 "driver_specific": {} 00:17:20.328 }' 00:17:20.328 00:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:20.328 00:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:20.328 00:43:42 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:20.328 00:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:20.328 00:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:20.587 00:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:20.587 00:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:20.587 00:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:20.587 00:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:20.587 00:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:20.587 00:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:20.587 00:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:20.587 00:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:20.846 [2024-07-25 00:43:43.447911] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:20.846 [2024-07-25 00:43:43.448066] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:20.846 [2024-07-25 00:43:43.448263] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.103 00:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:21.103 00:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:17:21.103 00:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:21.103 00:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:21.103 00:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:21.103 00:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:17:21.103 00:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:21.103 00:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:17:21.103 00:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:21.103 00:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:21.103 00:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:21.103 00:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:21.103 00:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:21.103 00:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:21.103 00:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:21.103 00:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.103 00:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.362 00:43:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:21.362 "name": "Existed_Raid", 00:17:21.362 "uuid": "489d0031-a09e-422c-924c-5fb53362fd8e", 00:17:21.362 "strip_size_kb": 64, 00:17:21.362 "state": "offline", 00:17:21.362 "raid_level": "concat", 00:17:21.362 "superblock": false, 00:17:21.362 "num_base_bdevs": 2, 00:17:21.362 "num_base_bdevs_discovered": 1, 00:17:21.362 "num_base_bdevs_operational": 1, 00:17:21.362 "base_bdevs_list": [ 00:17:21.362 { 00:17:21.362 "name": null, 00:17:21.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.362 "is_configured": false, 00:17:21.362 "data_offset": 0, 00:17:21.362 "data_size": 65536 00:17:21.362 }, 00:17:21.362 { 00:17:21.362 "name": "BaseBdev2", 00:17:21.362 "uuid": "ad0fc357-2f17-47ea-821c-268797fa665b", 00:17:21.362 "is_configured": true, 00:17:21.362 "data_offset": 0, 00:17:21.362 "data_size": 65536 00:17:21.362 } 00:17:21.362 ] 00:17:21.362 }' 00:17:21.362 00:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:21.362 00:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.928 00:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:21.928 00:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:21.928 00:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.928 00:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:21.928 00:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:21.928 00:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:21.928 00:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:22.187 [2024-07-25 00:43:44.713569] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:22.187 [2024-07-25 00:43:44.713826] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:17:22.187 00:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:22.187 00:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:22.445 00:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.445 00:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:22.445 00:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:22.445 00:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:22.445 00:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:17:22.445 00:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 123083 00:17:22.445 00:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 123083 ']' 00:17:22.445 00:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 123083 00:17:22.704 00:43:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@953 -- # uname 00:17:22.704 00:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:22.704 00:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 123083 00:17:22.704 00:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:22.704 00:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:22.704 00:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 123083' 00:17:22.704 killing process with pid 123083 00:17:22.704 00:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 123083 00:17:22.704 00:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 123083 00:17:22.704 [2024-07-25 00:43:45.123140] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:22.704 [2024-07-25 00:43:45.123263] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:24.080 ************************************ 00:17:24.080 END TEST raid_state_function_test 00:17:24.080 ************************************ 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:17:24.080 00:17:24.080 real 0m11.200s 00:17:24.080 user 0m18.879s 00:17:24.080 sys 0m1.690s 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.080 00:43:46 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:17:24.080 00:43:46 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:24.080 00:43:46 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:24.080 00:43:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:24.080 ************************************ 00:17:24.080 START TEST raid_state_function_test_sb 00:17:24.080 ************************************ 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 true 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:24.080 
00:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=123456 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 123456' 00:17:24.080 Process raid pid: 123456 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 123456 /var/tmp/spdk-raid.sock 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 123456 ']' 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:24.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:24.080 00:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.080 [2024-07-25 00:43:46.660258] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:17:24.080 [2024-07-25 00:43:46.660747] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.339 [2024-07-25 00:43:46.841152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.598 [2024-07-25 00:43:47.043550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.856 [2024-07-25 00:43:47.251164] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:25.115 00:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:25.115 00:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:17:25.115 00:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:25.374 [2024-07-25 00:43:47.768082] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:25.374 [2024-07-25 00:43:47.768425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:25.374 [2024-07-25 00:43:47.768525] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:25.374 [2024-07-25 00:43:47.768585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:25.374 00:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:25.374 00:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:25.374 00:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:25.374 00:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:25.374 00:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:25.374 00:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:25.374 00:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:25.374 00:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:25.374 00:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:25.374 00:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:25.374 00:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.374 00:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.633 00:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:25.633 "name": "Existed_Raid", 00:17:25.633 "uuid": "6928150a-7562-43ae-8e96-25ca56abcc5c", 00:17:25.633 "strip_size_kb": 64, 00:17:25.633 "state": "configuring", 00:17:25.633 "raid_level": "concat", 00:17:25.633 "superblock": true, 00:17:25.633 "num_base_bdevs": 2, 00:17:25.633 "num_base_bdevs_discovered": 0, 00:17:25.633 
"num_base_bdevs_operational": 2, 00:17:25.633 "base_bdevs_list": [ 00:17:25.633 { 00:17:25.633 "name": "BaseBdev1", 00:17:25.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.633 "is_configured": false, 00:17:25.633 "data_offset": 0, 00:17:25.633 "data_size": 0 00:17:25.633 }, 00:17:25.633 { 00:17:25.633 "name": "BaseBdev2", 00:17:25.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.633 "is_configured": false, 00:17:25.633 "data_offset": 0, 00:17:25.633 "data_size": 0 00:17:25.633 } 00:17:25.633 ] 00:17:25.633 }' 00:17:25.633 00:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:25.633 00:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.211 00:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:26.484 [2024-07-25 00:43:48.868170] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:26.484 [2024-07-25 00:43:48.868413] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:26.484 00:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:26.484 [2024-07-25 00:43:49.060239] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:26.484 [2024-07-25 00:43:49.060523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:26.484 [2024-07-25 00:43:49.060613] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:26.484 [2024-07-25 00:43:49.060667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:26.484 00:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:26.742 [2024-07-25 00:43:49.272985] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:26.742 BaseBdev1 00:17:26.742 00:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:26.742 00:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:26.742 00:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:26.742 00:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:26.742 00:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:26.742 00:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:26.742 00:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:27.001 00:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:27.001 [ 00:17:27.001 { 00:17:27.001 "name": "BaseBdev1", 00:17:27.001 "aliases": [ 00:17:27.001 "55b4f8ed-f6d8-4285-93a0-3464980343cd" 
00:17:27.001 ], 00:17:27.001 "product_name": "Malloc disk", 00:17:27.001 "block_size": 512, 00:17:27.001 "num_blocks": 65536, 00:17:27.001 "uuid": "55b4f8ed-f6d8-4285-93a0-3464980343cd", 00:17:27.001 "assigned_rate_limits": { 00:17:27.001 "rw_ios_per_sec": 0, 00:17:27.001 "rw_mbytes_per_sec": 0, 00:17:27.001 "r_mbytes_per_sec": 0, 00:17:27.001 "w_mbytes_per_sec": 0 00:17:27.001 }, 00:17:27.001 "claimed": true, 00:17:27.001 "claim_type": "exclusive_write", 00:17:27.001 "zoned": false, 00:17:27.001 "supported_io_types": { 00:17:27.001 "read": true, 00:17:27.001 "write": true, 00:17:27.001 "unmap": true, 00:17:27.001 "flush": true, 00:17:27.001 "reset": true, 00:17:27.001 "nvme_admin": false, 00:17:27.001 "nvme_io": false, 00:17:27.001 "nvme_io_md": false, 00:17:27.001 "write_zeroes": true, 00:17:27.001 "zcopy": true, 00:17:27.001 "get_zone_info": false, 00:17:27.001 "zone_management": false, 00:17:27.001 "zone_append": false, 00:17:27.001 "compare": false, 00:17:27.001 "compare_and_write": false, 00:17:27.001 "abort": true, 00:17:27.001 "seek_hole": false, 00:17:27.001 "seek_data": false, 00:17:27.001 "copy": true, 00:17:27.001 "nvme_iov_md": false 00:17:27.001 }, 00:17:27.001 "memory_domains": [ 00:17:27.001 { 00:17:27.001 "dma_device_id": "system", 00:17:27.001 "dma_device_type": 1 00:17:27.001 }, 00:17:27.001 { 00:17:27.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.001 "dma_device_type": 2 00:17:27.001 } 00:17:27.001 ], 00:17:27.001 "driver_specific": {} 00:17:27.001 } 00:17:27.001 ] 00:17:27.259 00:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:27.259 00:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:27.259 00:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:27.259 00:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:27.259 00:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:27.259 00:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:27.259 00:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:27.259 00:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:27.259 00:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:27.259 00:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:27.259 00:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:27.259 00:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.259 00:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.259 00:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:27.259 "name": "Existed_Raid", 00:17:27.259 "uuid": "213c6cc6-93e0-4582-98bd-f063244c6274", 00:17:27.259 "strip_size_kb": 64, 00:17:27.259 "state": "configuring", 00:17:27.259 "raid_level": "concat", 00:17:27.259 "superblock": true, 00:17:27.259 "num_base_bdevs": 2, 00:17:27.259 
"num_base_bdevs_discovered": 1, 00:17:27.259 "num_base_bdevs_operational": 2, 00:17:27.259 "base_bdevs_list": [ 00:17:27.259 { 00:17:27.259 "name": "BaseBdev1", 00:17:27.259 "uuid": "55b4f8ed-f6d8-4285-93a0-3464980343cd", 00:17:27.259 "is_configured": true, 00:17:27.259 "data_offset": 2048, 00:17:27.259 "data_size": 63488 00:17:27.259 }, 00:17:27.259 { 00:17:27.259 "name": "BaseBdev2", 00:17:27.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.259 "is_configured": false, 00:17:27.259 "data_offset": 0, 00:17:27.259 "data_size": 0 00:17:27.259 } 00:17:27.259 ] 00:17:27.259 }' 00:17:27.259 00:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:27.259 00:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.193 00:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:28.193 [2024-07-25 00:43:50.749280] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:28.193 [2024-07-25 00:43:50.749537] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:17:28.193 00:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:28.452 [2024-07-25 00:43:50.933359] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:28.452 [2024-07-25 00:43:50.935541] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:28.452 [2024-07-25 00:43:50.935715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:28.452 00:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:28.452 00:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:28.452 00:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:28.452 00:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:28.452 00:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:28.452 00:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:28.452 00:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:28.452 00:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:28.452 00:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:28.452 00:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:28.452 00:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:28.452 00:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:28.452 00:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.452 00:43:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.710 00:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:28.710 "name": "Existed_Raid", 00:17:28.710 "uuid": "d1f93dcc-dfa2-416e-9cee-789d21064856", 00:17:28.710 "strip_size_kb": 64, 00:17:28.710 "state": "configuring", 00:17:28.710 "raid_level": "concat", 00:17:28.710 "superblock": true, 00:17:28.710 "num_base_bdevs": 2, 00:17:28.710 "num_base_bdevs_discovered": 1, 00:17:28.710 "num_base_bdevs_operational": 2, 00:17:28.710 "base_bdevs_list": [ 00:17:28.710 { 00:17:28.710 "name": "BaseBdev1", 00:17:28.710 "uuid": "55b4f8ed-f6d8-4285-93a0-3464980343cd", 00:17:28.710 "is_configured": true, 00:17:28.710 "data_offset": 2048, 00:17:28.710 "data_size": 63488 00:17:28.710 }, 00:17:28.710 { 00:17:28.710 "name": "BaseBdev2", 00:17:28.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.710 "is_configured": false, 00:17:28.710 "data_offset": 0, 00:17:28.710 "data_size": 0 00:17:28.710 } 00:17:28.710 ] 00:17:28.710 }' 00:17:28.710 00:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:28.710 00:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.276 00:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:29.276 [2024-07-25 00:43:51.900018] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:29.276 [2024-07-25 00:43:51.900521] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:17:29.276 [2024-07-25 00:43:51.900658] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:29.276 [2024-07-25 00:43:51.900821] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:17:29.276 [2024-07-25 00:43:51.901201] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:17:29.276 [2024-07-25 00:43:51.901245] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:17:29.276 [2024-07-25 00:43:51.901470] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:29.276 BaseBdev2 00:17:29.276 00:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:29.276 00:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:29.276 00:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:29.276 00:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:29.276 00:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:29.276 00:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:29.276 00:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:29.534 00:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:29.791 [ 00:17:29.791 { 00:17:29.791 "name": "BaseBdev2", 00:17:29.791 
"aliases": [ 00:17:29.791 "36547428-b917-4206-89bd-46d58f665b7e" 00:17:29.791 ], 00:17:29.791 "product_name": "Malloc disk", 00:17:29.791 "block_size": 512, 00:17:29.791 "num_blocks": 65536, 00:17:29.791 "uuid": "36547428-b917-4206-89bd-46d58f665b7e", 00:17:29.791 "assigned_rate_limits": { 00:17:29.791 "rw_ios_per_sec": 0, 00:17:29.791 "rw_mbytes_per_sec": 0, 00:17:29.791 "r_mbytes_per_sec": 0, 00:17:29.791 "w_mbytes_per_sec": 0 00:17:29.791 }, 00:17:29.791 "claimed": true, 00:17:29.791 "claim_type": "exclusive_write", 00:17:29.791 "zoned": false, 00:17:29.791 "supported_io_types": { 00:17:29.791 "read": true, 00:17:29.791 "write": true, 00:17:29.791 "unmap": true, 00:17:29.791 "flush": true, 00:17:29.791 "reset": true, 00:17:29.791 "nvme_admin": false, 00:17:29.791 "nvme_io": false, 00:17:29.791 "nvme_io_md": false, 00:17:29.791 "write_zeroes": true, 00:17:29.791 "zcopy": true, 00:17:29.791 "get_zone_info": false, 00:17:29.791 "zone_management": false, 00:17:29.791 "zone_append": false, 00:17:29.791 "compare": false, 00:17:29.791 "compare_and_write": false, 00:17:29.791 "abort": true, 00:17:29.791 "seek_hole": false, 00:17:29.791 "seek_data": false, 00:17:29.791 "copy": true, 00:17:29.791 "nvme_iov_md": false 00:17:29.792 }, 00:17:29.792 "memory_domains": [ 00:17:29.792 { 00:17:29.792 "dma_device_id": "system", 00:17:29.792 "dma_device_type": 1 00:17:29.792 }, 00:17:29.792 { 00:17:29.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.792 "dma_device_type": 2 00:17:29.792 } 00:17:29.792 ], 00:17:29.792 "driver_specific": {} 00:17:29.792 } 00:17:29.792 ] 00:17:29.792 00:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:29.792 00:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:29.792 00:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:29.792 00:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:17:29.792 00:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:29.792 00:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:29.792 00:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:29.792 00:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:29.792 00:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:29.792 00:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:29.792 00:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:29.792 00:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:29.792 00:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:29.792 00:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.792 00:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.050 00:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:30.050 "name": "Existed_Raid", 
00:17:30.050 "uuid": "d1f93dcc-dfa2-416e-9cee-789d21064856", 00:17:30.050 "strip_size_kb": 64, 00:17:30.050 "state": "online", 00:17:30.050 "raid_level": "concat", 00:17:30.050 "superblock": true, 00:17:30.050 "num_base_bdevs": 2, 00:17:30.050 "num_base_bdevs_discovered": 2, 00:17:30.050 "num_base_bdevs_operational": 2, 00:17:30.050 "base_bdevs_list": [ 00:17:30.050 { 00:17:30.050 "name": "BaseBdev1", 00:17:30.050 "uuid": "55b4f8ed-f6d8-4285-93a0-3464980343cd", 00:17:30.050 "is_configured": true, 00:17:30.050 "data_offset": 2048, 00:17:30.050 "data_size": 63488 00:17:30.050 }, 00:17:30.050 { 00:17:30.050 "name": "BaseBdev2", 00:17:30.050 "uuid": "36547428-b917-4206-89bd-46d58f665b7e", 00:17:30.050 "is_configured": true, 00:17:30.050 "data_offset": 2048, 00:17:30.050 "data_size": 63488 00:17:30.050 } 00:17:30.050 ] 00:17:30.050 }' 00:17:30.050 00:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:30.050 00:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.617 00:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:30.617 00:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:30.617 00:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:30.617 00:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:30.617 00:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:30.617 00:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:30.617 00:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:30.617 00:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:30.874 [2024-07-25 00:43:53.324531] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:30.874 00:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:30.874 "name": "Existed_Raid", 00:17:30.874 "aliases": [ 00:17:30.874 "d1f93dcc-dfa2-416e-9cee-789d21064856" 00:17:30.874 ], 00:17:30.874 "product_name": "Raid Volume", 00:17:30.874 "block_size": 512, 00:17:30.874 "num_blocks": 126976, 00:17:30.874 "uuid": "d1f93dcc-dfa2-416e-9cee-789d21064856", 00:17:30.874 "assigned_rate_limits": { 00:17:30.874 "rw_ios_per_sec": 0, 00:17:30.874 "rw_mbytes_per_sec": 0, 00:17:30.874 "r_mbytes_per_sec": 0, 00:17:30.874 "w_mbytes_per_sec": 0 00:17:30.874 }, 00:17:30.874 "claimed": false, 00:17:30.874 "zoned": false, 00:17:30.874 "supported_io_types": { 00:17:30.874 "read": true, 00:17:30.874 "write": true, 00:17:30.874 "unmap": true, 00:17:30.874 "flush": true, 00:17:30.874 "reset": true, 00:17:30.874 "nvme_admin": false, 00:17:30.874 "nvme_io": false, 00:17:30.874 "nvme_io_md": false, 00:17:30.874 "write_zeroes": true, 00:17:30.874 "zcopy": false, 00:17:30.874 "get_zone_info": false, 00:17:30.874 "zone_management": false, 00:17:30.874 "zone_append": false, 00:17:30.874 "compare": false, 00:17:30.874 "compare_and_write": false, 00:17:30.874 "abort": false, 00:17:30.874 "seek_hole": false, 00:17:30.874 "seek_data": false, 00:17:30.874 "copy": false, 00:17:30.874 "nvme_iov_md": false 00:17:30.874 }, 00:17:30.874 "memory_domains": [ 
00:17:30.874 { 00:17:30.874 "dma_device_id": "system", 00:17:30.874 "dma_device_type": 1 00:17:30.874 }, 00:17:30.874 { 00:17:30.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.874 "dma_device_type": 2 00:17:30.874 }, 00:17:30.874 { 00:17:30.874 "dma_device_id": "system", 00:17:30.874 "dma_device_type": 1 00:17:30.874 }, 00:17:30.874 { 00:17:30.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.874 "dma_device_type": 2 00:17:30.874 } 00:17:30.874 ], 00:17:30.874 "driver_specific": { 00:17:30.874 "raid": { 00:17:30.874 "uuid": "d1f93dcc-dfa2-416e-9cee-789d21064856", 00:17:30.874 "strip_size_kb": 64, 00:17:30.874 "state": "online", 00:17:30.874 "raid_level": "concat", 00:17:30.874 "superblock": true, 00:17:30.874 "num_base_bdevs": 2, 00:17:30.874 "num_base_bdevs_discovered": 2, 00:17:30.874 "num_base_bdevs_operational": 2, 00:17:30.874 "base_bdevs_list": [ 00:17:30.874 { 00:17:30.874 "name": "BaseBdev1", 00:17:30.874 "uuid": "55b4f8ed-f6d8-4285-93a0-3464980343cd", 00:17:30.874 "is_configured": true, 00:17:30.874 "data_offset": 2048, 00:17:30.874 "data_size": 63488 00:17:30.874 }, 00:17:30.874 { 00:17:30.874 "name": "BaseBdev2", 00:17:30.874 "uuid": "36547428-b917-4206-89bd-46d58f665b7e", 00:17:30.874 "is_configured": true, 00:17:30.874 "data_offset": 2048, 00:17:30.874 "data_size": 63488 00:17:30.874 } 00:17:30.874 ] 00:17:30.874 } 00:17:30.874 } 00:17:30.874 }' 00:17:30.874 00:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:30.874 00:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:30.874 BaseBdev2' 00:17:30.874 00:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:30.874 00:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:30.874 00:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:31.133 00:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:31.133 "name": "BaseBdev1", 00:17:31.133 "aliases": [ 00:17:31.133 "55b4f8ed-f6d8-4285-93a0-3464980343cd" 00:17:31.133 ], 00:17:31.133 "product_name": "Malloc disk", 00:17:31.133 "block_size": 512, 00:17:31.133 "num_blocks": 65536, 00:17:31.133 "uuid": "55b4f8ed-f6d8-4285-93a0-3464980343cd", 00:17:31.133 "assigned_rate_limits": { 00:17:31.133 "rw_ios_per_sec": 0, 00:17:31.133 "rw_mbytes_per_sec": 0, 00:17:31.133 "r_mbytes_per_sec": 0, 00:17:31.133 "w_mbytes_per_sec": 0 00:17:31.133 }, 00:17:31.133 "claimed": true, 00:17:31.133 "claim_type": "exclusive_write", 00:17:31.133 "zoned": false, 00:17:31.133 "supported_io_types": { 00:17:31.133 "read": true, 00:17:31.133 "write": true, 00:17:31.133 "unmap": true, 00:17:31.133 "flush": true, 00:17:31.133 "reset": true, 00:17:31.133 "nvme_admin": false, 00:17:31.133 "nvme_io": false, 00:17:31.133 "nvme_io_md": false, 00:17:31.133 "write_zeroes": true, 00:17:31.133 "zcopy": true, 00:17:31.133 "get_zone_info": false, 00:17:31.133 "zone_management": false, 00:17:31.133 "zone_append": false, 00:17:31.133 "compare": false, 00:17:31.133 "compare_and_write": false, 00:17:31.133 "abort": true, 00:17:31.133 "seek_hole": false, 00:17:31.133 "seek_data": false, 00:17:31.133 "copy": true, 00:17:31.133 "nvme_iov_md": false 00:17:31.133 }, 00:17:31.133 "memory_domains": [ 
00:17:31.133 { 00:17:31.133 "dma_device_id": "system", 00:17:31.133 "dma_device_type": 1 00:17:31.133 }, 00:17:31.133 { 00:17:31.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.133 "dma_device_type": 2 00:17:31.133 } 00:17:31.133 ], 00:17:31.133 "driver_specific": {} 00:17:31.133 }' 00:17:31.133 00:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:31.133 00:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:31.133 00:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:31.133 00:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:31.133 00:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:31.133 00:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:31.133 00:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:31.133 00:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:31.391 00:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:31.391 00:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:31.391 00:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:31.391 00:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:31.391 00:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:31.391 00:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:31.391 00:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:31.649 00:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:31.649 "name": "BaseBdev2", 00:17:31.649 "aliases": [ 00:17:31.649 "36547428-b917-4206-89bd-46d58f665b7e" 00:17:31.649 ], 00:17:31.649 "product_name": "Malloc disk", 00:17:31.649 "block_size": 512, 00:17:31.649 "num_blocks": 65536, 00:17:31.649 "uuid": "36547428-b917-4206-89bd-46d58f665b7e", 00:17:31.649 "assigned_rate_limits": { 00:17:31.649 "rw_ios_per_sec": 0, 00:17:31.649 "rw_mbytes_per_sec": 0, 00:17:31.649 "r_mbytes_per_sec": 0, 00:17:31.649 "w_mbytes_per_sec": 0 00:17:31.649 }, 00:17:31.649 "claimed": true, 00:17:31.649 "claim_type": "exclusive_write", 00:17:31.649 "zoned": false, 00:17:31.649 "supported_io_types": { 00:17:31.649 "read": true, 00:17:31.649 "write": true, 00:17:31.649 "unmap": true, 00:17:31.649 "flush": true, 00:17:31.649 "reset": true, 00:17:31.649 "nvme_admin": false, 00:17:31.649 "nvme_io": false, 00:17:31.649 "nvme_io_md": false, 00:17:31.649 "write_zeroes": true, 00:17:31.649 "zcopy": true, 00:17:31.649 "get_zone_info": false, 00:17:31.649 "zone_management": false, 00:17:31.649 "zone_append": false, 00:17:31.649 "compare": false, 00:17:31.649 "compare_and_write": false, 00:17:31.649 "abort": true, 00:17:31.649 "seek_hole": false, 00:17:31.649 "seek_data": false, 00:17:31.649 "copy": true, 00:17:31.649 "nvme_iov_md": false 00:17:31.649 }, 00:17:31.649 "memory_domains": [ 00:17:31.649 { 00:17:31.649 "dma_device_id": "system", 00:17:31.649 "dma_device_type": 1 00:17:31.649 }, 00:17:31.649 { 00:17:31.649 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:31.649 "dma_device_type": 2 00:17:31.649 } 00:17:31.649 ], 00:17:31.649 "driver_specific": {} 00:17:31.649 }' 00:17:31.649 00:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:31.649 00:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:31.649 00:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:31.649 00:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:31.649 00:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:31.908 00:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:31.908 00:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:31.908 00:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:31.908 00:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:31.908 00:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:31.908 00:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:31.908 00:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:31.908 00:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:32.166 [2024-07-25 00:43:54.748671] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:32.166 [2024-07-25 00:43:54.748914] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:32.166 [2024-07-25 00:43:54.749100] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:32.424 00:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:32.424 00:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:17:32.424 00:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:32.424 00:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:17:32.424 00:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:32.424 00:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:17:32.424 00:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:32.424 00:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:17:32.424 00:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:32.424 00:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:32.424 00:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:32.424 00:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:32.424 00:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:32.424 00:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:17:32.424 00:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:32.424 00:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.424 00:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.683 00:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:32.683 "name": "Existed_Raid", 00:17:32.683 "uuid": "d1f93dcc-dfa2-416e-9cee-789d21064856", 00:17:32.683 "strip_size_kb": 64, 00:17:32.683 "state": "offline", 00:17:32.683 "raid_level": "concat", 00:17:32.683 "superblock": true, 00:17:32.683 "num_base_bdevs": 2, 00:17:32.683 "num_base_bdevs_discovered": 1, 00:17:32.683 "num_base_bdevs_operational": 1, 00:17:32.683 "base_bdevs_list": [ 00:17:32.683 { 00:17:32.683 "name": null, 00:17:32.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.683 "is_configured": false, 00:17:32.683 "data_offset": 2048, 00:17:32.683 "data_size": 63488 00:17:32.683 }, 00:17:32.683 { 00:17:32.683 "name": "BaseBdev2", 00:17:32.683 "uuid": "36547428-b917-4206-89bd-46d58f665b7e", 00:17:32.683 "is_configured": true, 00:17:32.683 "data_offset": 2048, 00:17:32.683 "data_size": 63488 00:17:32.683 } 00:17:32.683 ] 00:17:32.683 }' 00:17:32.683 00:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:32.683 00:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.248 00:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:33.248 00:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:33.248 00:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.248 00:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:33.508 00:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:33.508 00:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:33.508 00:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:33.508 [2024-07-25 00:43:56.132168] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:33.508 [2024-07-25 00:43:56.132488] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:17:33.765 00:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:33.765 00:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:33.765 00:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.765 00:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:34.023 00:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:34.023 00:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 
-- # '[' -n '' ']' 00:17:34.023 00:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:17:34.023 00:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 123456 00:17:34.023 00:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 123456 ']' 00:17:34.023 00:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 123456 00:17:34.023 00:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:17:34.023 00:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:34.023 00:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 123456 00:17:34.023 00:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:34.023 00:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:34.023 00:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 123456' 00:17:34.023 killing process with pid 123456 00:17:34.023 00:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 123456 00:17:34.023 [2024-07-25 00:43:56.570435] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:34.023 00:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 123456 00:17:34.023 [2024-07-25 00:43:56.570660] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:35.399 ************************************ 00:17:35.399 END TEST raid_state_function_test_sb 00:17:35.399 ************************************ 00:17:35.399 00:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:17:35.399 00:17:35.399 real 0m11.378s 00:17:35.399 user 0m19.275s 00:17:35.399 sys 0m1.683s 00:17:35.399 00:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:35.399 00:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.399 00:43:57 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:17:35.399 00:43:57 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:35.399 00:43:57 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:35.399 00:43:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:35.399 ************************************ 00:17:35.399 START TEST raid_superblock_test 00:17:35.399 ************************************ 00:17:35.399 00:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 2 00:17:35.399 00:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:17:35.399 00:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:17:35.399 00:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:17:35.399 00:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:17:35.399 00:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:17:35.399 00:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:17:35.399 00:43:58 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:17:35.399 00:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:17:35.399 00:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:17:35.399 00:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:17:35.399 00:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:17:35.399 00:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:17:35.399 00:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:17:35.399 00:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:17:35.399 00:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:17:35.399 00:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:17:35.399 00:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=123833 00:17:35.399 00:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 123833 /var/tmp/spdk-raid.sock 00:17:35.399 00:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 123833 ']' 00:17:35.399 00:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:35.399 00:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:35.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:35.399 00:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:35.399 00:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:35.399 00:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:35.399 00:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.658 [2024-07-25 00:43:58.094957] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
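The excerpt above shows raid_superblock_test starting a dedicated bdev_svc application on its own RPC socket (/var/tmp/spdk-raid.sock) and waiting for it before sending any rpc.py commands. A minimal sketch of that start-up/wait pattern follows; it assumes spdk_get_version is used as the liveness probe, whereas the real waitforlisten helper in autotest_common.sh may poll differently.

    rootdir=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/spdk-raid.sock

    # Start the bdev service with raid debug logging, as in the log above.
    "$rootdir/test/app/bdev_svc/bdev_svc" -r "$sock" -L bdev_raid &
    raid_pid=$!

    # Assumption: poll a known RPC until the UNIX socket answers.
    for _ in $(seq 1 100); do
        if "$rootdir/scripts/rpc.py" -s "$sock" spdk_get_version &>/dev/null; then
            break
        fi
        sleep 0.1
    done

The test body then drives everything through rpc.py -s "$sock", and killprocess "$raid_pid" tears the service down at the end, matching the killprocess/wait calls recorded for the previous test in this log.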
00:17:35.658 [2024-07-25 00:43:58.095403] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123833 ] 00:17:35.658 [2024-07-25 00:43:58.275952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.917 [2024-07-25 00:43:58.474901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.175 [2024-07-25 00:43:58.681611] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:36.434 00:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:36.434 00:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:17:36.434 00:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:17:36.434 00:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:36.434 00:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:17:36.434 00:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:17:36.434 00:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:36.434 00:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:36.434 00:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:36.434 00:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:36.434 00:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:36.692 malloc1 00:17:36.692 00:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:36.951 [2024-07-25 00:43:59.349648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:36.951 [2024-07-25 00:43:59.349969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.951 [2024-07-25 00:43:59.350040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:17:36.951 [2024-07-25 00:43:59.350146] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.951 [2024-07-25 00:43:59.352482] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.951 [2024-07-25 00:43:59.352648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:36.951 pt1 00:17:36.951 00:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:36.951 00:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:36.951 00:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:17:36.951 00:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:17:36.951 00:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:36.951 00:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:17:36.951 00:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:36.951 00:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:36.951 00:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:37.210 malloc2 00:17:37.210 00:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:37.210 [2024-07-25 00:43:59.803353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:37.210 [2024-07-25 00:43:59.803616] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.210 [2024-07-25 00:43:59.803687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:17:37.210 [2024-07-25 00:43:59.803796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.210 [2024-07-25 00:43:59.806029] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.210 [2024-07-25 00:43:59.806187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:37.210 pt2 00:17:37.210 00:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:37.210 00:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:37.210 00:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:17:37.469 [2024-07-25 00:43:59.999443] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:37.469 [2024-07-25 00:44:00.001888] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:37.469 [2024-07-25 00:44:00.002265] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:17:37.469 [2024-07-25 00:44:00.002389] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:37.469 [2024-07-25 00:44:00.002576] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:17:37.469 [2024-07-25 00:44:00.002968] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:17:37.469 [2024-07-25 00:44:00.003007] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:17:37.469 [2024-07-25 00:44:00.003247] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.469 00:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:37.469 00:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:37.469 00:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:37.469 00:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:37.469 00:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:37.469 00:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:17:37.469 00:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:37.469 00:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:37.469 00:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:37.469 00:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:37.469 00:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.469 00:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.728 00:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:37.728 "name": "raid_bdev1", 00:17:37.728 "uuid": "8745e344-0c2a-4edb-bc66-ee89047137ac", 00:17:37.728 "strip_size_kb": 64, 00:17:37.728 "state": "online", 00:17:37.728 "raid_level": "concat", 00:17:37.728 "superblock": true, 00:17:37.728 "num_base_bdevs": 2, 00:17:37.728 "num_base_bdevs_discovered": 2, 00:17:37.728 "num_base_bdevs_operational": 2, 00:17:37.728 "base_bdevs_list": [ 00:17:37.728 { 00:17:37.728 "name": "pt1", 00:17:37.728 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:37.728 "is_configured": true, 00:17:37.728 "data_offset": 2048, 00:17:37.729 "data_size": 63488 00:17:37.729 }, 00:17:37.729 { 00:17:37.729 "name": "pt2", 00:17:37.729 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:37.729 "is_configured": true, 00:17:37.729 "data_offset": 2048, 00:17:37.729 "data_size": 63488 00:17:37.729 } 00:17:37.729 ] 00:17:37.729 }' 00:17:37.729 00:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:37.729 00:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.296 00:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:17:38.296 00:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:38.296 00:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:38.296 00:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:38.296 00:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:38.296 00:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:38.296 00:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:38.296 00:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:38.555 [2024-07-25 00:44:00.999853] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:38.555 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:38.555 "name": "raid_bdev1", 00:17:38.555 "aliases": [ 00:17:38.555 "8745e344-0c2a-4edb-bc66-ee89047137ac" 00:17:38.555 ], 00:17:38.555 "product_name": "Raid Volume", 00:17:38.555 "block_size": 512, 00:17:38.555 "num_blocks": 126976, 00:17:38.555 "uuid": "8745e344-0c2a-4edb-bc66-ee89047137ac", 00:17:38.555 "assigned_rate_limits": { 00:17:38.555 "rw_ios_per_sec": 0, 00:17:38.555 "rw_mbytes_per_sec": 0, 00:17:38.555 "r_mbytes_per_sec": 0, 00:17:38.555 "w_mbytes_per_sec": 0 00:17:38.555 }, 
00:17:38.555 "claimed": false, 00:17:38.555 "zoned": false, 00:17:38.555 "supported_io_types": { 00:17:38.555 "read": true, 00:17:38.555 "write": true, 00:17:38.555 "unmap": true, 00:17:38.555 "flush": true, 00:17:38.555 "reset": true, 00:17:38.555 "nvme_admin": false, 00:17:38.555 "nvme_io": false, 00:17:38.555 "nvme_io_md": false, 00:17:38.555 "write_zeroes": true, 00:17:38.555 "zcopy": false, 00:17:38.555 "get_zone_info": false, 00:17:38.555 "zone_management": false, 00:17:38.555 "zone_append": false, 00:17:38.555 "compare": false, 00:17:38.555 "compare_and_write": false, 00:17:38.555 "abort": false, 00:17:38.555 "seek_hole": false, 00:17:38.555 "seek_data": false, 00:17:38.555 "copy": false, 00:17:38.555 "nvme_iov_md": false 00:17:38.555 }, 00:17:38.555 "memory_domains": [ 00:17:38.555 { 00:17:38.555 "dma_device_id": "system", 00:17:38.555 "dma_device_type": 1 00:17:38.555 }, 00:17:38.555 { 00:17:38.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.555 "dma_device_type": 2 00:17:38.555 }, 00:17:38.555 { 00:17:38.555 "dma_device_id": "system", 00:17:38.555 "dma_device_type": 1 00:17:38.555 }, 00:17:38.555 { 00:17:38.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.555 "dma_device_type": 2 00:17:38.555 } 00:17:38.555 ], 00:17:38.555 "driver_specific": { 00:17:38.555 "raid": { 00:17:38.555 "uuid": "8745e344-0c2a-4edb-bc66-ee89047137ac", 00:17:38.555 "strip_size_kb": 64, 00:17:38.555 "state": "online", 00:17:38.555 "raid_level": "concat", 00:17:38.555 "superblock": true, 00:17:38.555 "num_base_bdevs": 2, 00:17:38.555 "num_base_bdevs_discovered": 2, 00:17:38.555 "num_base_bdevs_operational": 2, 00:17:38.555 "base_bdevs_list": [ 00:17:38.555 { 00:17:38.555 "name": "pt1", 00:17:38.555 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:38.555 "is_configured": true, 00:17:38.555 "data_offset": 2048, 00:17:38.555 "data_size": 63488 00:17:38.555 }, 00:17:38.555 { 00:17:38.555 "name": "pt2", 00:17:38.555 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:38.555 "is_configured": true, 00:17:38.555 "data_offset": 2048, 00:17:38.555 "data_size": 63488 00:17:38.555 } 00:17:38.555 ] 00:17:38.555 } 00:17:38.555 } 00:17:38.555 }' 00:17:38.555 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:38.555 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:38.555 pt2' 00:17:38.555 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:38.555 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:38.555 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:38.813 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:38.813 "name": "pt1", 00:17:38.813 "aliases": [ 00:17:38.813 "00000000-0000-0000-0000-000000000001" 00:17:38.813 ], 00:17:38.813 "product_name": "passthru", 00:17:38.813 "block_size": 512, 00:17:38.813 "num_blocks": 65536, 00:17:38.813 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:38.813 "assigned_rate_limits": { 00:17:38.813 "rw_ios_per_sec": 0, 00:17:38.813 "rw_mbytes_per_sec": 0, 00:17:38.813 "r_mbytes_per_sec": 0, 00:17:38.813 "w_mbytes_per_sec": 0 00:17:38.813 }, 00:17:38.813 "claimed": true, 00:17:38.813 "claim_type": "exclusive_write", 00:17:38.813 "zoned": false, 00:17:38.813 
"supported_io_types": { 00:17:38.813 "read": true, 00:17:38.813 "write": true, 00:17:38.813 "unmap": true, 00:17:38.814 "flush": true, 00:17:38.814 "reset": true, 00:17:38.814 "nvme_admin": false, 00:17:38.814 "nvme_io": false, 00:17:38.814 "nvme_io_md": false, 00:17:38.814 "write_zeroes": true, 00:17:38.814 "zcopy": true, 00:17:38.814 "get_zone_info": false, 00:17:38.814 "zone_management": false, 00:17:38.814 "zone_append": false, 00:17:38.814 "compare": false, 00:17:38.814 "compare_and_write": false, 00:17:38.814 "abort": true, 00:17:38.814 "seek_hole": false, 00:17:38.814 "seek_data": false, 00:17:38.814 "copy": true, 00:17:38.814 "nvme_iov_md": false 00:17:38.814 }, 00:17:38.814 "memory_domains": [ 00:17:38.814 { 00:17:38.814 "dma_device_id": "system", 00:17:38.814 "dma_device_type": 1 00:17:38.814 }, 00:17:38.814 { 00:17:38.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.814 "dma_device_type": 2 00:17:38.814 } 00:17:38.814 ], 00:17:38.814 "driver_specific": { 00:17:38.814 "passthru": { 00:17:38.814 "name": "pt1", 00:17:38.814 "base_bdev_name": "malloc1" 00:17:38.814 } 00:17:38.814 } 00:17:38.814 }' 00:17:38.814 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:38.814 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:38.814 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:38.814 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:38.814 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:38.814 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:38.814 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:38.814 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:39.072 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:39.072 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:39.072 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:39.072 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:39.072 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:39.072 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:39.072 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:39.331 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:39.331 "name": "pt2", 00:17:39.331 "aliases": [ 00:17:39.331 "00000000-0000-0000-0000-000000000002" 00:17:39.331 ], 00:17:39.331 "product_name": "passthru", 00:17:39.331 "block_size": 512, 00:17:39.331 "num_blocks": 65536, 00:17:39.331 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:39.331 "assigned_rate_limits": { 00:17:39.331 "rw_ios_per_sec": 0, 00:17:39.331 "rw_mbytes_per_sec": 0, 00:17:39.331 "r_mbytes_per_sec": 0, 00:17:39.331 "w_mbytes_per_sec": 0 00:17:39.331 }, 00:17:39.331 "claimed": true, 00:17:39.331 "claim_type": "exclusive_write", 00:17:39.331 "zoned": false, 00:17:39.331 "supported_io_types": { 00:17:39.331 "read": true, 00:17:39.331 "write": true, 00:17:39.331 "unmap": true, 00:17:39.331 "flush": true, 00:17:39.331 
"reset": true, 00:17:39.331 "nvme_admin": false, 00:17:39.331 "nvme_io": false, 00:17:39.331 "nvme_io_md": false, 00:17:39.331 "write_zeroes": true, 00:17:39.331 "zcopy": true, 00:17:39.331 "get_zone_info": false, 00:17:39.331 "zone_management": false, 00:17:39.331 "zone_append": false, 00:17:39.331 "compare": false, 00:17:39.331 "compare_and_write": false, 00:17:39.331 "abort": true, 00:17:39.331 "seek_hole": false, 00:17:39.331 "seek_data": false, 00:17:39.331 "copy": true, 00:17:39.331 "nvme_iov_md": false 00:17:39.331 }, 00:17:39.331 "memory_domains": [ 00:17:39.331 { 00:17:39.331 "dma_device_id": "system", 00:17:39.331 "dma_device_type": 1 00:17:39.331 }, 00:17:39.331 { 00:17:39.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.331 "dma_device_type": 2 00:17:39.331 } 00:17:39.331 ], 00:17:39.331 "driver_specific": { 00:17:39.331 "passthru": { 00:17:39.331 "name": "pt2", 00:17:39.331 "base_bdev_name": "malloc2" 00:17:39.331 } 00:17:39.331 } 00:17:39.331 }' 00:17:39.331 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:39.331 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:39.331 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:39.331 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:39.331 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:39.331 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:39.331 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:39.331 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:39.590 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:39.590 00:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:39.590 00:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:39.590 00:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:39.590 00:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:39.590 00:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:17:39.590 [2024-07-25 00:44:02.232110] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:39.850 00:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=8745e344-0c2a-4edb-bc66-ee89047137ac 00:17:39.850 00:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 8745e344-0c2a-4edb-bc66-ee89047137ac ']' 00:17:39.850 00:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:39.850 [2024-07-25 00:44:02.411909] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:39.850 [2024-07-25 00:44:02.412155] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:39.850 [2024-07-25 00:44:02.412348] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:39.850 [2024-07-25 00:44:02.412442] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:17:39.850 [2024-07-25 00:44:02.412652] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:17:39.850 00:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.850 00:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:17:40.109 00:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:17:40.109 00:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:17:40.109 00:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:40.109 00:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:40.368 00:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:40.368 00:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:40.368 00:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:40.368 00:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:40.628 00:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:17:40.628 00:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:17:40.628 00:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:17:40.628 00:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:17:40.628 00:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.628 00:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:40.628 00:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.628 00:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:40.628 00:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.628 00:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:40.628 00:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.628 00:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:40.628 00:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:17:40.889 [2024-07-25 00:44:03.484145] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:40.889 [2024-07-25 00:44:03.486356] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:40.889 [2024-07-25 00:44:03.486546] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:40.889 [2024-07-25 00:44:03.486742] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:40.889 [2024-07-25 00:44:03.486855] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:40.889 [2024-07-25 00:44:03.486891] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:17:40.889 request: 00:17:40.889 { 00:17:40.889 "name": "raid_bdev1", 00:17:40.889 "raid_level": "concat", 00:17:40.889 "base_bdevs": [ 00:17:40.889 "malloc1", 00:17:40.889 "malloc2" 00:17:40.889 ], 00:17:40.889 "strip_size_kb": 64, 00:17:40.889 "superblock": false, 00:17:40.889 "method": "bdev_raid_create", 00:17:40.889 "req_id": 1 00:17:40.889 } 00:17:40.889 Got JSON-RPC error response 00:17:40.889 response: 00:17:40.889 { 00:17:40.889 "code": -17, 00:17:40.889 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:40.889 } 00:17:40.889 00:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:17:40.889 00:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:40.889 00:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:40.889 00:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:40.889 00:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:17:40.889 00:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.148 00:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:17:41.148 00:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:17:41.148 00:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:41.406 [2024-07-25 00:44:03.964146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:41.406 [2024-07-25 00:44:03.964469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.406 [2024-07-25 00:44:03.964530] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:41.406 [2024-07-25 00:44:03.964621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.406 [2024-07-25 00:44:03.966923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.406 [2024-07-25 00:44:03.967118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:41.406 [2024-07-25 00:44:03.967311] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:41.406 [2024-07-25 00:44:03.967441] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:41.406 pt1 00:17:41.406 00:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # 
verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:17:41.406 00:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:41.406 00:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:41.406 00:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:41.406 00:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:41.406 00:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:41.406 00:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:41.406 00:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:41.406 00:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:41.406 00:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:41.406 00:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.406 00:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.665 00:44:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:41.665 "name": "raid_bdev1", 00:17:41.665 "uuid": "8745e344-0c2a-4edb-bc66-ee89047137ac", 00:17:41.665 "strip_size_kb": 64, 00:17:41.665 "state": "configuring", 00:17:41.665 "raid_level": "concat", 00:17:41.665 "superblock": true, 00:17:41.665 "num_base_bdevs": 2, 00:17:41.665 "num_base_bdevs_discovered": 1, 00:17:41.665 "num_base_bdevs_operational": 2, 00:17:41.665 "base_bdevs_list": [ 00:17:41.665 { 00:17:41.665 "name": "pt1", 00:17:41.665 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:41.665 "is_configured": true, 00:17:41.665 "data_offset": 2048, 00:17:41.665 "data_size": 63488 00:17:41.665 }, 00:17:41.665 { 00:17:41.665 "name": null, 00:17:41.665 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:41.665 "is_configured": false, 00:17:41.665 "data_offset": 2048, 00:17:41.665 "data_size": 63488 00:17:41.665 } 00:17:41.665 ] 00:17:41.665 }' 00:17:41.665 00:44:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:41.665 00:44:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.233 00:44:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:17:42.233 00:44:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:17:42.233 00:44:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:42.233 00:44:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:42.493 [2024-07-25 00:44:05.016311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:42.493 [2024-07-25 00:44:05.016654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.493 [2024-07-25 00:44:05.016720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:42.493 [2024-07-25 00:44:05.016811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.493 [2024-07-25 
00:44:05.017292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.493 [2024-07-25 00:44:05.017448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:42.493 [2024-07-25 00:44:05.017636] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:42.493 [2024-07-25 00:44:05.017736] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:42.493 [2024-07-25 00:44:05.017884] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:17:42.493 [2024-07-25 00:44:05.018037] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:42.493 [2024-07-25 00:44:05.018173] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:17:42.493 [2024-07-25 00:44:05.018659] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:17:42.493 [2024-07-25 00:44:05.018777] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:17:42.493 [2024-07-25 00:44:05.018983] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.493 pt2 00:17:42.493 00:44:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:42.493 00:44:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:42.493 00:44:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:42.493 00:44:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:42.493 00:44:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:42.493 00:44:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:42.493 00:44:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:42.493 00:44:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:42.493 00:44:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:42.493 00:44:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:42.493 00:44:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:42.493 00:44:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:42.493 00:44:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.493 00:44:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.752 00:44:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:42.752 "name": "raid_bdev1", 00:17:42.752 "uuid": "8745e344-0c2a-4edb-bc66-ee89047137ac", 00:17:42.752 "strip_size_kb": 64, 00:17:42.752 "state": "online", 00:17:42.752 "raid_level": "concat", 00:17:42.752 "superblock": true, 00:17:42.752 "num_base_bdevs": 2, 00:17:42.752 "num_base_bdevs_discovered": 2, 00:17:42.752 "num_base_bdevs_operational": 2, 00:17:42.752 "base_bdevs_list": [ 00:17:42.752 { 00:17:42.752 "name": "pt1", 00:17:42.752 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:42.752 "is_configured": true, 00:17:42.752 "data_offset": 2048, 00:17:42.752 
"data_size": 63488 00:17:42.752 }, 00:17:42.752 { 00:17:42.752 "name": "pt2", 00:17:42.752 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:42.752 "is_configured": true, 00:17:42.752 "data_offset": 2048, 00:17:42.752 "data_size": 63488 00:17:42.752 } 00:17:42.752 ] 00:17:42.752 }' 00:17:42.752 00:44:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:42.752 00:44:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.382 00:44:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:17:43.382 00:44:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:43.382 00:44:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:43.382 00:44:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:43.382 00:44:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:43.382 00:44:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:43.382 00:44:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:43.382 00:44:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:43.641 [2024-07-25 00:44:06.040767] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:43.641 00:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:43.641 "name": "raid_bdev1", 00:17:43.641 "aliases": [ 00:17:43.641 "8745e344-0c2a-4edb-bc66-ee89047137ac" 00:17:43.641 ], 00:17:43.641 "product_name": "Raid Volume", 00:17:43.641 "block_size": 512, 00:17:43.641 "num_blocks": 126976, 00:17:43.641 "uuid": "8745e344-0c2a-4edb-bc66-ee89047137ac", 00:17:43.641 "assigned_rate_limits": { 00:17:43.641 "rw_ios_per_sec": 0, 00:17:43.641 "rw_mbytes_per_sec": 0, 00:17:43.641 "r_mbytes_per_sec": 0, 00:17:43.641 "w_mbytes_per_sec": 0 00:17:43.641 }, 00:17:43.641 "claimed": false, 00:17:43.641 "zoned": false, 00:17:43.641 "supported_io_types": { 00:17:43.641 "read": true, 00:17:43.641 "write": true, 00:17:43.641 "unmap": true, 00:17:43.641 "flush": true, 00:17:43.641 "reset": true, 00:17:43.641 "nvme_admin": false, 00:17:43.641 "nvme_io": false, 00:17:43.641 "nvme_io_md": false, 00:17:43.641 "write_zeroes": true, 00:17:43.641 "zcopy": false, 00:17:43.641 "get_zone_info": false, 00:17:43.641 "zone_management": false, 00:17:43.641 "zone_append": false, 00:17:43.641 "compare": false, 00:17:43.641 "compare_and_write": false, 00:17:43.641 "abort": false, 00:17:43.641 "seek_hole": false, 00:17:43.641 "seek_data": false, 00:17:43.641 "copy": false, 00:17:43.641 "nvme_iov_md": false 00:17:43.641 }, 00:17:43.641 "memory_domains": [ 00:17:43.641 { 00:17:43.641 "dma_device_id": "system", 00:17:43.641 "dma_device_type": 1 00:17:43.641 }, 00:17:43.641 { 00:17:43.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.641 "dma_device_type": 2 00:17:43.641 }, 00:17:43.641 { 00:17:43.641 "dma_device_id": "system", 00:17:43.641 "dma_device_type": 1 00:17:43.641 }, 00:17:43.641 { 00:17:43.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.641 "dma_device_type": 2 00:17:43.641 } 00:17:43.641 ], 00:17:43.641 "driver_specific": { 00:17:43.641 "raid": { 00:17:43.641 "uuid": "8745e344-0c2a-4edb-bc66-ee89047137ac", 00:17:43.641 "strip_size_kb": 64, 00:17:43.641 "state": 
"online", 00:17:43.641 "raid_level": "concat", 00:17:43.641 "superblock": true, 00:17:43.641 "num_base_bdevs": 2, 00:17:43.641 "num_base_bdevs_discovered": 2, 00:17:43.641 "num_base_bdevs_operational": 2, 00:17:43.641 "base_bdevs_list": [ 00:17:43.641 { 00:17:43.641 "name": "pt1", 00:17:43.641 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:43.641 "is_configured": true, 00:17:43.641 "data_offset": 2048, 00:17:43.641 "data_size": 63488 00:17:43.641 }, 00:17:43.641 { 00:17:43.641 "name": "pt2", 00:17:43.641 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:43.641 "is_configured": true, 00:17:43.641 "data_offset": 2048, 00:17:43.641 "data_size": 63488 00:17:43.641 } 00:17:43.641 ] 00:17:43.641 } 00:17:43.641 } 00:17:43.641 }' 00:17:43.641 00:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:43.641 00:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:43.641 pt2' 00:17:43.641 00:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:43.642 00:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:43.642 00:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:43.901 00:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:43.901 "name": "pt1", 00:17:43.901 "aliases": [ 00:17:43.901 "00000000-0000-0000-0000-000000000001" 00:17:43.901 ], 00:17:43.901 "product_name": "passthru", 00:17:43.901 "block_size": 512, 00:17:43.901 "num_blocks": 65536, 00:17:43.901 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:43.901 "assigned_rate_limits": { 00:17:43.901 "rw_ios_per_sec": 0, 00:17:43.901 "rw_mbytes_per_sec": 0, 00:17:43.901 "r_mbytes_per_sec": 0, 00:17:43.901 "w_mbytes_per_sec": 0 00:17:43.901 }, 00:17:43.901 "claimed": true, 00:17:43.901 "claim_type": "exclusive_write", 00:17:43.901 "zoned": false, 00:17:43.901 "supported_io_types": { 00:17:43.901 "read": true, 00:17:43.901 "write": true, 00:17:43.901 "unmap": true, 00:17:43.901 "flush": true, 00:17:43.901 "reset": true, 00:17:43.901 "nvme_admin": false, 00:17:43.901 "nvme_io": false, 00:17:43.901 "nvme_io_md": false, 00:17:43.901 "write_zeroes": true, 00:17:43.901 "zcopy": true, 00:17:43.901 "get_zone_info": false, 00:17:43.901 "zone_management": false, 00:17:43.901 "zone_append": false, 00:17:43.901 "compare": false, 00:17:43.901 "compare_and_write": false, 00:17:43.901 "abort": true, 00:17:43.901 "seek_hole": false, 00:17:43.901 "seek_data": false, 00:17:43.901 "copy": true, 00:17:43.901 "nvme_iov_md": false 00:17:43.901 }, 00:17:43.901 "memory_domains": [ 00:17:43.901 { 00:17:43.901 "dma_device_id": "system", 00:17:43.901 "dma_device_type": 1 00:17:43.901 }, 00:17:43.901 { 00:17:43.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.901 "dma_device_type": 2 00:17:43.901 } 00:17:43.901 ], 00:17:43.901 "driver_specific": { 00:17:43.901 "passthru": { 00:17:43.901 "name": "pt1", 00:17:43.901 "base_bdev_name": "malloc1" 00:17:43.901 } 00:17:43.901 } 00:17:43.901 }' 00:17:43.901 00:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:43.901 00:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:43.901 00:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:17:43.901 00:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:43.901 00:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:43.901 00:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:43.901 00:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:44.160 00:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:44.160 00:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:44.160 00:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:44.160 00:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:44.160 00:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:44.160 00:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:44.160 00:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:44.160 00:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:44.418 00:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:44.418 "name": "pt2", 00:17:44.418 "aliases": [ 00:17:44.418 "00000000-0000-0000-0000-000000000002" 00:17:44.418 ], 00:17:44.418 "product_name": "passthru", 00:17:44.418 "block_size": 512, 00:17:44.418 "num_blocks": 65536, 00:17:44.418 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:44.419 "assigned_rate_limits": { 00:17:44.419 "rw_ios_per_sec": 0, 00:17:44.419 "rw_mbytes_per_sec": 0, 00:17:44.419 "r_mbytes_per_sec": 0, 00:17:44.419 "w_mbytes_per_sec": 0 00:17:44.419 }, 00:17:44.419 "claimed": true, 00:17:44.419 "claim_type": "exclusive_write", 00:17:44.419 "zoned": false, 00:17:44.419 "supported_io_types": { 00:17:44.419 "read": true, 00:17:44.419 "write": true, 00:17:44.419 "unmap": true, 00:17:44.419 "flush": true, 00:17:44.419 "reset": true, 00:17:44.419 "nvme_admin": false, 00:17:44.419 "nvme_io": false, 00:17:44.419 "nvme_io_md": false, 00:17:44.419 "write_zeroes": true, 00:17:44.419 "zcopy": true, 00:17:44.419 "get_zone_info": false, 00:17:44.419 "zone_management": false, 00:17:44.419 "zone_append": false, 00:17:44.419 "compare": false, 00:17:44.419 "compare_and_write": false, 00:17:44.419 "abort": true, 00:17:44.419 "seek_hole": false, 00:17:44.419 "seek_data": false, 00:17:44.419 "copy": true, 00:17:44.419 "nvme_iov_md": false 00:17:44.419 }, 00:17:44.419 "memory_domains": [ 00:17:44.419 { 00:17:44.419 "dma_device_id": "system", 00:17:44.419 "dma_device_type": 1 00:17:44.419 }, 00:17:44.419 { 00:17:44.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.419 "dma_device_type": 2 00:17:44.419 } 00:17:44.419 ], 00:17:44.419 "driver_specific": { 00:17:44.419 "passthru": { 00:17:44.419 "name": "pt2", 00:17:44.419 "base_bdev_name": "malloc2" 00:17:44.419 } 00:17:44.419 } 00:17:44.419 }' 00:17:44.419 00:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:44.419 00:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:44.677 00:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:44.677 00:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:44.677 00:44:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:44.677 00:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:44.677 00:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:44.677 00:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:44.677 00:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:44.677 00:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:44.677 00:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:44.936 00:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:44.936 00:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:44.936 00:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:17:44.936 [2024-07-25 00:44:07.521023] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:44.936 00:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 8745e344-0c2a-4edb-bc66-ee89047137ac '!=' 8745e344-0c2a-4edb-bc66-ee89047137ac ']' 00:17:44.936 00:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:17:44.936 00:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:44.936 00:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:44.936 00:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 123833 00:17:44.936 00:44:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 123833 ']' 00:17:44.936 00:44:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 123833 00:17:44.936 00:44:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:17:44.936 00:44:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:44.936 00:44:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 123833 00:17:44.936 00:44:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:44.936 00:44:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:44.936 00:44:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 123833' 00:17:44.936 killing process with pid 123833 00:17:44.936 00:44:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 123833 00:17:44.936 [2024-07-25 00:44:07.566875] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:44.936 00:44:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 123833 00:17:44.936 [2024-07-25 00:44:07.567064] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:44.936 [2024-07-25 00:44:07.567234] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:44.936 [2024-07-25 00:44:07.567301] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:17:45.195 [2024-07-25 00:44:07.773624] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:46.569 00:44:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@564 -- # return 0 00:17:46.569 00:17:46.569 real 0m11.129s 00:17:46.569 user 0m18.895s 00:17:46.569 sys 0m1.556s 00:17:46.569 00:44:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:46.569 00:44:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.569 ************************************ 00:17:46.569 END TEST raid_superblock_test 00:17:46.569 ************************************ 00:17:46.569 00:44:09 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:17:46.569 00:44:09 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:46.569 00:44:09 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:46.569 00:44:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:46.569 ************************************ 00:17:46.569 START TEST raid_read_error_test 00:17:46.569 ************************************ 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 read 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.ZrXyhkoauG 00:17:46.569 00:44:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=124202 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 124202 /var/tmp/spdk-raid.sock 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 124202 ']' 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:46.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:46.569 00:44:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.827 [2024-07-25 00:44:09.273499] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:17:46.827 [2024-07-25 00:44:09.273665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124202 ] 00:17:46.827 [2024-07-25 00:44:09.430184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.085 [2024-07-25 00:44:09.626456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.342 [2024-07-25 00:44:09.820282] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:47.600 00:44:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.600 00:44:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:17:47.600 00:44:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:47.600 00:44:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:47.858 BaseBdev1_malloc 00:17:47.858 00:44:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:48.116 true 00:17:48.116 00:44:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:48.373 [2024-07-25 00:44:10.923828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:48.373 [2024-07-25 00:44:10.923951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.373 [2024-07-25 00:44:10.923995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:17:48.373 [2024-07-25 00:44:10.924018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.373 [2024-07-25 00:44:10.926719] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.373 [2024-07-25 00:44:10.926778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:48.373 BaseBdev1 00:17:48.373 00:44:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:48.373 00:44:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:48.630 BaseBdev2_malloc 00:17:48.886 00:44:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:49.144 true 00:17:49.144 00:44:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:49.401 [2024-07-25 00:44:11.875937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:49.401 [2024-07-25 00:44:11.876058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.401 [2024-07-25 00:44:11.876105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:49.401 [2024-07-25 00:44:11.876128] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.401 [2024-07-25 00:44:11.878753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.401 [2024-07-25 00:44:11.878819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:49.401 BaseBdev2 00:17:49.401 00:44:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:17:49.659 [2024-07-25 00:44:12.176040] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:49.659 [2024-07-25 00:44:12.178410] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:49.659 [2024-07-25 00:44:12.178688] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:17:49.659 [2024-07-25 00:44:12.178704] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:49.659 [2024-07-25 00:44:12.178826] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:17:49.659 [2024-07-25 00:44:12.179189] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:17:49.659 [2024-07-25 00:44:12.179212] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:17:49.659 [2024-07-25 00:44:12.179386] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.659 00:44:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:49.659 00:44:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:49.659 00:44:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:49.659 00:44:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:49.659 00:44:12 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:49.659 00:44:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:49.659 00:44:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:49.659 00:44:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:49.659 00:44:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:49.659 00:44:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:49.659 00:44:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.659 00:44:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.916 00:44:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:49.916 "name": "raid_bdev1", 00:17:49.916 "uuid": "969f8a3b-cd9c-4961-a573-0f64fcda67e1", 00:17:49.916 "strip_size_kb": 64, 00:17:49.916 "state": "online", 00:17:49.916 "raid_level": "concat", 00:17:49.916 "superblock": true, 00:17:49.916 "num_base_bdevs": 2, 00:17:49.916 "num_base_bdevs_discovered": 2, 00:17:49.916 "num_base_bdevs_operational": 2, 00:17:49.916 "base_bdevs_list": [ 00:17:49.916 { 00:17:49.916 "name": "BaseBdev1", 00:17:49.916 "uuid": "f502f9d5-c73c-543c-a708-4b8645e71e47", 00:17:49.916 "is_configured": true, 00:17:49.916 "data_offset": 2048, 00:17:49.916 "data_size": 63488 00:17:49.916 }, 00:17:49.916 { 00:17:49.916 "name": "BaseBdev2", 00:17:49.916 "uuid": "eaea5592-626d-56be-aa20-cd9e9e98400c", 00:17:49.916 "is_configured": true, 00:17:49.916 "data_offset": 2048, 00:17:49.916 "data_size": 63488 00:17:49.916 } 00:17:49.916 ] 00:17:49.916 }' 00:17:49.916 00:44:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:49.916 00:44:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.853 00:44:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:50.853 00:44:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:50.853 [2024-07-25 00:44:13.325683] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:17:51.788 00:44:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:52.047 00:44:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:52.047 00:44:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:17:52.047 00:44:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:17:52.047 00:44:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:52.047 00:44:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:52.047 00:44:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:52.047 00:44:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:52.047 00:44:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- 
# local strip_size=64 00:17:52.047 00:44:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:52.047 00:44:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:52.047 00:44:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:52.047 00:44:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:52.047 00:44:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:52.047 00:44:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.047 00:44:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.319 00:44:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:52.319 "name": "raid_bdev1", 00:17:52.319 "uuid": "969f8a3b-cd9c-4961-a573-0f64fcda67e1", 00:17:52.319 "strip_size_kb": 64, 00:17:52.319 "state": "online", 00:17:52.319 "raid_level": "concat", 00:17:52.319 "superblock": true, 00:17:52.319 "num_base_bdevs": 2, 00:17:52.319 "num_base_bdevs_discovered": 2, 00:17:52.319 "num_base_bdevs_operational": 2, 00:17:52.319 "base_bdevs_list": [ 00:17:52.319 { 00:17:52.319 "name": "BaseBdev1", 00:17:52.319 "uuid": "f502f9d5-c73c-543c-a708-4b8645e71e47", 00:17:52.319 "is_configured": true, 00:17:52.319 "data_offset": 2048, 00:17:52.319 "data_size": 63488 00:17:52.319 }, 00:17:52.319 { 00:17:52.319 "name": "BaseBdev2", 00:17:52.319 "uuid": "eaea5592-626d-56be-aa20-cd9e9e98400c", 00:17:52.319 "is_configured": true, 00:17:52.319 "data_offset": 2048, 00:17:52.319 "data_size": 63488 00:17:52.319 } 00:17:52.319 ] 00:17:52.319 }' 00:17:52.319 00:44:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:52.319 00:44:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.900 00:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:53.158 [2024-07-25 00:44:15.766413] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:53.158 [2024-07-25 00:44:15.766460] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:53.158 [2024-07-25 00:44:15.768961] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:53.158 [2024-07-25 00:44:15.769019] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.158 [2024-07-25 00:44:15.769055] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:53.158 [2024-07-25 00:44:15.769064] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:17:53.158 0 00:17:53.158 00:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 124202 00:17:53.158 00:44:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 124202 ']' 00:17:53.158 00:44:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 124202 00:17:53.158 00:44:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:17:53.158 00:44:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
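At this point the read-error pass is complete: read failures were injected underneath BaseBdev1, bdevperf kept issuing I/O against raid_bdev1, and the array is expected to stay online with both base bdevs, because a concat volume has no redundancy to fall back on; the injected errors simply surface as failed reads in the bdevperf statistics checked next. The injection itself is the single RPC that appears earlier in this run (EE_BaseBdev1_malloc is the error bdev wrapped by the BaseBdev1 passthru):

    # Make reads on the error bdev under BaseBdev1 fail; bdevperf then observes
    # the failures through raid_bdev1 while the raid state remains online.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_error_inject_error EE_BaseBdev1_malloc read failure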
00:17:53.158 00:44:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124202 00:17:53.417 00:44:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:53.417 killing process with pid 124202 00:17:53.417 00:44:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:53.417 00:44:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124202' 00:17:53.417 00:44:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 124202 00:17:53.417 00:44:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 124202 00:17:53.417 [2024-07-25 00:44:15.822699] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:53.417 [2024-07-25 00:44:15.947635] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:54.791 00:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.ZrXyhkoauG 00:17:54.791 00:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:54.791 00:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:54.791 00:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.41 00:17:54.791 00:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:17:54.791 00:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:54.791 00:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:54.791 00:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.41 != \0\.\0\0 ]] 00:17:54.791 00:17:54.791 real 0m8.117s 00:17:54.791 user 0m12.203s 00:17:54.791 sys 0m1.046s 00:17:54.791 00:44:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:54.791 00:44:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.791 ************************************ 00:17:54.791 END TEST raid_read_error_test 00:17:54.791 ************************************ 00:17:54.791 00:44:17 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:17:54.791 00:44:17 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:54.791 00:44:17 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:54.791 00:44:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:54.791 ************************************ 00:17:54.791 START TEST raid_write_error_test 00:17:54.791 ************************************ 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 write 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.CK1YT13o0L 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=124405 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 124405 /var/tmp/spdk-raid.sock 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 124405 ']' 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:54.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:54.791 00:44:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.049 [2024-07-25 00:44:17.458299] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
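Before the write-error pass starts its bdevperf run, it rebuilds the same four-layer stack the read test used: a malloc bdev, an error bdev wrapped around it, a passthru on top, and finally the concat raid. A condensed sketch of that sequence, taken from the RPC calls visible in this log (BaseBdev2 is created the same way):

    # Build one base bdev leg and assemble the concat raid under test.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_malloc_create 32 512 -b BaseBdev1_malloc             # 32 MB backing store, 512 B blocks
    $RPC bdev_error_create BaseBdev1_malloc                        # error-injection layer: EE_BaseBdev1_malloc
    $RPC bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1  # name the leg BaseBdev1
    # ...the same three calls for BaseBdev2, then:
    $RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s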
00:17:55.049 [2024-07-25 00:44:17.458564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124405 ] 00:17:55.049 [2024-07-25 00:44:17.635622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.306 [2024-07-25 00:44:17.881517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.564 [2024-07-25 00:44:18.124791] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:56.129 00:44:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:56.129 00:44:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:17:56.129 00:44:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:56.129 00:44:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:56.387 BaseBdev1_malloc 00:17:56.387 00:44:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:56.644 true 00:17:56.644 00:44:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:56.902 [2024-07-25 00:44:19.524927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:56.902 [2024-07-25 00:44:19.525080] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.902 [2024-07-25 00:44:19.525134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:17:56.902 [2024-07-25 00:44:19.525166] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.902 [2024-07-25 00:44:19.528152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.902 [2024-07-25 00:44:19.528220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:56.902 BaseBdev1 00:17:56.902 00:44:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:56.902 00:44:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:57.465 BaseBdev2_malloc 00:17:57.465 00:44:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:57.722 true 00:17:57.722 00:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:57.980 [2024-07-25 00:44:20.412921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:57.980 [2024-07-25 00:44:20.413070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.980 [2024-07-25 00:44:20.413139] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:57.980 [2024-07-25 
00:44:20.413174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.980 [2024-07-25 00:44:20.415962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.980 [2024-07-25 00:44:20.416029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:57.980 BaseBdev2 00:17:57.980 00:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:17:58.237 [2024-07-25 00:44:20.685064] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:58.237 [2024-07-25 00:44:20.687361] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:58.237 [2024-07-25 00:44:20.687627] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:17:58.237 [2024-07-25 00:44:20.687649] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:58.237 [2024-07-25 00:44:20.687791] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:17:58.237 [2024-07-25 00:44:20.688184] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:17:58.237 [2024-07-25 00:44:20.688206] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:17:58.237 [2024-07-25 00:44:20.688364] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.237 00:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:58.237 00:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:58.237 00:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:58.237 00:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:58.237 00:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:58.237 00:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:58.237 00:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:58.237 00:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:58.237 00:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:58.237 00:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:58.237 00:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.237 00:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.494 00:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:58.494 "name": "raid_bdev1", 00:17:58.494 "uuid": "b29e9ae4-7f4d-4cca-8554-5a57783d07d1", 00:17:58.494 "strip_size_kb": 64, 00:17:58.494 "state": "online", 00:17:58.494 "raid_level": "concat", 00:17:58.494 "superblock": true, 00:17:58.494 "num_base_bdevs": 2, 00:17:58.494 "num_base_bdevs_discovered": 2, 00:17:58.494 "num_base_bdevs_operational": 2, 00:17:58.494 "base_bdevs_list": [ 00:17:58.494 { 
00:17:58.494 "name": "BaseBdev1", 00:17:58.494 "uuid": "0e811cf1-7a6f-540c-b552-a3d18a8d7fe0", 00:17:58.494 "is_configured": true, 00:17:58.494 "data_offset": 2048, 00:17:58.494 "data_size": 63488 00:17:58.494 }, 00:17:58.494 { 00:17:58.494 "name": "BaseBdev2", 00:17:58.494 "uuid": "fed134c8-bca4-59ab-900b-53248e9e86e5", 00:17:58.494 "is_configured": true, 00:17:58.494 "data_offset": 2048, 00:17:58.494 "data_size": 63488 00:17:58.494 } 00:17:58.494 ] 00:17:58.494 }' 00:17:58.494 00:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:58.494 00:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.058 00:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:59.058 00:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:59.315 [2024-07-25 00:44:21.714846] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:18:00.248 00:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:00.248 00:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:00.248 00:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:18:00.248 00:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:18:00.248 00:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:18:00.248 00:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:00.248 00:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:00.248 00:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:00.248 00:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:00.248 00:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:00.248 00:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:00.248 00:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:00.248 00:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:00.248 00:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:00.248 00:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.248 00:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.506 00:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:00.506 "name": "raid_bdev1", 00:18:00.506 "uuid": "b29e9ae4-7f4d-4cca-8554-5a57783d07d1", 00:18:00.506 "strip_size_kb": 64, 00:18:00.506 "state": "online", 00:18:00.506 "raid_level": "concat", 00:18:00.506 "superblock": true, 00:18:00.506 "num_base_bdevs": 2, 00:18:00.506 "num_base_bdevs_discovered": 2, 00:18:00.506 "num_base_bdevs_operational": 2, 00:18:00.506 "base_bdevs_list": [ 00:18:00.506 { 
00:18:00.506 "name": "BaseBdev1", 00:18:00.506 "uuid": "0e811cf1-7a6f-540c-b552-a3d18a8d7fe0", 00:18:00.506 "is_configured": true, 00:18:00.506 "data_offset": 2048, 00:18:00.506 "data_size": 63488 00:18:00.506 }, 00:18:00.506 { 00:18:00.506 "name": "BaseBdev2", 00:18:00.506 "uuid": "fed134c8-bca4-59ab-900b-53248e9e86e5", 00:18:00.506 "is_configured": true, 00:18:00.506 "data_offset": 2048, 00:18:00.506 "data_size": 63488 00:18:00.506 } 00:18:00.506 ] 00:18:00.506 }' 00:18:00.506 00:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:00.506 00:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.073 00:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:01.073 [2024-07-25 00:44:23.656517] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:01.073 [2024-07-25 00:44:23.656567] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:01.073 [2024-07-25 00:44:23.659124] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:01.073 [2024-07-25 00:44:23.659172] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.073 [2024-07-25 00:44:23.659204] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:01.073 [2024-07-25 00:44:23.659213] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:18:01.073 0 00:18:01.073 00:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 124405 00:18:01.073 00:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 124405 ']' 00:18:01.073 00:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 124405 00:18:01.073 00:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:18:01.073 00:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:01.073 00:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124405 00:18:01.073 00:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:01.073 00:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:01.073 killing process with pid 124405 00:18:01.073 00:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124405' 00:18:01.073 00:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 124405 00:18:01.073 [2024-07-25 00:44:23.707459] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:01.073 00:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 124405 00:18:01.331 [2024-07-25 00:44:23.832857] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:02.707 00:44:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.CK1YT13o0L 00:18:02.707 00:44:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:02.707 00:44:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:02.707 00:44:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 
00:18:02.707 00:44:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:18:02.707 00:44:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:02.707 00:44:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:02.707 00:44:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:18:02.707 00:18:02.707 real 0m7.778s 00:18:02.707 user 0m11.689s 00:18:02.707 sys 0m0.915s 00:18:02.707 00:44:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:02.707 00:44:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.707 ************************************ 00:18:02.707 END TEST raid_write_error_test 00:18:02.707 ************************************ 00:18:02.707 00:44:25 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:18:02.707 00:44:25 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:18:02.707 00:44:25 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:02.707 00:44:25 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:02.707 00:44:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:02.707 ************************************ 00:18:02.707 START TEST raid_state_function_test 00:18:02.707 ************************************ 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 false 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local 
superblock_create_arg 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=124603 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 124603' 00:18:02.707 Process raid pid: 124603 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 124603 /var/tmp/spdk-raid.sock 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 124603 ']' 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:02.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:02.707 00:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.707 [2024-07-25 00:44:25.269843] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
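Unlike the two error tests above, raid_state_function_test exercises state transitions only, so it launches the minimal bdev_svc application shown above instead of bdevperf and drives every step over the same RPC socket. The checks that follow reuse the verify_raid_bdev_state pattern from earlier in this log; a one-line sketch of the query it performs (Existed_Raid is the raid name used in the commands that follow):

    # Read back the current state of the raid bdev being configured.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'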
00:18:02.707 [2024-07-25 00:44:25.270012] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.967 [2024-07-25 00:44:25.428222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.243 [2024-07-25 00:44:25.629394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.243 [2024-07-25 00:44:25.831266] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:03.818 00:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:03.818 00:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:18:03.818 00:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:04.075 [2024-07-25 00:44:26.564888] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:04.075 [2024-07-25 00:44:26.564987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:04.075 [2024-07-25 00:44:26.565013] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:04.075 [2024-07-25 00:44:26.565065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:04.075 00:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:04.075 00:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:04.075 00:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:04.075 00:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:04.075 00:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:04.075 00:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:04.075 00:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:04.075 00:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:04.075 00:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:04.075 00:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:04.075 00:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.075 00:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:04.333 00:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:04.333 "name": "Existed_Raid", 00:18:04.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.333 "strip_size_kb": 0, 00:18:04.333 "state": "configuring", 00:18:04.333 "raid_level": "raid1", 00:18:04.333 "superblock": false, 00:18:04.333 "num_base_bdevs": 2, 00:18:04.333 "num_base_bdevs_discovered": 0, 00:18:04.333 "num_base_bdevs_operational": 2, 00:18:04.333 "base_bdevs_list": [ 
00:18:04.333 { 00:18:04.333 "name": "BaseBdev1", 00:18:04.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.333 "is_configured": false, 00:18:04.333 "data_offset": 0, 00:18:04.333 "data_size": 0 00:18:04.333 }, 00:18:04.333 { 00:18:04.333 "name": "BaseBdev2", 00:18:04.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.333 "is_configured": false, 00:18:04.333 "data_offset": 0, 00:18:04.333 "data_size": 0 00:18:04.333 } 00:18:04.333 ] 00:18:04.333 }' 00:18:04.333 00:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:04.333 00:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.902 00:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:04.903 [2024-07-25 00:44:27.544899] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:04.903 [2024-07-25 00:44:27.544945] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:05.162 00:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:05.162 [2024-07-25 00:44:27.724942] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:05.162 [2024-07-25 00:44:27.725026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:05.162 [2024-07-25 00:44:27.725036] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:05.162 [2024-07-25 00:44:27.725060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:05.162 00:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:05.421 [2024-07-25 00:44:27.985744] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:05.421 BaseBdev1 00:18:05.421 00:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:05.421 00:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:05.421 00:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:05.421 00:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:05.421 00:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:05.421 00:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:05.421 00:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:05.680 00:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:05.939 [ 00:18:05.939 { 00:18:05.939 "name": "BaseBdev1", 00:18:05.939 "aliases": [ 00:18:05.939 "6d045acb-2800-4588-8967-26484716cee6" 00:18:05.939 ], 00:18:05.939 "product_name": "Malloc disk", 00:18:05.939 "block_size": 512, 00:18:05.939 "num_blocks": 
65536, 00:18:05.939 "uuid": "6d045acb-2800-4588-8967-26484716cee6", 00:18:05.939 "assigned_rate_limits": { 00:18:05.939 "rw_ios_per_sec": 0, 00:18:05.939 "rw_mbytes_per_sec": 0, 00:18:05.939 "r_mbytes_per_sec": 0, 00:18:05.939 "w_mbytes_per_sec": 0 00:18:05.939 }, 00:18:05.939 "claimed": true, 00:18:05.939 "claim_type": "exclusive_write", 00:18:05.939 "zoned": false, 00:18:05.939 "supported_io_types": { 00:18:05.939 "read": true, 00:18:05.939 "write": true, 00:18:05.939 "unmap": true, 00:18:05.939 "flush": true, 00:18:05.939 "reset": true, 00:18:05.939 "nvme_admin": false, 00:18:05.939 "nvme_io": false, 00:18:05.939 "nvme_io_md": false, 00:18:05.939 "write_zeroes": true, 00:18:05.939 "zcopy": true, 00:18:05.939 "get_zone_info": false, 00:18:05.939 "zone_management": false, 00:18:05.939 "zone_append": false, 00:18:05.939 "compare": false, 00:18:05.939 "compare_and_write": false, 00:18:05.939 "abort": true, 00:18:05.939 "seek_hole": false, 00:18:05.939 "seek_data": false, 00:18:05.939 "copy": true, 00:18:05.939 "nvme_iov_md": false 00:18:05.939 }, 00:18:05.939 "memory_domains": [ 00:18:05.939 { 00:18:05.939 "dma_device_id": "system", 00:18:05.939 "dma_device_type": 1 00:18:05.939 }, 00:18:05.939 { 00:18:05.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.939 "dma_device_type": 2 00:18:05.939 } 00:18:05.940 ], 00:18:05.940 "driver_specific": {} 00:18:05.940 } 00:18:05.940 ] 00:18:05.940 00:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:05.940 00:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:05.940 00:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:05.940 00:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:05.940 00:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:05.940 00:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:05.940 00:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:05.940 00:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:05.940 00:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:05.940 00:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:05.940 00:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:05.940 00:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.940 00:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.940 00:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:05.940 "name": "Existed_Raid", 00:18:05.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.940 "strip_size_kb": 0, 00:18:05.940 "state": "configuring", 00:18:05.940 "raid_level": "raid1", 00:18:05.940 "superblock": false, 00:18:05.940 "num_base_bdevs": 2, 00:18:05.940 "num_base_bdevs_discovered": 1, 00:18:05.940 "num_base_bdevs_operational": 2, 00:18:05.940 "base_bdevs_list": [ 00:18:05.940 { 00:18:05.940 "name": "BaseBdev1", 00:18:05.940 "uuid": 
"6d045acb-2800-4588-8967-26484716cee6", 00:18:05.940 "is_configured": true, 00:18:05.940 "data_offset": 0, 00:18:05.940 "data_size": 65536 00:18:05.940 }, 00:18:05.940 { 00:18:05.940 "name": "BaseBdev2", 00:18:05.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.940 "is_configured": false, 00:18:05.940 "data_offset": 0, 00:18:05.940 "data_size": 0 00:18:05.940 } 00:18:05.940 ] 00:18:05.940 }' 00:18:05.940 00:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:05.940 00:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.508 00:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:06.768 [2024-07-25 00:44:29.218003] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:06.768 [2024-07-25 00:44:29.218079] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:18:06.768 00:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:06.768 [2024-07-25 00:44:29.414051] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:06.768 [2024-07-25 00:44:29.416008] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:06.768 [2024-07-25 00:44:29.416084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:07.027 00:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:07.027 00:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:07.027 00:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:07.027 00:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:07.027 00:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:07.027 00:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:07.027 00:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:07.027 00:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:07.027 00:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:07.027 00:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:07.027 00:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:07.027 00:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:07.027 00:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.027 00:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:07.301 00:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:07.301 "name": "Existed_Raid", 00:18:07.301 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:07.301 "strip_size_kb": 0, 00:18:07.301 "state": "configuring", 00:18:07.301 "raid_level": "raid1", 00:18:07.301 "superblock": false, 00:18:07.301 "num_base_bdevs": 2, 00:18:07.301 "num_base_bdevs_discovered": 1, 00:18:07.301 "num_base_bdevs_operational": 2, 00:18:07.301 "base_bdevs_list": [ 00:18:07.301 { 00:18:07.301 "name": "BaseBdev1", 00:18:07.301 "uuid": "6d045acb-2800-4588-8967-26484716cee6", 00:18:07.301 "is_configured": true, 00:18:07.301 "data_offset": 0, 00:18:07.301 "data_size": 65536 00:18:07.301 }, 00:18:07.301 { 00:18:07.301 "name": "BaseBdev2", 00:18:07.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.301 "is_configured": false, 00:18:07.301 "data_offset": 0, 00:18:07.301 "data_size": 0 00:18:07.301 } 00:18:07.301 ] 00:18:07.301 }' 00:18:07.301 00:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:07.301 00:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.870 00:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:07.870 [2024-07-25 00:44:30.444715] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:07.870 [2024-07-25 00:44:30.444779] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:18:07.870 [2024-07-25 00:44:30.444788] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:07.870 [2024-07-25 00:44:30.444918] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:18:07.870 [2024-07-25 00:44:30.445258] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:18:07.870 [2024-07-25 00:44:30.445270] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:18:07.870 [2024-07-25 00:44:30.445516] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.870 BaseBdev2 00:18:07.870 00:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:07.870 00:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:07.870 00:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:07.870 00:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:07.870 00:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:07.870 00:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:07.870 00:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:08.129 00:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:08.389 [ 00:18:08.389 { 00:18:08.389 "name": "BaseBdev2", 00:18:08.389 "aliases": [ 00:18:08.389 "d62af279-639b-4ab1-ae63-015fd4bc188b" 00:18:08.389 ], 00:18:08.389 "product_name": "Malloc disk", 00:18:08.389 "block_size": 512, 00:18:08.389 "num_blocks": 65536, 00:18:08.389 "uuid": "d62af279-639b-4ab1-ae63-015fd4bc188b", 00:18:08.389 
"assigned_rate_limits": { 00:18:08.389 "rw_ios_per_sec": 0, 00:18:08.389 "rw_mbytes_per_sec": 0, 00:18:08.389 "r_mbytes_per_sec": 0, 00:18:08.389 "w_mbytes_per_sec": 0 00:18:08.389 }, 00:18:08.389 "claimed": true, 00:18:08.389 "claim_type": "exclusive_write", 00:18:08.389 "zoned": false, 00:18:08.389 "supported_io_types": { 00:18:08.389 "read": true, 00:18:08.389 "write": true, 00:18:08.389 "unmap": true, 00:18:08.389 "flush": true, 00:18:08.389 "reset": true, 00:18:08.389 "nvme_admin": false, 00:18:08.389 "nvme_io": false, 00:18:08.389 "nvme_io_md": false, 00:18:08.389 "write_zeroes": true, 00:18:08.389 "zcopy": true, 00:18:08.389 "get_zone_info": false, 00:18:08.389 "zone_management": false, 00:18:08.389 "zone_append": false, 00:18:08.389 "compare": false, 00:18:08.389 "compare_and_write": false, 00:18:08.389 "abort": true, 00:18:08.389 "seek_hole": false, 00:18:08.389 "seek_data": false, 00:18:08.389 "copy": true, 00:18:08.389 "nvme_iov_md": false 00:18:08.389 }, 00:18:08.389 "memory_domains": [ 00:18:08.389 { 00:18:08.389 "dma_device_id": "system", 00:18:08.389 "dma_device_type": 1 00:18:08.389 }, 00:18:08.389 { 00:18:08.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.389 "dma_device_type": 2 00:18:08.389 } 00:18:08.389 ], 00:18:08.389 "driver_specific": {} 00:18:08.389 } 00:18:08.389 ] 00:18:08.389 00:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:08.389 00:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:08.389 00:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:08.389 00:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:08.389 00:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:08.389 00:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:08.389 00:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:08.389 00:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:08.389 00:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:08.389 00:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:08.389 00:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:08.389 00:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:08.389 00:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:08.389 00:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.389 00:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.649 00:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:08.649 "name": "Existed_Raid", 00:18:08.649 "uuid": "dfd079a1-ef82-4249-9f28-2367e9259e0f", 00:18:08.649 "strip_size_kb": 0, 00:18:08.649 "state": "online", 00:18:08.649 "raid_level": "raid1", 00:18:08.649 "superblock": false, 00:18:08.649 "num_base_bdevs": 2, 00:18:08.649 "num_base_bdevs_discovered": 2, 00:18:08.649 "num_base_bdevs_operational": 
2, 00:18:08.649 "base_bdevs_list": [ 00:18:08.649 { 00:18:08.649 "name": "BaseBdev1", 00:18:08.649 "uuid": "6d045acb-2800-4588-8967-26484716cee6", 00:18:08.649 "is_configured": true, 00:18:08.649 "data_offset": 0, 00:18:08.649 "data_size": 65536 00:18:08.649 }, 00:18:08.649 { 00:18:08.649 "name": "BaseBdev2", 00:18:08.649 "uuid": "d62af279-639b-4ab1-ae63-015fd4bc188b", 00:18:08.649 "is_configured": true, 00:18:08.649 "data_offset": 0, 00:18:08.649 "data_size": 65536 00:18:08.649 } 00:18:08.649 ] 00:18:08.649 }' 00:18:08.649 00:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:08.649 00:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.217 00:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:09.217 00:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:09.217 00:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:09.217 00:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:09.217 00:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:09.217 00:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:09.217 00:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:09.217 00:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:09.217 [2024-07-25 00:44:31.833249] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:09.217 00:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:09.217 "name": "Existed_Raid", 00:18:09.217 "aliases": [ 00:18:09.217 "dfd079a1-ef82-4249-9f28-2367e9259e0f" 00:18:09.217 ], 00:18:09.217 "product_name": "Raid Volume", 00:18:09.217 "block_size": 512, 00:18:09.217 "num_blocks": 65536, 00:18:09.217 "uuid": "dfd079a1-ef82-4249-9f28-2367e9259e0f", 00:18:09.217 "assigned_rate_limits": { 00:18:09.217 "rw_ios_per_sec": 0, 00:18:09.217 "rw_mbytes_per_sec": 0, 00:18:09.217 "r_mbytes_per_sec": 0, 00:18:09.217 "w_mbytes_per_sec": 0 00:18:09.217 }, 00:18:09.217 "claimed": false, 00:18:09.217 "zoned": false, 00:18:09.217 "supported_io_types": { 00:18:09.217 "read": true, 00:18:09.217 "write": true, 00:18:09.217 "unmap": false, 00:18:09.217 "flush": false, 00:18:09.217 "reset": true, 00:18:09.217 "nvme_admin": false, 00:18:09.217 "nvme_io": false, 00:18:09.217 "nvme_io_md": false, 00:18:09.217 "write_zeroes": true, 00:18:09.217 "zcopy": false, 00:18:09.217 "get_zone_info": false, 00:18:09.217 "zone_management": false, 00:18:09.217 "zone_append": false, 00:18:09.217 "compare": false, 00:18:09.217 "compare_and_write": false, 00:18:09.217 "abort": false, 00:18:09.217 "seek_hole": false, 00:18:09.217 "seek_data": false, 00:18:09.217 "copy": false, 00:18:09.217 "nvme_iov_md": false 00:18:09.217 }, 00:18:09.217 "memory_domains": [ 00:18:09.217 { 00:18:09.217 "dma_device_id": "system", 00:18:09.217 "dma_device_type": 1 00:18:09.217 }, 00:18:09.217 { 00:18:09.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.217 "dma_device_type": 2 00:18:09.217 }, 00:18:09.217 { 00:18:09.217 "dma_device_id": "system", 00:18:09.217 "dma_device_type": 1 00:18:09.217 }, 00:18:09.217 { 00:18:09.217 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.217 "dma_device_type": 2 00:18:09.217 } 00:18:09.217 ], 00:18:09.217 "driver_specific": { 00:18:09.217 "raid": { 00:18:09.217 "uuid": "dfd079a1-ef82-4249-9f28-2367e9259e0f", 00:18:09.217 "strip_size_kb": 0, 00:18:09.217 "state": "online", 00:18:09.217 "raid_level": "raid1", 00:18:09.217 "superblock": false, 00:18:09.217 "num_base_bdevs": 2, 00:18:09.217 "num_base_bdevs_discovered": 2, 00:18:09.217 "num_base_bdevs_operational": 2, 00:18:09.217 "base_bdevs_list": [ 00:18:09.217 { 00:18:09.217 "name": "BaseBdev1", 00:18:09.217 "uuid": "6d045acb-2800-4588-8967-26484716cee6", 00:18:09.217 "is_configured": true, 00:18:09.217 "data_offset": 0, 00:18:09.217 "data_size": 65536 00:18:09.217 }, 00:18:09.217 { 00:18:09.217 "name": "BaseBdev2", 00:18:09.217 "uuid": "d62af279-639b-4ab1-ae63-015fd4bc188b", 00:18:09.217 "is_configured": true, 00:18:09.217 "data_offset": 0, 00:18:09.217 "data_size": 65536 00:18:09.217 } 00:18:09.217 ] 00:18:09.218 } 00:18:09.218 } 00:18:09.218 }' 00:18:09.218 00:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:09.476 00:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:09.476 BaseBdev2' 00:18:09.476 00:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:09.476 00:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:09.476 00:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:09.734 00:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:09.734 "name": "BaseBdev1", 00:18:09.734 "aliases": [ 00:18:09.734 "6d045acb-2800-4588-8967-26484716cee6" 00:18:09.734 ], 00:18:09.734 "product_name": "Malloc disk", 00:18:09.734 "block_size": 512, 00:18:09.734 "num_blocks": 65536, 00:18:09.734 "uuid": "6d045acb-2800-4588-8967-26484716cee6", 00:18:09.734 "assigned_rate_limits": { 00:18:09.734 "rw_ios_per_sec": 0, 00:18:09.734 "rw_mbytes_per_sec": 0, 00:18:09.734 "r_mbytes_per_sec": 0, 00:18:09.734 "w_mbytes_per_sec": 0 00:18:09.735 }, 00:18:09.735 "claimed": true, 00:18:09.735 "claim_type": "exclusive_write", 00:18:09.735 "zoned": false, 00:18:09.735 "supported_io_types": { 00:18:09.735 "read": true, 00:18:09.735 "write": true, 00:18:09.735 "unmap": true, 00:18:09.735 "flush": true, 00:18:09.735 "reset": true, 00:18:09.735 "nvme_admin": false, 00:18:09.735 "nvme_io": false, 00:18:09.735 "nvme_io_md": false, 00:18:09.735 "write_zeroes": true, 00:18:09.735 "zcopy": true, 00:18:09.735 "get_zone_info": false, 00:18:09.735 "zone_management": false, 00:18:09.735 "zone_append": false, 00:18:09.735 "compare": false, 00:18:09.735 "compare_and_write": false, 00:18:09.735 "abort": true, 00:18:09.735 "seek_hole": false, 00:18:09.735 "seek_data": false, 00:18:09.735 "copy": true, 00:18:09.735 "nvme_iov_md": false 00:18:09.735 }, 00:18:09.735 "memory_domains": [ 00:18:09.735 { 00:18:09.735 "dma_device_id": "system", 00:18:09.735 "dma_device_type": 1 00:18:09.735 }, 00:18:09.735 { 00:18:09.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.735 "dma_device_type": 2 00:18:09.735 } 00:18:09.735 ], 00:18:09.735 "driver_specific": {} 00:18:09.735 }' 00:18:09.735 00:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:18:09.735 00:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:09.735 00:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:09.735 00:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:09.735 00:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:09.735 00:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:09.735 00:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:09.735 00:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:09.735 00:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:09.993 00:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:09.993 00:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:09.993 00:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:09.993 00:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:09.993 00:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:09.993 00:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:10.252 00:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:10.252 "name": "BaseBdev2", 00:18:10.252 "aliases": [ 00:18:10.252 "d62af279-639b-4ab1-ae63-015fd4bc188b" 00:18:10.252 ], 00:18:10.252 "product_name": "Malloc disk", 00:18:10.252 "block_size": 512, 00:18:10.252 "num_blocks": 65536, 00:18:10.252 "uuid": "d62af279-639b-4ab1-ae63-015fd4bc188b", 00:18:10.252 "assigned_rate_limits": { 00:18:10.252 "rw_ios_per_sec": 0, 00:18:10.252 "rw_mbytes_per_sec": 0, 00:18:10.252 "r_mbytes_per_sec": 0, 00:18:10.252 "w_mbytes_per_sec": 0 00:18:10.252 }, 00:18:10.252 "claimed": true, 00:18:10.252 "claim_type": "exclusive_write", 00:18:10.252 "zoned": false, 00:18:10.252 "supported_io_types": { 00:18:10.252 "read": true, 00:18:10.252 "write": true, 00:18:10.252 "unmap": true, 00:18:10.252 "flush": true, 00:18:10.252 "reset": true, 00:18:10.252 "nvme_admin": false, 00:18:10.252 "nvme_io": false, 00:18:10.252 "nvme_io_md": false, 00:18:10.252 "write_zeroes": true, 00:18:10.252 "zcopy": true, 00:18:10.252 "get_zone_info": false, 00:18:10.252 "zone_management": false, 00:18:10.252 "zone_append": false, 00:18:10.252 "compare": false, 00:18:10.252 "compare_and_write": false, 00:18:10.252 "abort": true, 00:18:10.252 "seek_hole": false, 00:18:10.252 "seek_data": false, 00:18:10.252 "copy": true, 00:18:10.252 "nvme_iov_md": false 00:18:10.252 }, 00:18:10.252 "memory_domains": [ 00:18:10.252 { 00:18:10.252 "dma_device_id": "system", 00:18:10.252 "dma_device_type": 1 00:18:10.252 }, 00:18:10.252 { 00:18:10.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.252 "dma_device_type": 2 00:18:10.252 } 00:18:10.252 ], 00:18:10.252 "driver_specific": {} 00:18:10.252 }' 00:18:10.252 00:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:10.252 00:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:10.252 00:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # 
[[ 512 == 512 ]] 00:18:10.252 00:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:10.511 00:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:10.511 00:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:10.511 00:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:10.511 00:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:10.511 00:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:10.511 00:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:10.511 00:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:10.769 00:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:10.769 00:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:11.033 [2024-07-25 00:44:33.429450] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:11.033 00:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:11.033 00:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:18:11.033 00:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:11.033 00:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:11.033 00:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:18:11.033 00:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:11.033 00:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:11.033 00:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:11.033 00:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:11.033 00:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:11.033 00:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:11.033 00:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:11.033 00:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:11.033 00:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:11.033 00:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:11.033 00:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.033 00:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.304 00:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:11.304 "name": "Existed_Raid", 00:18:11.304 "uuid": "dfd079a1-ef82-4249-9f28-2367e9259e0f", 00:18:11.304 "strip_size_kb": 0, 00:18:11.304 "state": "online", 00:18:11.304 "raid_level": "raid1", 00:18:11.304 "superblock": false, 
00:18:11.304 "num_base_bdevs": 2, 00:18:11.304 "num_base_bdevs_discovered": 1, 00:18:11.304 "num_base_bdevs_operational": 1, 00:18:11.304 "base_bdevs_list": [ 00:18:11.304 { 00:18:11.304 "name": null, 00:18:11.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.304 "is_configured": false, 00:18:11.304 "data_offset": 0, 00:18:11.304 "data_size": 65536 00:18:11.304 }, 00:18:11.304 { 00:18:11.304 "name": "BaseBdev2", 00:18:11.304 "uuid": "d62af279-639b-4ab1-ae63-015fd4bc188b", 00:18:11.304 "is_configured": true, 00:18:11.304 "data_offset": 0, 00:18:11.304 "data_size": 65536 00:18:11.304 } 00:18:11.304 ] 00:18:11.304 }' 00:18:11.304 00:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:11.304 00:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.870 00:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:11.870 00:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:11.871 00:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.871 00:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:12.437 00:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:12.437 00:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:12.437 00:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:12.437 [2024-07-25 00:44:35.063503] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:12.437 [2024-07-25 00:44:35.063605] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:12.696 [2024-07-25 00:44:35.166863] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:12.696 [2024-07-25 00:44:35.166920] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:12.696 [2024-07-25 00:44:35.166930] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:18:12.696 00:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:12.696 00:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:12.696 00:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.696 00:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:12.955 00:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:12.955 00:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:12.955 00:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:18:12.955 00:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 124603 00:18:12.955 00:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 124603 ']' 00:18:12.955 00:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # 
kill -0 124603 00:18:12.955 00:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:18:12.955 00:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:12.955 00:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124603 00:18:12.955 killing process with pid 124603 00:18:12.955 00:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:12.955 00:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:12.955 00:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124603' 00:18:12.955 00:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 124603 00:18:12.955 [2024-07-25 00:44:35.500885] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:12.955 00:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 124603 00:18:12.955 [2024-07-25 00:44:35.501026] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:14.445 00:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:18:14.445 00:18:14.445 real 0m11.647s 00:18:14.445 user 0m19.898s 00:18:14.445 sys 0m1.640s 00:18:14.445 00:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:14.445 ************************************ 00:18:14.445 END TEST raid_state_function_test 00:18:14.445 ************************************ 00:18:14.445 00:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.445 00:44:36 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:18:14.445 00:44:36 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:14.445 00:44:36 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:14.445 00:44:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:14.445 ************************************ 00:18:14.445 START TEST raid_state_function_test_sb 00:18:14.445 ************************************ 00:18:14.445 00:44:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:18:14.445 00:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:18:14.445 00:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:18:14.445 00:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:18:14.445 00:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:14.445 00:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:14.445 00:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:14.445 00:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:18:14.445 00:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:14.445 00:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:14.445 00:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:18:14.445 00:44:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:14.445 00:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:14.445 00:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:14.445 00:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:14.445 00:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:14.445 00:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:14.445 00:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:14.445 00:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:14.445 00:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:18:14.445 00:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:18:14.446 00:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:18:14.446 00:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:18:14.446 00:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=124979 00:18:14.446 Process raid pid: 124979 00:18:14.446 00:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 124979' 00:18:14.446 00:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:14.446 00:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 124979 /var/tmp/spdk-raid.sock 00:18:14.446 00:44:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 124979 ']' 00:18:14.446 00:44:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:14.446 00:44:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:14.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:14.446 00:44:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:14.446 00:44:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:14.446 00:44:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.446 [2024-07-25 00:44:37.036297] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:18:14.446 [2024-07-25 00:44:37.036623] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.705 [2024-07-25 00:44:37.223069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.964 [2024-07-25 00:44:37.424037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.222 [2024-07-25 00:44:37.627316] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:15.481 00:44:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:15.481 00:44:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:18:15.481 00:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:15.741 [2024-07-25 00:44:38.224865] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:15.741 [2024-07-25 00:44:38.224954] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:15.741 [2024-07-25 00:44:38.224966] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:15.741 [2024-07-25 00:44:38.224999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:15.741 00:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:15.741 00:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:15.741 00:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:15.741 00:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:15.741 00:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:15.741 00:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:15.741 00:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:15.741 00:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:15.741 00:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:15.741 00:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:15.741 00:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.741 00:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.999 00:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:15.999 "name": "Existed_Raid", 00:18:15.999 "uuid": "e190dcc0-5140-41c7-a240-399b9e2cc68a", 00:18:15.999 "strip_size_kb": 0, 00:18:15.999 "state": "configuring", 00:18:15.999 "raid_level": "raid1", 00:18:15.999 "superblock": true, 00:18:15.999 "num_base_bdevs": 2, 00:18:15.999 "num_base_bdevs_discovered": 0, 00:18:15.999 
"num_base_bdevs_operational": 2, 00:18:15.999 "base_bdevs_list": [ 00:18:15.999 { 00:18:15.999 "name": "BaseBdev1", 00:18:15.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.999 "is_configured": false, 00:18:15.999 "data_offset": 0, 00:18:15.999 "data_size": 0 00:18:15.999 }, 00:18:15.999 { 00:18:15.999 "name": "BaseBdev2", 00:18:15.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.999 "is_configured": false, 00:18:15.999 "data_offset": 0, 00:18:15.999 "data_size": 0 00:18:15.999 } 00:18:15.999 ] 00:18:15.999 }' 00:18:15.999 00:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:15.999 00:44:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.567 00:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:16.827 [2024-07-25 00:44:39.228917] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:16.827 [2024-07-25 00:44:39.228957] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:16.827 00:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:17.086 [2024-07-25 00:44:39.497006] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:17.086 [2024-07-25 00:44:39.497075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:17.086 [2024-07-25 00:44:39.497085] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:17.086 [2024-07-25 00:44:39.497107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:17.086 00:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:17.086 [2024-07-25 00:44:39.719350] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:17.086 BaseBdev1 00:18:17.086 00:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:17.086 00:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:17.086 00:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:17.086 00:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:17.086 00:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:17.086 00:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:17.086 00:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:17.345 00:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:17.604 [ 00:18:17.604 { 00:18:17.604 "name": "BaseBdev1", 00:18:17.604 "aliases": [ 00:18:17.604 "465fa25d-4872-4c7a-95a4-6f4bf24f6e6f" 
00:18:17.604 ], 00:18:17.604 "product_name": "Malloc disk", 00:18:17.604 "block_size": 512, 00:18:17.604 "num_blocks": 65536, 00:18:17.604 "uuid": "465fa25d-4872-4c7a-95a4-6f4bf24f6e6f", 00:18:17.604 "assigned_rate_limits": { 00:18:17.604 "rw_ios_per_sec": 0, 00:18:17.604 "rw_mbytes_per_sec": 0, 00:18:17.604 "r_mbytes_per_sec": 0, 00:18:17.604 "w_mbytes_per_sec": 0 00:18:17.604 }, 00:18:17.604 "claimed": true, 00:18:17.604 "claim_type": "exclusive_write", 00:18:17.604 "zoned": false, 00:18:17.604 "supported_io_types": { 00:18:17.604 "read": true, 00:18:17.604 "write": true, 00:18:17.604 "unmap": true, 00:18:17.604 "flush": true, 00:18:17.604 "reset": true, 00:18:17.604 "nvme_admin": false, 00:18:17.604 "nvme_io": false, 00:18:17.604 "nvme_io_md": false, 00:18:17.604 "write_zeroes": true, 00:18:17.604 "zcopy": true, 00:18:17.604 "get_zone_info": false, 00:18:17.604 "zone_management": false, 00:18:17.604 "zone_append": false, 00:18:17.604 "compare": false, 00:18:17.604 "compare_and_write": false, 00:18:17.604 "abort": true, 00:18:17.604 "seek_hole": false, 00:18:17.604 "seek_data": false, 00:18:17.604 "copy": true, 00:18:17.604 "nvme_iov_md": false 00:18:17.604 }, 00:18:17.604 "memory_domains": [ 00:18:17.604 { 00:18:17.604 "dma_device_id": "system", 00:18:17.604 "dma_device_type": 1 00:18:17.604 }, 00:18:17.604 { 00:18:17.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.604 "dma_device_type": 2 00:18:17.604 } 00:18:17.604 ], 00:18:17.604 "driver_specific": {} 00:18:17.604 } 00:18:17.604 ] 00:18:17.604 00:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:17.604 00:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:17.604 00:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:17.604 00:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:17.604 00:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:17.604 00:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:17.605 00:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:17.605 00:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:17.605 00:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:17.605 00:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:17.605 00:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:17.605 00:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.605 00:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.864 00:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:17.864 "name": "Existed_Raid", 00:18:17.864 "uuid": "9cc6a3ae-3976-4c82-8103-d6a27f41819c", 00:18:17.864 "strip_size_kb": 0, 00:18:17.864 "state": "configuring", 00:18:17.864 "raid_level": "raid1", 00:18:17.864 "superblock": true, 00:18:17.864 "num_base_bdevs": 2, 00:18:17.864 "num_base_bdevs_discovered": 
1, 00:18:17.864 "num_base_bdevs_operational": 2, 00:18:17.864 "base_bdevs_list": [ 00:18:17.864 { 00:18:17.864 "name": "BaseBdev1", 00:18:17.864 "uuid": "465fa25d-4872-4c7a-95a4-6f4bf24f6e6f", 00:18:17.864 "is_configured": true, 00:18:17.864 "data_offset": 2048, 00:18:17.864 "data_size": 63488 00:18:17.864 }, 00:18:17.864 { 00:18:17.864 "name": "BaseBdev2", 00:18:17.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.864 "is_configured": false, 00:18:17.864 "data_offset": 0, 00:18:17.864 "data_size": 0 00:18:17.864 } 00:18:17.864 ] 00:18:17.864 }' 00:18:17.864 00:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:17.864 00:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.430 00:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:18.687 [2024-07-25 00:44:41.323730] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:18.687 [2024-07-25 00:44:41.323809] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:18:18.945 00:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:18.945 [2024-07-25 00:44:41.587888] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:18.945 [2024-07-25 00:44:41.590662] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:18.945 [2024-07-25 00:44:41.590735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:19.202 00:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:19.202 00:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:19.202 00:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:19.202 00:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:19.202 00:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:19.202 00:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:19.202 00:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:19.202 00:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:19.202 00:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:19.202 00:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:19.202 00:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:19.202 00:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:19.202 00:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.202 00:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:18:19.460 00:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:19.460 "name": "Existed_Raid", 00:18:19.460 "uuid": "9c324b30-a812-4b84-8104-8c51221966ce", 00:18:19.460 "strip_size_kb": 0, 00:18:19.460 "state": "configuring", 00:18:19.460 "raid_level": "raid1", 00:18:19.460 "superblock": true, 00:18:19.460 "num_base_bdevs": 2, 00:18:19.460 "num_base_bdevs_discovered": 1, 00:18:19.460 "num_base_bdevs_operational": 2, 00:18:19.460 "base_bdevs_list": [ 00:18:19.460 { 00:18:19.460 "name": "BaseBdev1", 00:18:19.460 "uuid": "465fa25d-4872-4c7a-95a4-6f4bf24f6e6f", 00:18:19.460 "is_configured": true, 00:18:19.460 "data_offset": 2048, 00:18:19.460 "data_size": 63488 00:18:19.460 }, 00:18:19.460 { 00:18:19.460 "name": "BaseBdev2", 00:18:19.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.460 "is_configured": false, 00:18:19.460 "data_offset": 0, 00:18:19.460 "data_size": 0 00:18:19.460 } 00:18:19.460 ] 00:18:19.460 }' 00:18:19.460 00:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:19.460 00:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.392 00:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:20.651 [2024-07-25 00:44:43.126790] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:20.651 [2024-07-25 00:44:43.127109] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:18:20.651 [2024-07-25 00:44:43.127131] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:20.651 [2024-07-25 00:44:43.127311] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:18:20.651 [2024-07-25 00:44:43.127785] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:18:20.651 [2024-07-25 00:44:43.127820] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:18:20.651 [2024-07-25 00:44:43.128043] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.651 BaseBdev2 00:18:20.651 00:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:20.651 00:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:20.651 00:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:20.651 00:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:20.651 00:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:20.651 00:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:20.651 00:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:20.907 00:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:21.163 [ 00:18:21.163 { 00:18:21.163 "name": "BaseBdev2", 00:18:21.163 "aliases": [ 00:18:21.163 
"b28de414-57bf-4964-8f38-c34b7b3b0ff6" 00:18:21.163 ], 00:18:21.163 "product_name": "Malloc disk", 00:18:21.163 "block_size": 512, 00:18:21.163 "num_blocks": 65536, 00:18:21.163 "uuid": "b28de414-57bf-4964-8f38-c34b7b3b0ff6", 00:18:21.163 "assigned_rate_limits": { 00:18:21.163 "rw_ios_per_sec": 0, 00:18:21.163 "rw_mbytes_per_sec": 0, 00:18:21.163 "r_mbytes_per_sec": 0, 00:18:21.163 "w_mbytes_per_sec": 0 00:18:21.163 }, 00:18:21.163 "claimed": true, 00:18:21.163 "claim_type": "exclusive_write", 00:18:21.163 "zoned": false, 00:18:21.163 "supported_io_types": { 00:18:21.163 "read": true, 00:18:21.163 "write": true, 00:18:21.163 "unmap": true, 00:18:21.163 "flush": true, 00:18:21.163 "reset": true, 00:18:21.163 "nvme_admin": false, 00:18:21.163 "nvme_io": false, 00:18:21.163 "nvme_io_md": false, 00:18:21.163 "write_zeroes": true, 00:18:21.163 "zcopy": true, 00:18:21.163 "get_zone_info": false, 00:18:21.163 "zone_management": false, 00:18:21.163 "zone_append": false, 00:18:21.163 "compare": false, 00:18:21.163 "compare_and_write": false, 00:18:21.163 "abort": true, 00:18:21.163 "seek_hole": false, 00:18:21.163 "seek_data": false, 00:18:21.163 "copy": true, 00:18:21.163 "nvme_iov_md": false 00:18:21.164 }, 00:18:21.164 "memory_domains": [ 00:18:21.164 { 00:18:21.164 "dma_device_id": "system", 00:18:21.164 "dma_device_type": 1 00:18:21.164 }, 00:18:21.164 { 00:18:21.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.164 "dma_device_type": 2 00:18:21.164 } 00:18:21.164 ], 00:18:21.164 "driver_specific": {} 00:18:21.164 } 00:18:21.164 ] 00:18:21.164 00:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:21.164 00:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:21.164 00:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:21.164 00:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:21.164 00:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:21.164 00:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:21.164 00:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:21.164 00:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:21.164 00:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:21.164 00:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:21.164 00:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:21.164 00:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:21.164 00:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:21.164 00:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.164 00:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.420 00:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:21.420 "name": "Existed_Raid", 00:18:21.420 "uuid": 
"9c324b30-a812-4b84-8104-8c51221966ce", 00:18:21.420 "strip_size_kb": 0, 00:18:21.420 "state": "online", 00:18:21.420 "raid_level": "raid1", 00:18:21.420 "superblock": true, 00:18:21.420 "num_base_bdevs": 2, 00:18:21.420 "num_base_bdevs_discovered": 2, 00:18:21.420 "num_base_bdevs_operational": 2, 00:18:21.420 "base_bdevs_list": [ 00:18:21.420 { 00:18:21.420 "name": "BaseBdev1", 00:18:21.420 "uuid": "465fa25d-4872-4c7a-95a4-6f4bf24f6e6f", 00:18:21.420 "is_configured": true, 00:18:21.420 "data_offset": 2048, 00:18:21.420 "data_size": 63488 00:18:21.420 }, 00:18:21.420 { 00:18:21.420 "name": "BaseBdev2", 00:18:21.420 "uuid": "b28de414-57bf-4964-8f38-c34b7b3b0ff6", 00:18:21.420 "is_configured": true, 00:18:21.420 "data_offset": 2048, 00:18:21.420 "data_size": 63488 00:18:21.420 } 00:18:21.420 ] 00:18:21.420 }' 00:18:21.420 00:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:21.420 00:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.983 00:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:21.983 00:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:21.983 00:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:21.983 00:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:21.983 00:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:21.983 00:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:21.983 00:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:21.983 00:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:22.241 [2024-07-25 00:44:44.659510] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:22.241 00:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:22.241 "name": "Existed_Raid", 00:18:22.241 "aliases": [ 00:18:22.241 "9c324b30-a812-4b84-8104-8c51221966ce" 00:18:22.241 ], 00:18:22.241 "product_name": "Raid Volume", 00:18:22.241 "block_size": 512, 00:18:22.241 "num_blocks": 63488, 00:18:22.241 "uuid": "9c324b30-a812-4b84-8104-8c51221966ce", 00:18:22.241 "assigned_rate_limits": { 00:18:22.241 "rw_ios_per_sec": 0, 00:18:22.241 "rw_mbytes_per_sec": 0, 00:18:22.241 "r_mbytes_per_sec": 0, 00:18:22.241 "w_mbytes_per_sec": 0 00:18:22.241 }, 00:18:22.241 "claimed": false, 00:18:22.241 "zoned": false, 00:18:22.241 "supported_io_types": { 00:18:22.241 "read": true, 00:18:22.241 "write": true, 00:18:22.241 "unmap": false, 00:18:22.241 "flush": false, 00:18:22.241 "reset": true, 00:18:22.241 "nvme_admin": false, 00:18:22.241 "nvme_io": false, 00:18:22.241 "nvme_io_md": false, 00:18:22.241 "write_zeroes": true, 00:18:22.241 "zcopy": false, 00:18:22.241 "get_zone_info": false, 00:18:22.241 "zone_management": false, 00:18:22.241 "zone_append": false, 00:18:22.241 "compare": false, 00:18:22.241 "compare_and_write": false, 00:18:22.241 "abort": false, 00:18:22.241 "seek_hole": false, 00:18:22.241 "seek_data": false, 00:18:22.241 "copy": false, 00:18:22.241 "nvme_iov_md": false 00:18:22.241 }, 00:18:22.241 "memory_domains": [ 00:18:22.241 { 00:18:22.241 
"dma_device_id": "system", 00:18:22.241 "dma_device_type": 1 00:18:22.241 }, 00:18:22.241 { 00:18:22.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.241 "dma_device_type": 2 00:18:22.241 }, 00:18:22.241 { 00:18:22.241 "dma_device_id": "system", 00:18:22.241 "dma_device_type": 1 00:18:22.241 }, 00:18:22.241 { 00:18:22.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.241 "dma_device_type": 2 00:18:22.241 } 00:18:22.241 ], 00:18:22.241 "driver_specific": { 00:18:22.241 "raid": { 00:18:22.241 "uuid": "9c324b30-a812-4b84-8104-8c51221966ce", 00:18:22.241 "strip_size_kb": 0, 00:18:22.241 "state": "online", 00:18:22.241 "raid_level": "raid1", 00:18:22.241 "superblock": true, 00:18:22.241 "num_base_bdevs": 2, 00:18:22.241 "num_base_bdevs_discovered": 2, 00:18:22.241 "num_base_bdevs_operational": 2, 00:18:22.241 "base_bdevs_list": [ 00:18:22.241 { 00:18:22.241 "name": "BaseBdev1", 00:18:22.241 "uuid": "465fa25d-4872-4c7a-95a4-6f4bf24f6e6f", 00:18:22.241 "is_configured": true, 00:18:22.241 "data_offset": 2048, 00:18:22.241 "data_size": 63488 00:18:22.241 }, 00:18:22.241 { 00:18:22.241 "name": "BaseBdev2", 00:18:22.241 "uuid": "b28de414-57bf-4964-8f38-c34b7b3b0ff6", 00:18:22.241 "is_configured": true, 00:18:22.241 "data_offset": 2048, 00:18:22.241 "data_size": 63488 00:18:22.241 } 00:18:22.241 ] 00:18:22.241 } 00:18:22.241 } 00:18:22.241 }' 00:18:22.241 00:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:22.241 00:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:22.241 BaseBdev2' 00:18:22.241 00:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:22.241 00:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:22.241 00:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:22.499 00:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:22.499 "name": "BaseBdev1", 00:18:22.499 "aliases": [ 00:18:22.499 "465fa25d-4872-4c7a-95a4-6f4bf24f6e6f" 00:18:22.499 ], 00:18:22.499 "product_name": "Malloc disk", 00:18:22.499 "block_size": 512, 00:18:22.499 "num_blocks": 65536, 00:18:22.499 "uuid": "465fa25d-4872-4c7a-95a4-6f4bf24f6e6f", 00:18:22.499 "assigned_rate_limits": { 00:18:22.499 "rw_ios_per_sec": 0, 00:18:22.499 "rw_mbytes_per_sec": 0, 00:18:22.499 "r_mbytes_per_sec": 0, 00:18:22.499 "w_mbytes_per_sec": 0 00:18:22.499 }, 00:18:22.499 "claimed": true, 00:18:22.499 "claim_type": "exclusive_write", 00:18:22.499 "zoned": false, 00:18:22.499 "supported_io_types": { 00:18:22.499 "read": true, 00:18:22.499 "write": true, 00:18:22.499 "unmap": true, 00:18:22.499 "flush": true, 00:18:22.499 "reset": true, 00:18:22.499 "nvme_admin": false, 00:18:22.499 "nvme_io": false, 00:18:22.499 "nvme_io_md": false, 00:18:22.499 "write_zeroes": true, 00:18:22.499 "zcopy": true, 00:18:22.499 "get_zone_info": false, 00:18:22.499 "zone_management": false, 00:18:22.499 "zone_append": false, 00:18:22.499 "compare": false, 00:18:22.499 "compare_and_write": false, 00:18:22.499 "abort": true, 00:18:22.499 "seek_hole": false, 00:18:22.499 "seek_data": false, 00:18:22.499 "copy": true, 00:18:22.499 "nvme_iov_md": false 00:18:22.499 }, 00:18:22.499 "memory_domains": [ 00:18:22.499 { 00:18:22.499 
"dma_device_id": "system", 00:18:22.499 "dma_device_type": 1 00:18:22.499 }, 00:18:22.499 { 00:18:22.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.499 "dma_device_type": 2 00:18:22.499 } 00:18:22.499 ], 00:18:22.499 "driver_specific": {} 00:18:22.499 }' 00:18:22.499 00:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:22.499 00:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:22.499 00:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:22.499 00:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:22.499 00:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:22.499 00:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:22.499 00:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:22.499 00:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:22.499 00:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:22.499 00:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:22.757 00:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:22.757 00:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:22.757 00:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:22.757 00:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:22.757 00:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:22.757 00:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:22.757 "name": "BaseBdev2", 00:18:22.757 "aliases": [ 00:18:22.757 "b28de414-57bf-4964-8f38-c34b7b3b0ff6" 00:18:22.757 ], 00:18:22.757 "product_name": "Malloc disk", 00:18:22.757 "block_size": 512, 00:18:22.757 "num_blocks": 65536, 00:18:22.757 "uuid": "b28de414-57bf-4964-8f38-c34b7b3b0ff6", 00:18:22.757 "assigned_rate_limits": { 00:18:22.757 "rw_ios_per_sec": 0, 00:18:22.757 "rw_mbytes_per_sec": 0, 00:18:22.757 "r_mbytes_per_sec": 0, 00:18:22.757 "w_mbytes_per_sec": 0 00:18:22.757 }, 00:18:22.757 "claimed": true, 00:18:22.757 "claim_type": "exclusive_write", 00:18:22.757 "zoned": false, 00:18:22.757 "supported_io_types": { 00:18:22.757 "read": true, 00:18:22.757 "write": true, 00:18:22.757 "unmap": true, 00:18:22.757 "flush": true, 00:18:22.757 "reset": true, 00:18:22.757 "nvme_admin": false, 00:18:22.757 "nvme_io": false, 00:18:22.757 "nvme_io_md": false, 00:18:22.757 "write_zeroes": true, 00:18:22.757 "zcopy": true, 00:18:22.757 "get_zone_info": false, 00:18:22.757 "zone_management": false, 00:18:22.757 "zone_append": false, 00:18:22.757 "compare": false, 00:18:22.757 "compare_and_write": false, 00:18:22.757 "abort": true, 00:18:22.757 "seek_hole": false, 00:18:22.757 "seek_data": false, 00:18:22.757 "copy": true, 00:18:22.757 "nvme_iov_md": false 00:18:22.757 }, 00:18:22.757 "memory_domains": [ 00:18:22.757 { 00:18:22.757 "dma_device_id": "system", 00:18:22.757 "dma_device_type": 1 00:18:22.757 }, 00:18:22.757 { 00:18:22.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:18:22.757 "dma_device_type": 2 00:18:22.757 } 00:18:22.757 ], 00:18:22.757 "driver_specific": {} 00:18:22.757 }' 00:18:22.757 00:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:23.015 00:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:23.015 00:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:23.015 00:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:23.015 00:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:23.015 00:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:23.015 00:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:23.015 00:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:23.015 00:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:23.015 00:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:23.272 00:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:23.272 00:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:23.272 00:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:23.273 [2024-07-25 00:44:45.891657] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:23.530 00:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:23.530 00:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:18:23.530 00:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:23.530 00:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:18:23.530 00:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:18:23.530 00:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:23.530 00:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:23.530 00:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:23.530 00:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:23.530 00:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:23.530 00:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:23.530 00:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:23.530 00:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:23.530 00:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:23.530 00:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:23.530 00:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:18:23.530 00:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.788 00:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:23.788 "name": "Existed_Raid", 00:18:23.788 "uuid": "9c324b30-a812-4b84-8104-8c51221966ce", 00:18:23.788 "strip_size_kb": 0, 00:18:23.788 "state": "online", 00:18:23.788 "raid_level": "raid1", 00:18:23.788 "superblock": true, 00:18:23.788 "num_base_bdevs": 2, 00:18:23.788 "num_base_bdevs_discovered": 1, 00:18:23.788 "num_base_bdevs_operational": 1, 00:18:23.788 "base_bdevs_list": [ 00:18:23.788 { 00:18:23.788 "name": null, 00:18:23.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.788 "is_configured": false, 00:18:23.788 "data_offset": 2048, 00:18:23.788 "data_size": 63488 00:18:23.788 }, 00:18:23.788 { 00:18:23.788 "name": "BaseBdev2", 00:18:23.788 "uuid": "b28de414-57bf-4964-8f38-c34b7b3b0ff6", 00:18:23.788 "is_configured": true, 00:18:23.788 "data_offset": 2048, 00:18:23.788 "data_size": 63488 00:18:23.788 } 00:18:23.788 ] 00:18:23.788 }' 00:18:23.788 00:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:23.788 00:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.354 00:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:24.354 00:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:24.354 00:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.354 00:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:24.612 00:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:24.612 00:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:24.612 00:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:24.870 [2024-07-25 00:44:47.264231] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:24.870 [2024-07-25 00:44:47.264349] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:24.870 [2024-07-25 00:44:47.368093] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:24.870 [2024-07-25 00:44:47.368148] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:24.870 [2024-07-25 00:44:47.368157] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:18:24.870 00:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:24.870 00:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:24.870 00:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.870 00:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:25.127 00:44:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:25.127 00:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:25.127 00:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:18:25.127 00:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 124979 00:18:25.127 00:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 124979 ']' 00:18:25.127 00:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 124979 00:18:25.127 00:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:18:25.127 00:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:25.127 00:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124979 00:18:25.127 00:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:25.127 00:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:25.127 00:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124979' 00:18:25.127 killing process with pid 124979 00:18:25.127 00:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 124979 00:18:25.127 [2024-07-25 00:44:47.710972] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:25.127 [2024-07-25 00:44:47.711106] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:25.127 00:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 124979 00:18:27.027 ************************************ 00:18:27.027 END TEST raid_state_function_test_sb 00:18:27.027 ************************************ 00:18:27.027 00:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:18:27.027 00:18:27.027 real 0m12.264s 00:18:27.027 user 0m20.831s 00:18:27.027 sys 0m1.791s 00:18:27.027 00:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:27.027 00:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.027 00:44:49 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:18:27.027 00:44:49 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:27.027 00:44:49 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:27.027 00:44:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:27.027 ************************************ 00:18:27.027 START TEST raid_superblock_test 00:18:27.027 ************************************ 00:18:27.027 00:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:18:27.027 00:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:18:27.027 00:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:18:27.027 00:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:18:27.027 00:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:18:27.027 00:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:18:27.027 00:44:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:18:27.027 00:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:18:27.027 00:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:18:27.027 00:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:18:27.027 00:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:18:27.027 00:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:18:27.027 00:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:18:27.027 00:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:18:27.027 00:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:18:27.027 00:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:18:27.027 00:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=125369 00:18:27.027 00:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 125369 /var/tmp/spdk-raid.sock 00:18:27.027 00:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:27.027 00:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 125369 ']' 00:18:27.027 00:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:27.027 00:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:27.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:27.027 00:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:27.027 00:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.027 00:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.027 [2024-07-25 00:44:49.348862] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:18:27.027 [2024-07-25 00:44:49.349094] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125369 ] 00:18:27.027 [2024-07-25 00:44:49.535940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.285 [2024-07-25 00:44:49.816324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.543 [2024-07-25 00:44:50.027122] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:27.801 00:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:27.801 00:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:18:27.801 00:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:18:27.801 00:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:27.801 00:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:18:27.801 00:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:18:27.801 00:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:27.801 00:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:27.801 00:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:27.801 00:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:27.801 00:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:27.801 malloc1 00:18:27.801 00:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:28.059 [2024-07-25 00:44:50.689157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:28.059 [2024-07-25 00:44:50.689277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.059 [2024-07-25 00:44:50.689319] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:18:28.059 [2024-07-25 00:44:50.689338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.059 [2024-07-25 00:44:50.691700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.059 [2024-07-25 00:44:50.691748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:28.059 pt1 00:18:28.317 00:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:28.317 00:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:28.317 00:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:18:28.317 00:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:18:28.317 00:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:28.317 00:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:18:28.317 00:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:28.317 00:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:28.317 00:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:28.317 malloc2 00:18:28.317 00:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:28.575 [2024-07-25 00:44:51.162616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:28.575 [2024-07-25 00:44:51.162756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.575 [2024-07-25 00:44:51.162807] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:18:28.575 [2024-07-25 00:44:51.162830] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.575 [2024-07-25 00:44:51.165519] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.575 [2024-07-25 00:44:51.165575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:28.575 pt2 00:18:28.575 00:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:28.575 00:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:28.575 00:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:18:28.833 [2024-07-25 00:44:51.390728] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:28.833 [2024-07-25 00:44:51.393046] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:28.833 [2024-07-25 00:44:51.393247] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:18:28.833 [2024-07-25 00:44:51.393260] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:28.833 [2024-07-25 00:44:51.393402] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:18:28.833 [2024-07-25 00:44:51.393791] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:18:28.833 [2024-07-25 00:44:51.393815] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:18:28.833 [2024-07-25 00:44:51.393996] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.833 00:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:28.833 00:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:28.833 00:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:28.833 00:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:28.833 00:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:28.833 00:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:18:28.833 00:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:28.833 00:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:28.833 00:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:28.833 00:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:28.833 00:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.833 00:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.097 00:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:29.097 "name": "raid_bdev1", 00:18:29.097 "uuid": "cd03d682-8154-41f7-85ee-fa036f1758c5", 00:18:29.097 "strip_size_kb": 0, 00:18:29.097 "state": "online", 00:18:29.097 "raid_level": "raid1", 00:18:29.097 "superblock": true, 00:18:29.097 "num_base_bdevs": 2, 00:18:29.097 "num_base_bdevs_discovered": 2, 00:18:29.097 "num_base_bdevs_operational": 2, 00:18:29.097 "base_bdevs_list": [ 00:18:29.097 { 00:18:29.097 "name": "pt1", 00:18:29.097 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:29.097 "is_configured": true, 00:18:29.097 "data_offset": 2048, 00:18:29.097 "data_size": 63488 00:18:29.097 }, 00:18:29.097 { 00:18:29.097 "name": "pt2", 00:18:29.097 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:29.097 "is_configured": true, 00:18:29.097 "data_offset": 2048, 00:18:29.097 "data_size": 63488 00:18:29.097 } 00:18:29.097 ] 00:18:29.097 }' 00:18:29.097 00:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:29.097 00:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.058 00:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:18:30.058 00:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:30.059 00:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:30.059 00:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:30.059 00:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:30.059 00:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:30.059 00:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:30.059 00:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:30.059 [2024-07-25 00:44:52.647486] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:30.059 00:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:30.059 "name": "raid_bdev1", 00:18:30.059 "aliases": [ 00:18:30.059 "cd03d682-8154-41f7-85ee-fa036f1758c5" 00:18:30.059 ], 00:18:30.059 "product_name": "Raid Volume", 00:18:30.059 "block_size": 512, 00:18:30.059 "num_blocks": 63488, 00:18:30.059 "uuid": "cd03d682-8154-41f7-85ee-fa036f1758c5", 00:18:30.059 "assigned_rate_limits": { 00:18:30.059 "rw_ios_per_sec": 0, 00:18:30.059 "rw_mbytes_per_sec": 0, 00:18:30.059 "r_mbytes_per_sec": 0, 00:18:30.059 "w_mbytes_per_sec": 0 00:18:30.059 }, 
00:18:30.059 "claimed": false, 00:18:30.059 "zoned": false, 00:18:30.059 "supported_io_types": { 00:18:30.059 "read": true, 00:18:30.059 "write": true, 00:18:30.059 "unmap": false, 00:18:30.059 "flush": false, 00:18:30.059 "reset": true, 00:18:30.059 "nvme_admin": false, 00:18:30.059 "nvme_io": false, 00:18:30.059 "nvme_io_md": false, 00:18:30.059 "write_zeroes": true, 00:18:30.059 "zcopy": false, 00:18:30.059 "get_zone_info": false, 00:18:30.059 "zone_management": false, 00:18:30.059 "zone_append": false, 00:18:30.059 "compare": false, 00:18:30.059 "compare_and_write": false, 00:18:30.059 "abort": false, 00:18:30.059 "seek_hole": false, 00:18:30.059 "seek_data": false, 00:18:30.059 "copy": false, 00:18:30.059 "nvme_iov_md": false 00:18:30.059 }, 00:18:30.059 "memory_domains": [ 00:18:30.059 { 00:18:30.059 "dma_device_id": "system", 00:18:30.059 "dma_device_type": 1 00:18:30.059 }, 00:18:30.059 { 00:18:30.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.059 "dma_device_type": 2 00:18:30.059 }, 00:18:30.059 { 00:18:30.059 "dma_device_id": "system", 00:18:30.059 "dma_device_type": 1 00:18:30.059 }, 00:18:30.059 { 00:18:30.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.059 "dma_device_type": 2 00:18:30.059 } 00:18:30.059 ], 00:18:30.059 "driver_specific": { 00:18:30.059 "raid": { 00:18:30.059 "uuid": "cd03d682-8154-41f7-85ee-fa036f1758c5", 00:18:30.059 "strip_size_kb": 0, 00:18:30.059 "state": "online", 00:18:30.059 "raid_level": "raid1", 00:18:30.059 "superblock": true, 00:18:30.059 "num_base_bdevs": 2, 00:18:30.059 "num_base_bdevs_discovered": 2, 00:18:30.059 "num_base_bdevs_operational": 2, 00:18:30.059 "base_bdevs_list": [ 00:18:30.059 { 00:18:30.059 "name": "pt1", 00:18:30.059 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:30.059 "is_configured": true, 00:18:30.059 "data_offset": 2048, 00:18:30.059 "data_size": 63488 00:18:30.059 }, 00:18:30.059 { 00:18:30.059 "name": "pt2", 00:18:30.059 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:30.059 "is_configured": true, 00:18:30.059 "data_offset": 2048, 00:18:30.059 "data_size": 63488 00:18:30.059 } 00:18:30.059 ] 00:18:30.059 } 00:18:30.059 } 00:18:30.059 }' 00:18:30.059 00:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:30.317 00:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:30.317 pt2' 00:18:30.317 00:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:30.317 00:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:30.317 00:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:30.575 00:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:30.575 "name": "pt1", 00:18:30.575 "aliases": [ 00:18:30.575 "00000000-0000-0000-0000-000000000001" 00:18:30.575 ], 00:18:30.575 "product_name": "passthru", 00:18:30.575 "block_size": 512, 00:18:30.575 "num_blocks": 65536, 00:18:30.575 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:30.575 "assigned_rate_limits": { 00:18:30.575 "rw_ios_per_sec": 0, 00:18:30.575 "rw_mbytes_per_sec": 0, 00:18:30.575 "r_mbytes_per_sec": 0, 00:18:30.575 "w_mbytes_per_sec": 0 00:18:30.575 }, 00:18:30.575 "claimed": true, 00:18:30.575 "claim_type": "exclusive_write", 00:18:30.575 "zoned": false, 00:18:30.575 
"supported_io_types": { 00:18:30.575 "read": true, 00:18:30.575 "write": true, 00:18:30.575 "unmap": true, 00:18:30.575 "flush": true, 00:18:30.575 "reset": true, 00:18:30.575 "nvme_admin": false, 00:18:30.575 "nvme_io": false, 00:18:30.575 "nvme_io_md": false, 00:18:30.575 "write_zeroes": true, 00:18:30.575 "zcopy": true, 00:18:30.575 "get_zone_info": false, 00:18:30.575 "zone_management": false, 00:18:30.576 "zone_append": false, 00:18:30.576 "compare": false, 00:18:30.576 "compare_and_write": false, 00:18:30.576 "abort": true, 00:18:30.576 "seek_hole": false, 00:18:30.576 "seek_data": false, 00:18:30.576 "copy": true, 00:18:30.576 "nvme_iov_md": false 00:18:30.576 }, 00:18:30.576 "memory_domains": [ 00:18:30.576 { 00:18:30.576 "dma_device_id": "system", 00:18:30.576 "dma_device_type": 1 00:18:30.576 }, 00:18:30.576 { 00:18:30.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.576 "dma_device_type": 2 00:18:30.576 } 00:18:30.576 ], 00:18:30.576 "driver_specific": { 00:18:30.576 "passthru": { 00:18:30.576 "name": "pt1", 00:18:30.576 "base_bdev_name": "malloc1" 00:18:30.576 } 00:18:30.576 } 00:18:30.576 }' 00:18:30.576 00:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:30.576 00:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:30.576 00:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:30.576 00:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:30.576 00:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:30.576 00:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:30.576 00:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:30.842 00:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:30.842 00:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:30.842 00:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:30.842 00:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:30.842 00:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:30.842 00:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:30.842 00:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:30.842 00:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:31.104 00:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:31.104 "name": "pt2", 00:18:31.104 "aliases": [ 00:18:31.104 "00000000-0000-0000-0000-000000000002" 00:18:31.104 ], 00:18:31.104 "product_name": "passthru", 00:18:31.104 "block_size": 512, 00:18:31.104 "num_blocks": 65536, 00:18:31.104 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:31.104 "assigned_rate_limits": { 00:18:31.104 "rw_ios_per_sec": 0, 00:18:31.104 "rw_mbytes_per_sec": 0, 00:18:31.104 "r_mbytes_per_sec": 0, 00:18:31.104 "w_mbytes_per_sec": 0 00:18:31.104 }, 00:18:31.104 "claimed": true, 00:18:31.104 "claim_type": "exclusive_write", 00:18:31.104 "zoned": false, 00:18:31.104 "supported_io_types": { 00:18:31.104 "read": true, 00:18:31.104 "write": true, 00:18:31.104 "unmap": true, 00:18:31.104 "flush": true, 00:18:31.104 
"reset": true, 00:18:31.104 "nvme_admin": false, 00:18:31.104 "nvme_io": false, 00:18:31.104 "nvme_io_md": false, 00:18:31.104 "write_zeroes": true, 00:18:31.104 "zcopy": true, 00:18:31.104 "get_zone_info": false, 00:18:31.104 "zone_management": false, 00:18:31.104 "zone_append": false, 00:18:31.104 "compare": false, 00:18:31.104 "compare_and_write": false, 00:18:31.104 "abort": true, 00:18:31.104 "seek_hole": false, 00:18:31.104 "seek_data": false, 00:18:31.104 "copy": true, 00:18:31.104 "nvme_iov_md": false 00:18:31.104 }, 00:18:31.104 "memory_domains": [ 00:18:31.104 { 00:18:31.104 "dma_device_id": "system", 00:18:31.104 "dma_device_type": 1 00:18:31.104 }, 00:18:31.104 { 00:18:31.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.104 "dma_device_type": 2 00:18:31.104 } 00:18:31.104 ], 00:18:31.104 "driver_specific": { 00:18:31.104 "passthru": { 00:18:31.104 "name": "pt2", 00:18:31.104 "base_bdev_name": "malloc2" 00:18:31.104 } 00:18:31.104 } 00:18:31.104 }' 00:18:31.104 00:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:31.104 00:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:31.104 00:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:31.104 00:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:31.362 00:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:31.362 00:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:31.362 00:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:31.362 00:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:31.362 00:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:31.362 00:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:31.362 00:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:31.362 00:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:31.362 00:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:18:31.362 00:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:31.619 [2024-07-25 00:44:54.267729] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:31.876 00:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=cd03d682-8154-41f7-85ee-fa036f1758c5 00:18:31.876 00:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z cd03d682-8154-41f7-85ee-fa036f1758c5 ']' 00:18:31.876 00:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:31.876 [2024-07-25 00:44:54.511510] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:31.876 [2024-07-25 00:44:54.511541] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:31.876 [2024-07-25 00:44:54.511612] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:31.876 [2024-07-25 00:44:54.511673] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:18:31.876 [2024-07-25 00:44:54.511682] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:18:32.133 00:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.133 00:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:18:32.133 00:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:18:32.133 00:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:18:32.133 00:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:32.133 00:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:32.391 00:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:32.391 00:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:32.648 00:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:32.648 00:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:32.905 00:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:18:32.905 00:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:32.905 00:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:18:32.905 00:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:32.905 00:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:32.905 00:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:32.905 00:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:32.905 00:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:32.905 00:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:32.905 00:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:32.905 00:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:32.905 00:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:32.905 00:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:33.163 [2024-07-25 00:44:55.615704] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:33.163 [2024-07-25 00:44:55.617653] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:33.163 [2024-07-25 00:44:55.617737] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:33.163 [2024-07-25 00:44:55.617815] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:33.163 [2024-07-25 00:44:55.617843] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:33.163 [2024-07-25 00:44:55.617852] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:18:33.163 request: 00:18:33.163 { 00:18:33.163 "name": "raid_bdev1", 00:18:33.163 "raid_level": "raid1", 00:18:33.163 "base_bdevs": [ 00:18:33.163 "malloc1", 00:18:33.163 "malloc2" 00:18:33.163 ], 00:18:33.163 "superblock": false, 00:18:33.163 "method": "bdev_raid_create", 00:18:33.163 "req_id": 1 00:18:33.163 } 00:18:33.163 Got JSON-RPC error response 00:18:33.163 response: 00:18:33.163 { 00:18:33.163 "code": -17, 00:18:33.163 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:33.163 } 00:18:33.163 00:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:18:33.163 00:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:33.163 00:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:33.163 00:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:33.163 00:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.163 00:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:18:33.422 00:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:18:33.422 00:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:18:33.422 00:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:33.422 [2024-07-25 00:44:55.995782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:33.422 [2024-07-25 00:44:55.995893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.422 [2024-07-25 00:44:55.995946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:33.422 [2024-07-25 00:44:55.995972] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.422 [2024-07-25 00:44:55.998212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.422 [2024-07-25 00:44:55.998287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:33.422 [2024-07-25 00:44:55.998404] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:33.422 [2024-07-25 00:44:55.998466] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:33.422 pt1 00:18:33.422 00:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid1 0 2 00:18:33.422 00:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:33.422 00:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:33.422 00:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:33.422 00:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:33.422 00:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:33.422 00:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:33.422 00:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:33.422 00:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:33.422 00:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:33.422 00:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.422 00:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.679 00:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:33.679 "name": "raid_bdev1", 00:18:33.679 "uuid": "cd03d682-8154-41f7-85ee-fa036f1758c5", 00:18:33.679 "strip_size_kb": 0, 00:18:33.679 "state": "configuring", 00:18:33.679 "raid_level": "raid1", 00:18:33.679 "superblock": true, 00:18:33.679 "num_base_bdevs": 2, 00:18:33.679 "num_base_bdevs_discovered": 1, 00:18:33.679 "num_base_bdevs_operational": 2, 00:18:33.679 "base_bdevs_list": [ 00:18:33.679 { 00:18:33.679 "name": "pt1", 00:18:33.679 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:33.679 "is_configured": true, 00:18:33.679 "data_offset": 2048, 00:18:33.679 "data_size": 63488 00:18:33.679 }, 00:18:33.679 { 00:18:33.679 "name": null, 00:18:33.679 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:33.679 "is_configured": false, 00:18:33.679 "data_offset": 2048, 00:18:33.679 "data_size": 63488 00:18:33.679 } 00:18:33.679 ] 00:18:33.679 }' 00:18:33.679 00:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:33.679 00:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.245 00:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:18:34.245 00:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:18:34.245 00:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:34.245 00:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:34.502 [2024-07-25 00:44:56.987953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:34.502 [2024-07-25 00:44:56.988065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.502 [2024-07-25 00:44:56.988099] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:34.502 [2024-07-25 00:44:56.988124] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.502 [2024-07-25 00:44:56.988590] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.502 [2024-07-25 00:44:56.988643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:34.502 [2024-07-25 00:44:56.988750] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:34.502 [2024-07-25 00:44:56.988771] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:34.502 [2024-07-25 00:44:56.988875] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:18:34.502 [2024-07-25 00:44:56.988891] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:34.502 [2024-07-25 00:44:56.989001] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:18:34.502 [2024-07-25 00:44:56.989284] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:18:34.502 [2024-07-25 00:44:56.989301] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:18:34.502 [2024-07-25 00:44:56.989439] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.502 pt2 00:18:34.502 00:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:34.502 00:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:34.502 00:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:34.502 00:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:34.502 00:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:34.502 00:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:34.502 00:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:34.502 00:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:34.502 00:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:34.502 00:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:34.502 00:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:34.502 00:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:34.502 00:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.502 00:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.759 00:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:34.759 "name": "raid_bdev1", 00:18:34.759 "uuid": "cd03d682-8154-41f7-85ee-fa036f1758c5", 00:18:34.759 "strip_size_kb": 0, 00:18:34.759 "state": "online", 00:18:34.759 "raid_level": "raid1", 00:18:34.759 "superblock": true, 00:18:34.759 "num_base_bdevs": 2, 00:18:34.759 "num_base_bdevs_discovered": 2, 00:18:34.760 "num_base_bdevs_operational": 2, 00:18:34.760 "base_bdevs_list": [ 00:18:34.760 { 00:18:34.760 "name": "pt1", 00:18:34.760 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:34.760 "is_configured": true, 00:18:34.760 "data_offset": 2048, 00:18:34.760 "data_size": 63488 00:18:34.760 }, 00:18:34.760 { 
00:18:34.760 "name": "pt2", 00:18:34.760 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:34.760 "is_configured": true, 00:18:34.760 "data_offset": 2048, 00:18:34.760 "data_size": 63488 00:18:34.760 } 00:18:34.760 ] 00:18:34.760 }' 00:18:34.760 00:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:34.760 00:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.325 00:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:18:35.325 00:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:35.325 00:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:35.325 00:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:35.325 00:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:35.325 00:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:35.325 00:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:35.325 00:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:35.325 [2024-07-25 00:44:57.976354] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:35.584 00:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:35.584 "name": "raid_bdev1", 00:18:35.584 "aliases": [ 00:18:35.584 "cd03d682-8154-41f7-85ee-fa036f1758c5" 00:18:35.584 ], 00:18:35.584 "product_name": "Raid Volume", 00:18:35.584 "block_size": 512, 00:18:35.584 "num_blocks": 63488, 00:18:35.584 "uuid": "cd03d682-8154-41f7-85ee-fa036f1758c5", 00:18:35.584 "assigned_rate_limits": { 00:18:35.584 "rw_ios_per_sec": 0, 00:18:35.584 "rw_mbytes_per_sec": 0, 00:18:35.584 "r_mbytes_per_sec": 0, 00:18:35.584 "w_mbytes_per_sec": 0 00:18:35.584 }, 00:18:35.584 "claimed": false, 00:18:35.584 "zoned": false, 00:18:35.584 "supported_io_types": { 00:18:35.584 "read": true, 00:18:35.584 "write": true, 00:18:35.584 "unmap": false, 00:18:35.584 "flush": false, 00:18:35.584 "reset": true, 00:18:35.584 "nvme_admin": false, 00:18:35.584 "nvme_io": false, 00:18:35.584 "nvme_io_md": false, 00:18:35.584 "write_zeroes": true, 00:18:35.584 "zcopy": false, 00:18:35.584 "get_zone_info": false, 00:18:35.584 "zone_management": false, 00:18:35.584 "zone_append": false, 00:18:35.584 "compare": false, 00:18:35.584 "compare_and_write": false, 00:18:35.584 "abort": false, 00:18:35.584 "seek_hole": false, 00:18:35.584 "seek_data": false, 00:18:35.584 "copy": false, 00:18:35.584 "nvme_iov_md": false 00:18:35.584 }, 00:18:35.584 "memory_domains": [ 00:18:35.584 { 00:18:35.584 "dma_device_id": "system", 00:18:35.584 "dma_device_type": 1 00:18:35.584 }, 00:18:35.584 { 00:18:35.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.584 "dma_device_type": 2 00:18:35.584 }, 00:18:35.584 { 00:18:35.584 "dma_device_id": "system", 00:18:35.584 "dma_device_type": 1 00:18:35.584 }, 00:18:35.584 { 00:18:35.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.584 "dma_device_type": 2 00:18:35.584 } 00:18:35.584 ], 00:18:35.584 "driver_specific": { 00:18:35.584 "raid": { 00:18:35.584 "uuid": "cd03d682-8154-41f7-85ee-fa036f1758c5", 00:18:35.584 "strip_size_kb": 0, 00:18:35.584 "state": "online", 00:18:35.584 "raid_level": "raid1", 
00:18:35.584 "superblock": true, 00:18:35.584 "num_base_bdevs": 2, 00:18:35.584 "num_base_bdevs_discovered": 2, 00:18:35.584 "num_base_bdevs_operational": 2, 00:18:35.584 "base_bdevs_list": [ 00:18:35.584 { 00:18:35.584 "name": "pt1", 00:18:35.584 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:35.584 "is_configured": true, 00:18:35.584 "data_offset": 2048, 00:18:35.584 "data_size": 63488 00:18:35.584 }, 00:18:35.584 { 00:18:35.584 "name": "pt2", 00:18:35.584 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:35.584 "is_configured": true, 00:18:35.584 "data_offset": 2048, 00:18:35.584 "data_size": 63488 00:18:35.584 } 00:18:35.584 ] 00:18:35.584 } 00:18:35.584 } 00:18:35.584 }' 00:18:35.584 00:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:35.584 00:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:35.584 pt2' 00:18:35.584 00:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:35.584 00:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:35.584 00:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:35.842 00:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:35.842 "name": "pt1", 00:18:35.842 "aliases": [ 00:18:35.842 "00000000-0000-0000-0000-000000000001" 00:18:35.842 ], 00:18:35.842 "product_name": "passthru", 00:18:35.842 "block_size": 512, 00:18:35.842 "num_blocks": 65536, 00:18:35.842 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:35.842 "assigned_rate_limits": { 00:18:35.842 "rw_ios_per_sec": 0, 00:18:35.842 "rw_mbytes_per_sec": 0, 00:18:35.842 "r_mbytes_per_sec": 0, 00:18:35.842 "w_mbytes_per_sec": 0 00:18:35.842 }, 00:18:35.842 "claimed": true, 00:18:35.842 "claim_type": "exclusive_write", 00:18:35.842 "zoned": false, 00:18:35.842 "supported_io_types": { 00:18:35.842 "read": true, 00:18:35.842 "write": true, 00:18:35.842 "unmap": true, 00:18:35.842 "flush": true, 00:18:35.842 "reset": true, 00:18:35.842 "nvme_admin": false, 00:18:35.842 "nvme_io": false, 00:18:35.842 "nvme_io_md": false, 00:18:35.842 "write_zeroes": true, 00:18:35.842 "zcopy": true, 00:18:35.842 "get_zone_info": false, 00:18:35.842 "zone_management": false, 00:18:35.842 "zone_append": false, 00:18:35.842 "compare": false, 00:18:35.842 "compare_and_write": false, 00:18:35.842 "abort": true, 00:18:35.842 "seek_hole": false, 00:18:35.842 "seek_data": false, 00:18:35.842 "copy": true, 00:18:35.842 "nvme_iov_md": false 00:18:35.842 }, 00:18:35.842 "memory_domains": [ 00:18:35.842 { 00:18:35.842 "dma_device_id": "system", 00:18:35.842 "dma_device_type": 1 00:18:35.842 }, 00:18:35.842 { 00:18:35.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.842 "dma_device_type": 2 00:18:35.842 } 00:18:35.842 ], 00:18:35.842 "driver_specific": { 00:18:35.842 "passthru": { 00:18:35.842 "name": "pt1", 00:18:35.842 "base_bdev_name": "malloc1" 00:18:35.842 } 00:18:35.842 } 00:18:35.842 }' 00:18:35.842 00:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:35.842 00:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:35.843 00:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:35.843 00:44:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:35.843 00:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:35.843 00:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:35.843 00:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:36.100 00:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:36.100 00:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:36.101 00:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:36.101 00:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:36.101 00:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:36.101 00:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:36.101 00:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:36.101 00:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:36.359 00:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:36.359 "name": "pt2", 00:18:36.359 "aliases": [ 00:18:36.359 "00000000-0000-0000-0000-000000000002" 00:18:36.359 ], 00:18:36.359 "product_name": "passthru", 00:18:36.359 "block_size": 512, 00:18:36.359 "num_blocks": 65536, 00:18:36.359 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:36.359 "assigned_rate_limits": { 00:18:36.359 "rw_ios_per_sec": 0, 00:18:36.359 "rw_mbytes_per_sec": 0, 00:18:36.359 "r_mbytes_per_sec": 0, 00:18:36.359 "w_mbytes_per_sec": 0 00:18:36.359 }, 00:18:36.359 "claimed": true, 00:18:36.359 "claim_type": "exclusive_write", 00:18:36.359 "zoned": false, 00:18:36.359 "supported_io_types": { 00:18:36.359 "read": true, 00:18:36.359 "write": true, 00:18:36.359 "unmap": true, 00:18:36.359 "flush": true, 00:18:36.359 "reset": true, 00:18:36.359 "nvme_admin": false, 00:18:36.359 "nvme_io": false, 00:18:36.359 "nvme_io_md": false, 00:18:36.359 "write_zeroes": true, 00:18:36.359 "zcopy": true, 00:18:36.359 "get_zone_info": false, 00:18:36.359 "zone_management": false, 00:18:36.359 "zone_append": false, 00:18:36.359 "compare": false, 00:18:36.359 "compare_and_write": false, 00:18:36.359 "abort": true, 00:18:36.359 "seek_hole": false, 00:18:36.359 "seek_data": false, 00:18:36.359 "copy": true, 00:18:36.359 "nvme_iov_md": false 00:18:36.359 }, 00:18:36.359 "memory_domains": [ 00:18:36.359 { 00:18:36.359 "dma_device_id": "system", 00:18:36.359 "dma_device_type": 1 00:18:36.359 }, 00:18:36.359 { 00:18:36.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.359 "dma_device_type": 2 00:18:36.359 } 00:18:36.359 ], 00:18:36.359 "driver_specific": { 00:18:36.359 "passthru": { 00:18:36.359 "name": "pt2", 00:18:36.359 "base_bdev_name": "malloc2" 00:18:36.359 } 00:18:36.359 } 00:18:36.359 }' 00:18:36.359 00:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:36.359 00:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:36.617 00:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:36.617 00:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:36.617 00:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:36.617 
00:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:36.617 00:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:36.617 00:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:36.617 00:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:36.617 00:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:36.617 00:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:36.875 00:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:36.875 00:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:18:36.875 00:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:37.133 [2024-07-25 00:44:59.536664] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:37.133 00:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' cd03d682-8154-41f7-85ee-fa036f1758c5 '!=' cd03d682-8154-41f7-85ee-fa036f1758c5 ']' 00:18:37.133 00:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:18:37.133 00:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:37.133 00:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:37.133 00:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:37.420 [2024-07-25 00:44:59.790809] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:37.420 00:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:37.420 00:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:37.420 00:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:37.420 00:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:37.420 00:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:37.420 00:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:37.421 00:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:37.421 00:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:37.421 00:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:37.421 00:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:37.421 00:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.421 00:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.679 00:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:37.679 "name": "raid_bdev1", 00:18:37.679 "uuid": "cd03d682-8154-41f7-85ee-fa036f1758c5", 00:18:37.679 "strip_size_kb": 0, 00:18:37.679 "state": "online", 00:18:37.679 "raid_level": "raid1", 00:18:37.679 
"superblock": true, 00:18:37.679 "num_base_bdevs": 2, 00:18:37.679 "num_base_bdevs_discovered": 1, 00:18:37.679 "num_base_bdevs_operational": 1, 00:18:37.679 "base_bdevs_list": [ 00:18:37.679 { 00:18:37.679 "name": null, 00:18:37.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.679 "is_configured": false, 00:18:37.679 "data_offset": 2048, 00:18:37.679 "data_size": 63488 00:18:37.679 }, 00:18:37.679 { 00:18:37.679 "name": "pt2", 00:18:37.679 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:37.679 "is_configured": true, 00:18:37.679 "data_offset": 2048, 00:18:37.679 "data_size": 63488 00:18:37.679 } 00:18:37.679 ] 00:18:37.679 }' 00:18:37.679 00:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:37.679 00:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.245 00:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:38.245 [2024-07-25 00:45:00.890948] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:38.245 [2024-07-25 00:45:00.890989] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:38.245 [2024-07-25 00:45:00.891055] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:38.245 [2024-07-25 00:45:00.891104] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:38.245 [2024-07-25 00:45:00.891113] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:18:38.503 00:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.503 00:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:18:38.503 00:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:18:38.503 00:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:18:38.503 00:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:18:38.503 00:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:38.503 00:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:38.761 00:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:18:38.761 00:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:38.761 00:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:18:38.761 00:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:18:38.761 00:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=1 00:18:38.761 00:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:39.019 [2024-07-25 00:45:01.439022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:39.019 [2024-07-25 00:45:01.439132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.019 [2024-07-25 00:45:01.439160] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:39.019 [2024-07-25 00:45:01.439185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.019 [2024-07-25 00:45:01.441516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.019 [2024-07-25 00:45:01.441592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:39.019 [2024-07-25 00:45:01.441725] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:39.019 [2024-07-25 00:45:01.441780] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:39.019 [2024-07-25 00:45:01.441873] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:18:39.019 [2024-07-25 00:45:01.441882] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:39.019 [2024-07-25 00:45:01.441969] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:39.019 [2024-07-25 00:45:01.442240] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:18:39.019 [2024-07-25 00:45:01.442270] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:18:39.019 [2024-07-25 00:45:01.442399] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.019 pt2 00:18:39.019 00:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:39.019 00:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:39.019 00:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:39.019 00:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:39.019 00:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:39.019 00:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:39.019 00:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:39.019 00:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:39.019 00:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:39.019 00:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:39.019 00:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.019 00:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:39.277 00:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:39.277 "name": "raid_bdev1", 00:18:39.277 "uuid": "cd03d682-8154-41f7-85ee-fa036f1758c5", 00:18:39.277 "strip_size_kb": 0, 00:18:39.277 "state": "online", 00:18:39.277 "raid_level": "raid1", 00:18:39.277 "superblock": true, 00:18:39.277 "num_base_bdevs": 2, 00:18:39.277 "num_base_bdevs_discovered": 1, 00:18:39.277 "num_base_bdevs_operational": 1, 00:18:39.277 "base_bdevs_list": [ 00:18:39.277 { 00:18:39.277 "name": null, 00:18:39.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.277 "is_configured": false, 00:18:39.277 "data_offset": 
2048, 00:18:39.277 "data_size": 63488 00:18:39.277 }, 00:18:39.277 { 00:18:39.277 "name": "pt2", 00:18:39.277 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:39.277 "is_configured": true, 00:18:39.277 "data_offset": 2048, 00:18:39.277 "data_size": 63488 00:18:39.277 } 00:18:39.277 ] 00:18:39.277 }' 00:18:39.277 00:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:39.277 00:45:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.843 00:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:39.843 [2024-07-25 00:45:02.483194] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:39.843 [2024-07-25 00:45:02.483232] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:39.843 [2024-07-25 00:45:02.483295] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:39.843 [2024-07-25 00:45:02.483340] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:39.843 [2024-07-25 00:45:02.483348] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:18:40.101 00:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:18:40.101 00:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.101 00:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:18:40.101 00:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:18:40.101 00:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:18:40.101 00:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:40.359 [2024-07-25 00:45:02.975276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:40.359 [2024-07-25 00:45:02.975383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.359 [2024-07-25 00:45:02.975424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:40.359 [2024-07-25 00:45:02.975446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.359 [2024-07-25 00:45:02.977789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.359 [2024-07-25 00:45:02.977854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:40.359 [2024-07-25 00:45:02.977970] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:40.359 [2024-07-25 00:45:02.978017] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:40.359 [2024-07-25 00:45:02.978151] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:40.359 [2024-07-25 00:45:02.978161] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:40.359 [2024-07-25 00:45:02.978174] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, 
state configuring 00:18:40.359 [2024-07-25 00:45:02.978221] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:40.359 [2024-07-25 00:45:02.978312] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:18:40.359 [2024-07-25 00:45:02.978321] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:40.359 [2024-07-25 00:45:02.978417] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:40.359 [2024-07-25 00:45:02.978691] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:18:40.359 [2024-07-25 00:45:02.978701] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:18:40.359 [2024-07-25 00:45:02.978821] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.359 pt1 00:18:40.359 00:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:18:40.359 00:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:40.359 00:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:40.359 00:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:40.359 00:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:40.359 00:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:40.359 00:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:40.359 00:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:40.359 00:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:40.359 00:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:40.359 00:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:40.359 00:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.359 00:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.617 00:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:40.617 "name": "raid_bdev1", 00:18:40.617 "uuid": "cd03d682-8154-41f7-85ee-fa036f1758c5", 00:18:40.617 "strip_size_kb": 0, 00:18:40.617 "state": "online", 00:18:40.617 "raid_level": "raid1", 00:18:40.617 "superblock": true, 00:18:40.617 "num_base_bdevs": 2, 00:18:40.617 "num_base_bdevs_discovered": 1, 00:18:40.617 "num_base_bdevs_operational": 1, 00:18:40.617 "base_bdevs_list": [ 00:18:40.617 { 00:18:40.617 "name": null, 00:18:40.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.617 "is_configured": false, 00:18:40.617 "data_offset": 2048, 00:18:40.617 "data_size": 63488 00:18:40.617 }, 00:18:40.617 { 00:18:40.617 "name": "pt2", 00:18:40.617 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:40.617 "is_configured": true, 00:18:40.617 "data_offset": 2048, 00:18:40.617 "data_size": 63488 00:18:40.617 } 00:18:40.617 ] 00:18:40.617 }' 00:18:40.617 00:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:40.617 00:45:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:41.183 00:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:18:41.183 00:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:41.440 00:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:18:41.440 00:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:18:41.440 00:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:41.697 [2024-07-25 00:45:04.139680] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:41.697 00:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' cd03d682-8154-41f7-85ee-fa036f1758c5 '!=' cd03d682-8154-41f7-85ee-fa036f1758c5 ']' 00:18:41.697 00:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 125369 00:18:41.697 00:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 125369 ']' 00:18:41.697 00:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 125369 00:18:41.697 00:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:18:41.697 00:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:41.698 00:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125369 00:18:41.698 00:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:41.698 killing process with pid 125369 00:18:41.698 00:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:41.698 00:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125369' 00:18:41.698 00:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 125369 00:18:41.698 [2024-07-25 00:45:04.187397] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:41.698 [2024-07-25 00:45:04.187471] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:41.698 00:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 125369 00:18:41.698 [2024-07-25 00:45:04.187517] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:41.698 [2024-07-25 00:45:04.187526] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:18:41.955 [2024-07-25 00:45:04.395627] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:43.332 00:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:18:43.332 ************************************ 00:18:43.332 END TEST raid_superblock_test 00:18:43.332 ************************************ 00:18:43.332 00:18:43.332 real 0m16.476s 00:18:43.332 user 0m28.984s 00:18:43.332 sys 0m2.499s 00:18:43.332 00:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:43.332 00:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.332 00:45:05 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test 
raid_io_error_test raid1 2 read 00:18:43.332 00:45:05 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:43.332 00:45:05 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:43.332 00:45:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:43.332 ************************************ 00:18:43.332 START TEST raid_read_error_test 00:18:43.332 ************************************ 00:18:43.332 00:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 read 00:18:43.332 00:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:18:43.332 00:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:18:43.332 00:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:18:43.332 00:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:43.332 00:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:43.332 00:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:18:43.332 00:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:43.332 00:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:43.332 00:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:18:43.332 00:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:43.332 00:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:43.332 00:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:43.332 00:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:43.332 00:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:43.332 00:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:43.332 00:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:43.332 00:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:43.333 00:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:43.333 00:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:18:43.333 00:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:18:43.333 00:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:43.333 00:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.bA46ogwDiX 00:18:43.333 00:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=125901 00:18:43.333 00:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 125901 /var/tmp/spdk-raid.sock 00:18:43.333 00:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 125901 ']' 00:18:43.333 00:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:43.333 00:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:43.333 
00:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:43.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:43.333 00:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:43.333 00:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:43.333 00:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.333 [2024-07-25 00:45:05.907842] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:18:43.333 [2024-07-25 00:45:05.908649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125901 ] 00:18:43.591 [2024-07-25 00:45:06.093619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.849 [2024-07-25 00:45:06.360489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.107 [2024-07-25 00:45:06.547281] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:44.366 00:45:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:44.366 00:45:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:18:44.366 00:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:44.366 00:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:44.366 BaseBdev1_malloc 00:18:44.366 00:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:44.625 true 00:18:44.625 00:45:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:44.884 [2024-07-25 00:45:07.327102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:44.884 [2024-07-25 00:45:07.327225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.884 [2024-07-25 00:45:07.327270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:18:44.884 [2024-07-25 00:45:07.327298] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.884 [2024-07-25 00:45:07.329661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.884 [2024-07-25 00:45:07.329713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:44.884 BaseBdev1 00:18:44.884 00:45:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:44.884 00:45:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:45.142 BaseBdev2_malloc 00:18:45.142 00:45:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:45.401 true 00:18:45.401 00:45:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:45.401 [2024-07-25 00:45:08.005417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:45.401 [2024-07-25 00:45:08.005539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.401 [2024-07-25 00:45:08.005581] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:45.401 [2024-07-25 00:45:08.005601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.401 [2024-07-25 00:45:08.007873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.401 [2024-07-25 00:45:08.007922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:45.401 BaseBdev2 00:18:45.401 00:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:18:45.660 [2024-07-25 00:45:08.197467] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:45.660 [2024-07-25 00:45:08.199431] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:45.660 [2024-07-25 00:45:08.199690] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:18:45.660 [2024-07-25 00:45:08.199703] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:45.660 [2024-07-25 00:45:08.199831] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:18:45.660 [2024-07-25 00:45:08.200170] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:18:45.660 [2024-07-25 00:45:08.200180] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:18:45.660 [2024-07-25 00:45:08.200320] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:45.660 00:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:45.660 00:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:45.660 00:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:45.660 00:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:45.660 00:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:45.660 00:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:45.660 00:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:45.660 00:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:45.660 00:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:45.660 00:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:45.660 00:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:18:45.660 00:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.919 00:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:45.919 "name": "raid_bdev1", 00:18:45.919 "uuid": "33a652da-0e9a-436b-8c63-eb1a5398c465", 00:18:45.919 "strip_size_kb": 0, 00:18:45.919 "state": "online", 00:18:45.919 "raid_level": "raid1", 00:18:45.919 "superblock": true, 00:18:45.919 "num_base_bdevs": 2, 00:18:45.919 "num_base_bdevs_discovered": 2, 00:18:45.919 "num_base_bdevs_operational": 2, 00:18:45.919 "base_bdevs_list": [ 00:18:45.919 { 00:18:45.919 "name": "BaseBdev1", 00:18:45.919 "uuid": "db37c29f-e0a1-58ae-9521-54c967c6bc9a", 00:18:45.919 "is_configured": true, 00:18:45.919 "data_offset": 2048, 00:18:45.919 "data_size": 63488 00:18:45.919 }, 00:18:45.919 { 00:18:45.919 "name": "BaseBdev2", 00:18:45.919 "uuid": "48d6d75c-ff30-51b1-bf06-5bb90e0d93f5", 00:18:45.919 "is_configured": true, 00:18:45.919 "data_offset": 2048, 00:18:45.919 "data_size": 63488 00:18:45.919 } 00:18:45.919 ] 00:18:45.919 }' 00:18:45.919 00:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:45.919 00:45:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.487 00:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:18:46.487 00:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:46.487 [2024-07-25 00:45:09.094967] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:18:47.424 00:45:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:47.682 00:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:47.682 00:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:18:47.682 00:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:18:47.682 00:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:18:47.682 00:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:47.682 00:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:47.682 00:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:47.682 00:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:47.682 00:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:47.682 00:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:47.682 00:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:47.682 00:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:47.682 00:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:47.682 00:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:47.682 
00:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.682 00:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.940 00:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:47.940 "name": "raid_bdev1", 00:18:47.940 "uuid": "33a652da-0e9a-436b-8c63-eb1a5398c465", 00:18:47.940 "strip_size_kb": 0, 00:18:47.940 "state": "online", 00:18:47.940 "raid_level": "raid1", 00:18:47.940 "superblock": true, 00:18:47.940 "num_base_bdevs": 2, 00:18:47.940 "num_base_bdevs_discovered": 2, 00:18:47.940 "num_base_bdevs_operational": 2, 00:18:47.940 "base_bdevs_list": [ 00:18:47.940 { 00:18:47.940 "name": "BaseBdev1", 00:18:47.940 "uuid": "db37c29f-e0a1-58ae-9521-54c967c6bc9a", 00:18:47.940 "is_configured": true, 00:18:47.940 "data_offset": 2048, 00:18:47.940 "data_size": 63488 00:18:47.940 }, 00:18:47.940 { 00:18:47.940 "name": "BaseBdev2", 00:18:47.940 "uuid": "48d6d75c-ff30-51b1-bf06-5bb90e0d93f5", 00:18:47.940 "is_configured": true, 00:18:47.940 "data_offset": 2048, 00:18:47.940 "data_size": 63488 00:18:47.940 } 00:18:47.940 ] 00:18:47.940 }' 00:18:47.940 00:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:47.940 00:45:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.508 00:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:48.767 [2024-07-25 00:45:11.176799] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:48.767 [2024-07-25 00:45:11.176847] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:48.767 [2024-07-25 00:45:11.179301] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:48.767 [2024-07-25 00:45:11.179349] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:48.767 [2024-07-25 00:45:11.179415] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:48.767 [2024-07-25 00:45:11.179425] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:18:48.767 0 00:18:48.767 00:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 125901 00:18:48.767 00:45:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 125901 ']' 00:18:48.767 00:45:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 125901 00:18:48.767 00:45:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:18:48.767 00:45:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:48.767 00:45:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125901 00:18:48.767 00:45:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:48.767 00:45:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:48.767 killing process with pid 125901 00:18:48.767 00:45:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125901' 00:18:48.767 00:45:11 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@967 -- # kill 125901 00:18:48.767 [2024-07-25 00:45:11.216637] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:48.767 00:45:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 125901 00:18:48.767 [2024-07-25 00:45:11.339071] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:50.146 00:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.bA46ogwDiX 00:18:50.146 00:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:50.146 00:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:50.146 00:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:18:50.146 00:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:18:50.146 00:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:50.146 00:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:50.146 00:45:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:18:50.146 00:18:50.146 real 0m6.796s 00:18:50.146 user 0m9.704s 00:18:50.146 sys 0m0.972s 00:18:50.146 00:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:50.146 00:45:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.146 ************************************ 00:18:50.146 END TEST raid_read_error_test 00:18:50.146 ************************************ 00:18:50.146 00:45:12 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:18:50.146 00:45:12 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:50.146 00:45:12 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:50.146 00:45:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:50.146 ************************************ 00:18:50.146 START TEST raid_write_error_test 00:18:50.146 ************************************ 00:18:50.146 00:45:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 write 00:18:50.146 00:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:18:50.146 00:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:18:50.146 00:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:18:50.146 00:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:50.146 00:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:50.146 00:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:18:50.146 00:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:50.146 00:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:50.146 00:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:18:50.146 00:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:50.146 00:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:50.146 00:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:50.146 00:45:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:50.146 00:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:50.146 00:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:50.146 00:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:50.146 00:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:50.146 00:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:50.146 00:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:18:50.146 00:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:18:50.146 00:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:50.146 00:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.w6aRSbxWw9 00:18:50.146 00:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=126087 00:18:50.146 00:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:50.146 00:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 126087 /var/tmp/spdk-raid.sock 00:18:50.146 00:45:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 126087 ']' 00:18:50.146 00:45:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:50.146 00:45:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:50.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:50.146 00:45:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:50.146 00:45:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:50.146 00:45:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.146 [2024-07-25 00:45:12.771377] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:18:50.146 [2024-07-25 00:45:12.771602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126087 ] 00:18:50.404 [2024-07-25 00:45:12.960896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.663 [2024-07-25 00:45:13.161837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.921 [2024-07-25 00:45:13.349321] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:51.178 00:45:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:51.178 00:45:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:18:51.178 00:45:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:51.178 00:45:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:51.436 BaseBdev1_malloc 00:18:51.436 00:45:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:51.695 true 00:18:51.695 00:45:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:51.957 [2024-07-25 00:45:14.347591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:51.957 [2024-07-25 00:45:14.347711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.957 [2024-07-25 00:45:14.347749] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:18:51.957 [2024-07-25 00:45:14.347768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.957 [2024-07-25 00:45:14.350090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.957 [2024-07-25 00:45:14.350159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:51.957 BaseBdev1 00:18:51.957 00:45:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:51.957 00:45:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:51.957 BaseBdev2_malloc 00:18:51.957 00:45:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:52.233 true 00:18:52.233 00:45:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:52.504 [2024-07-25 00:45:14.949777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:52.504 [2024-07-25 00:45:14.949886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:52.504 [2024-07-25 00:45:14.949941] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:52.504 [2024-07-25 
00:45:14.949961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:52.504 [2024-07-25 00:45:14.952217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:52.504 [2024-07-25 00:45:14.952268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:52.504 BaseBdev2 00:18:52.504 00:45:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:18:52.504 [2024-07-25 00:45:15.145880] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:52.504 [2024-07-25 00:45:15.148111] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:52.504 [2024-07-25 00:45:15.148359] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:18:52.504 [2024-07-25 00:45:15.148373] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:52.504 [2024-07-25 00:45:15.148497] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:18:52.504 [2024-07-25 00:45:15.148852] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:18:52.504 [2024-07-25 00:45:15.148874] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:18:52.504 [2024-07-25 00:45:15.149034] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:52.763 00:45:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:52.763 00:45:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:52.763 00:45:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:52.763 00:45:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:52.763 00:45:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:52.763 00:45:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:52.763 00:45:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:52.763 00:45:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:52.763 00:45:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:52.763 00:45:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:52.763 00:45:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.763 00:45:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.022 00:45:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:53.022 "name": "raid_bdev1", 00:18:53.022 "uuid": "8bc77a3f-5b6f-481d-b4b5-cc9f3920bd35", 00:18:53.022 "strip_size_kb": 0, 00:18:53.022 "state": "online", 00:18:53.022 "raid_level": "raid1", 00:18:53.022 "superblock": true, 00:18:53.022 "num_base_bdevs": 2, 00:18:53.022 "num_base_bdevs_discovered": 2, 00:18:53.022 "num_base_bdevs_operational": 2, 00:18:53.022 "base_bdevs_list": [ 00:18:53.022 { 00:18:53.022 "name": 
"BaseBdev1", 00:18:53.022 "uuid": "ee88e98a-cdbd-5399-851d-c6d221fff6d9", 00:18:53.022 "is_configured": true, 00:18:53.022 "data_offset": 2048, 00:18:53.022 "data_size": 63488 00:18:53.022 }, 00:18:53.022 { 00:18:53.022 "name": "BaseBdev2", 00:18:53.022 "uuid": "d784b2be-b4c7-507a-99ac-7123d52d2237", 00:18:53.022 "is_configured": true, 00:18:53.022 "data_offset": 2048, 00:18:53.022 "data_size": 63488 00:18:53.022 } 00:18:53.022 ] 00:18:53.022 }' 00:18:53.022 00:45:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:53.022 00:45:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.590 00:45:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:18:53.590 00:45:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:53.590 [2024-07-25 00:45:16.063282] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:18:54.528 00:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:54.788 [2024-07-25 00:45:17.193854] bdev_raid.c:2247:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:18:54.788 [2024-07-25 00:45:17.193968] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:54.788 [2024-07-25 00:45:17.194195] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ba0 00:18:54.788 00:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:54.788 00:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:18:54.788 00:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:18:54.788 00:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=1 00:18:54.788 00:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:54.788 00:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:54.788 00:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:54.788 00:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:54.788 00:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:54.788 00:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:54.788 00:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:54.788 00:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:54.788 00:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:54.788 00:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:54.788 00:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.788 00:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.048 
00:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:55.048 "name": "raid_bdev1", 00:18:55.048 "uuid": "8bc77a3f-5b6f-481d-b4b5-cc9f3920bd35", 00:18:55.048 "strip_size_kb": 0, 00:18:55.048 "state": "online", 00:18:55.048 "raid_level": "raid1", 00:18:55.048 "superblock": true, 00:18:55.048 "num_base_bdevs": 2, 00:18:55.048 "num_base_bdevs_discovered": 1, 00:18:55.048 "num_base_bdevs_operational": 1, 00:18:55.048 "base_bdevs_list": [ 00:18:55.048 { 00:18:55.048 "name": null, 00:18:55.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.048 "is_configured": false, 00:18:55.048 "data_offset": 2048, 00:18:55.048 "data_size": 63488 00:18:55.048 }, 00:18:55.048 { 00:18:55.048 "name": "BaseBdev2", 00:18:55.048 "uuid": "d784b2be-b4c7-507a-99ac-7123d52d2237", 00:18:55.048 "is_configured": true, 00:18:55.048 "data_offset": 2048, 00:18:55.048 "data_size": 63488 00:18:55.048 } 00:18:55.048 ] 00:18:55.048 }' 00:18:55.048 00:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:55.048 00:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.307 00:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:55.565 [2024-07-25 00:45:18.074125] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:55.565 [2024-07-25 00:45:18.074172] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:55.566 [2024-07-25 00:45:18.076594] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:55.566 [2024-07-25 00:45:18.076635] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:55.566 [2024-07-25 00:45:18.076680] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:55.566 [2024-07-25 00:45:18.076689] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:18:55.566 0 00:18:55.566 00:45:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 126087 00:18:55.566 00:45:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 126087 ']' 00:18:55.566 00:45:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 126087 00:18:55.566 00:45:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:18:55.566 00:45:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:55.566 00:45:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 126087 00:18:55.566 00:45:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:55.566 killing process with pid 126087 00:18:55.566 00:45:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:55.566 00:45:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 126087' 00:18:55.566 00:45:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 126087 00:18:55.566 [2024-07-25 00:45:18.129972] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:55.566 00:45:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 126087 00:18:55.825 [2024-07-25 
00:45:18.251510] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:57.203 00:45:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.w6aRSbxWw9 00:18:57.203 00:45:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:57.203 00:45:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:57.203 00:45:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:18:57.203 00:45:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:18:57.203 00:45:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:57.203 00:45:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:57.203 00:45:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:18:57.203 00:18:57.203 real 0m6.876s 00:18:57.203 user 0m9.949s 00:18:57.203 sys 0m0.885s 00:18:57.203 00:45:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:57.203 00:45:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.203 ************************************ 00:18:57.203 END TEST raid_write_error_test 00:18:57.203 ************************************ 00:18:57.203 00:45:19 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:18:57.203 00:45:19 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:18:57.203 00:45:19 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:18:57.203 00:45:19 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:57.203 00:45:19 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:57.203 00:45:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:57.203 ************************************ 00:18:57.203 START TEST raid_state_function_test 00:18:57.203 ************************************ 00:18:57.203 00:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 false 00:18:57.203 00:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:18:57.203 00:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:18:57.203 00:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:18:57.203 00:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:57.203 00:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:57.203 00:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:57.204 00:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:18:57.204 00:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:57.204 00:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:57.204 00:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:18:57.204 00:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:57.204 00:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:57.204 00:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 
00:18:57.204 00:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:57.204 00:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:57.204 00:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:57.204 00:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:57.204 00:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:57.204 00:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:57.204 00:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:57.204 00:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:57.204 00:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:18:57.204 00:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:18:57.204 00:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:18:57.204 00:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:18:57.204 00:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:18:57.204 00:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=126277 00:18:57.204 Process raid pid: 126277 00:18:57.204 00:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 126277' 00:18:57.204 00:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 126277 /var/tmp/spdk-raid.sock 00:18:57.204 00:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 126277 ']' 00:18:57.204 00:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:57.204 00:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:57.204 00:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:57.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:57.204 00:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:57.204 00:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:57.204 00:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.204 [2024-07-25 00:45:19.713141] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:18:57.204 [2024-07-25 00:45:19.713372] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.464 [2024-07-25 00:45:19.894656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.464 [2024-07-25 00:45:20.100495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.723 [2024-07-25 00:45:20.304211] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:58.291 00:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:58.291 00:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:18:58.291 00:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:58.291 [2024-07-25 00:45:20.884073] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:58.291 [2024-07-25 00:45:20.884180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:58.291 [2024-07-25 00:45:20.884191] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:58.291 [2024-07-25 00:45:20.884217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:58.291 [2024-07-25 00:45:20.884224] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:58.291 [2024-07-25 00:45:20.884240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:58.291 00:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:58.291 00:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:58.291 00:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:58.291 00:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:58.291 00:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:58.291 00:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:58.291 00:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:58.291 00:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:58.291 00:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:58.291 00:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:58.291 00:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.291 00:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.550 00:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:58.550 "name": "Existed_Raid", 00:18:58.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.550 
"strip_size_kb": 64, 00:18:58.550 "state": "configuring", 00:18:58.550 "raid_level": "raid0", 00:18:58.550 "superblock": false, 00:18:58.550 "num_base_bdevs": 3, 00:18:58.550 "num_base_bdevs_discovered": 0, 00:18:58.550 "num_base_bdevs_operational": 3, 00:18:58.550 "base_bdevs_list": [ 00:18:58.550 { 00:18:58.550 "name": "BaseBdev1", 00:18:58.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.550 "is_configured": false, 00:18:58.550 "data_offset": 0, 00:18:58.550 "data_size": 0 00:18:58.550 }, 00:18:58.550 { 00:18:58.550 "name": "BaseBdev2", 00:18:58.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.550 "is_configured": false, 00:18:58.550 "data_offset": 0, 00:18:58.550 "data_size": 0 00:18:58.550 }, 00:18:58.550 { 00:18:58.550 "name": "BaseBdev3", 00:18:58.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.550 "is_configured": false, 00:18:58.550 "data_offset": 0, 00:18:58.550 "data_size": 0 00:18:58.550 } 00:18:58.550 ] 00:18:58.550 }' 00:18:58.550 00:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:58.550 00:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.119 00:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:59.378 [2024-07-25 00:45:21.852115] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:59.378 [2024-07-25 00:45:21.852156] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:59.378 00:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:59.638 [2024-07-25 00:45:22.108150] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:59.638 [2024-07-25 00:45:22.108237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:59.638 [2024-07-25 00:45:22.108247] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:59.638 [2024-07-25 00:45:22.108264] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:59.638 [2024-07-25 00:45:22.108271] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:59.638 [2024-07-25 00:45:22.108292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:59.638 00:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:59.897 [2024-07-25 00:45:22.335784] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:59.897 BaseBdev1 00:18:59.897 00:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:59.897 00:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:59.897 00:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:59.897 00:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:59.897 00:45:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:59.897 00:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:59.897 00:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:59.897 00:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:00.157 [ 00:19:00.157 { 00:19:00.157 "name": "BaseBdev1", 00:19:00.157 "aliases": [ 00:19:00.157 "df37895b-af30-4f93-a7f3-c5fcfd5a9573" 00:19:00.157 ], 00:19:00.157 "product_name": "Malloc disk", 00:19:00.157 "block_size": 512, 00:19:00.157 "num_blocks": 65536, 00:19:00.157 "uuid": "df37895b-af30-4f93-a7f3-c5fcfd5a9573", 00:19:00.157 "assigned_rate_limits": { 00:19:00.157 "rw_ios_per_sec": 0, 00:19:00.157 "rw_mbytes_per_sec": 0, 00:19:00.157 "r_mbytes_per_sec": 0, 00:19:00.157 "w_mbytes_per_sec": 0 00:19:00.157 }, 00:19:00.157 "claimed": true, 00:19:00.157 "claim_type": "exclusive_write", 00:19:00.157 "zoned": false, 00:19:00.157 "supported_io_types": { 00:19:00.157 "read": true, 00:19:00.157 "write": true, 00:19:00.157 "unmap": true, 00:19:00.157 "flush": true, 00:19:00.157 "reset": true, 00:19:00.157 "nvme_admin": false, 00:19:00.157 "nvme_io": false, 00:19:00.157 "nvme_io_md": false, 00:19:00.157 "write_zeroes": true, 00:19:00.157 "zcopy": true, 00:19:00.157 "get_zone_info": false, 00:19:00.157 "zone_management": false, 00:19:00.157 "zone_append": false, 00:19:00.157 "compare": false, 00:19:00.157 "compare_and_write": false, 00:19:00.157 "abort": true, 00:19:00.157 "seek_hole": false, 00:19:00.157 "seek_data": false, 00:19:00.157 "copy": true, 00:19:00.157 "nvme_iov_md": false 00:19:00.157 }, 00:19:00.157 "memory_domains": [ 00:19:00.157 { 00:19:00.157 "dma_device_id": "system", 00:19:00.157 "dma_device_type": 1 00:19:00.157 }, 00:19:00.157 { 00:19:00.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.157 "dma_device_type": 2 00:19:00.157 } 00:19:00.157 ], 00:19:00.157 "driver_specific": {} 00:19:00.157 } 00:19:00.157 ] 00:19:00.157 00:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:00.157 00:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:00.157 00:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:00.157 00:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:00.157 00:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:00.157 00:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:00.157 00:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:00.157 00:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:00.157 00:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:00.157 00:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:00.157 00:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:00.157 00:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.157 00:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.416 00:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:00.416 "name": "Existed_Raid", 00:19:00.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.416 "strip_size_kb": 64, 00:19:00.416 "state": "configuring", 00:19:00.416 "raid_level": "raid0", 00:19:00.416 "superblock": false, 00:19:00.416 "num_base_bdevs": 3, 00:19:00.416 "num_base_bdevs_discovered": 1, 00:19:00.416 "num_base_bdevs_operational": 3, 00:19:00.416 "base_bdevs_list": [ 00:19:00.416 { 00:19:00.416 "name": "BaseBdev1", 00:19:00.416 "uuid": "df37895b-af30-4f93-a7f3-c5fcfd5a9573", 00:19:00.416 "is_configured": true, 00:19:00.416 "data_offset": 0, 00:19:00.416 "data_size": 65536 00:19:00.416 }, 00:19:00.416 { 00:19:00.416 "name": "BaseBdev2", 00:19:00.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.417 "is_configured": false, 00:19:00.417 "data_offset": 0, 00:19:00.417 "data_size": 0 00:19:00.417 }, 00:19:00.417 { 00:19:00.417 "name": "BaseBdev3", 00:19:00.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.417 "is_configured": false, 00:19:00.417 "data_offset": 0, 00:19:00.417 "data_size": 0 00:19:00.417 } 00:19:00.417 ] 00:19:00.417 }' 00:19:00.417 00:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:00.417 00:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.983 00:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:01.241 [2024-07-25 00:45:23.716026] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:01.241 [2024-07-25 00:45:23.716093] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:19:01.241 00:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:01.500 [2024-07-25 00:45:23.900110] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:01.500 [2024-07-25 00:45:23.902095] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:01.500 [2024-07-25 00:45:23.902187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:01.500 [2024-07-25 00:45:23.902198] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:01.500 [2024-07-25 00:45:23.902245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:01.500 00:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:01.500 00:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:01.500 00:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:01.500 00:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:01.500 00:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- 
# local expected_state=configuring 00:19:01.500 00:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:01.500 00:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:01.500 00:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:01.500 00:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:01.500 00:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:01.500 00:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:01.500 00:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:01.500 00:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.500 00:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.500 00:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:01.500 "name": "Existed_Raid", 00:19:01.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.500 "strip_size_kb": 64, 00:19:01.500 "state": "configuring", 00:19:01.500 "raid_level": "raid0", 00:19:01.500 "superblock": false, 00:19:01.500 "num_base_bdevs": 3, 00:19:01.500 "num_base_bdevs_discovered": 1, 00:19:01.500 "num_base_bdevs_operational": 3, 00:19:01.500 "base_bdevs_list": [ 00:19:01.500 { 00:19:01.500 "name": "BaseBdev1", 00:19:01.500 "uuid": "df37895b-af30-4f93-a7f3-c5fcfd5a9573", 00:19:01.500 "is_configured": true, 00:19:01.500 "data_offset": 0, 00:19:01.500 "data_size": 65536 00:19:01.500 }, 00:19:01.500 { 00:19:01.500 "name": "BaseBdev2", 00:19:01.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.500 "is_configured": false, 00:19:01.500 "data_offset": 0, 00:19:01.500 "data_size": 0 00:19:01.500 }, 00:19:01.500 { 00:19:01.500 "name": "BaseBdev3", 00:19:01.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.500 "is_configured": false, 00:19:01.500 "data_offset": 0, 00:19:01.500 "data_size": 0 00:19:01.500 } 00:19:01.500 ] 00:19:01.500 }' 00:19:01.500 00:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:01.500 00:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.067 00:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:02.326 [2024-07-25 00:45:24.963903] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:02.326 BaseBdev2 00:19:02.584 00:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:02.584 00:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:02.584 00:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:02.584 00:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:02.584 00:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:02.584 00:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:02.584 
00:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:02.844 00:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:02.844 [ 00:19:02.844 { 00:19:02.844 "name": "BaseBdev2", 00:19:02.844 "aliases": [ 00:19:02.844 "d3d7b87e-a402-4ecc-9199-a3aeeca10de3" 00:19:02.844 ], 00:19:02.844 "product_name": "Malloc disk", 00:19:02.844 "block_size": 512, 00:19:02.844 "num_blocks": 65536, 00:19:02.844 "uuid": "d3d7b87e-a402-4ecc-9199-a3aeeca10de3", 00:19:02.844 "assigned_rate_limits": { 00:19:02.844 "rw_ios_per_sec": 0, 00:19:02.844 "rw_mbytes_per_sec": 0, 00:19:02.844 "r_mbytes_per_sec": 0, 00:19:02.844 "w_mbytes_per_sec": 0 00:19:02.844 }, 00:19:02.844 "claimed": true, 00:19:02.844 "claim_type": "exclusive_write", 00:19:02.844 "zoned": false, 00:19:02.844 "supported_io_types": { 00:19:02.844 "read": true, 00:19:02.844 "write": true, 00:19:02.844 "unmap": true, 00:19:02.844 "flush": true, 00:19:02.844 "reset": true, 00:19:02.844 "nvme_admin": false, 00:19:02.844 "nvme_io": false, 00:19:02.844 "nvme_io_md": false, 00:19:02.844 "write_zeroes": true, 00:19:02.844 "zcopy": true, 00:19:02.844 "get_zone_info": false, 00:19:02.844 "zone_management": false, 00:19:02.844 "zone_append": false, 00:19:02.844 "compare": false, 00:19:02.844 "compare_and_write": false, 00:19:02.844 "abort": true, 00:19:02.844 "seek_hole": false, 00:19:02.844 "seek_data": false, 00:19:02.844 "copy": true, 00:19:02.844 "nvme_iov_md": false 00:19:02.844 }, 00:19:02.844 "memory_domains": [ 00:19:02.844 { 00:19:02.844 "dma_device_id": "system", 00:19:02.844 "dma_device_type": 1 00:19:02.844 }, 00:19:02.844 { 00:19:02.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.844 "dma_device_type": 2 00:19:02.844 } 00:19:02.844 ], 00:19:02.844 "driver_specific": {} 00:19:02.844 } 00:19:02.844 ] 00:19:02.844 00:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:02.844 00:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:02.844 00:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:02.844 00:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:02.844 00:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:02.844 00:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:02.844 00:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:02.844 00:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:02.844 00:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:02.844 00:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:02.844 00:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:02.844 00:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:02.844 00:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:02.844 00:45:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:02.844 00:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.103 00:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:03.103 "name": "Existed_Raid", 00:19:03.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.103 "strip_size_kb": 64, 00:19:03.103 "state": "configuring", 00:19:03.103 "raid_level": "raid0", 00:19:03.103 "superblock": false, 00:19:03.103 "num_base_bdevs": 3, 00:19:03.103 "num_base_bdevs_discovered": 2, 00:19:03.103 "num_base_bdevs_operational": 3, 00:19:03.103 "base_bdevs_list": [ 00:19:03.103 { 00:19:03.103 "name": "BaseBdev1", 00:19:03.103 "uuid": "df37895b-af30-4f93-a7f3-c5fcfd5a9573", 00:19:03.103 "is_configured": true, 00:19:03.103 "data_offset": 0, 00:19:03.103 "data_size": 65536 00:19:03.103 }, 00:19:03.103 { 00:19:03.103 "name": "BaseBdev2", 00:19:03.103 "uuid": "d3d7b87e-a402-4ecc-9199-a3aeeca10de3", 00:19:03.103 "is_configured": true, 00:19:03.103 "data_offset": 0, 00:19:03.103 "data_size": 65536 00:19:03.103 }, 00:19:03.103 { 00:19:03.103 "name": "BaseBdev3", 00:19:03.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.103 "is_configured": false, 00:19:03.103 "data_offset": 0, 00:19:03.103 "data_size": 0 00:19:03.103 } 00:19:03.103 ] 00:19:03.103 }' 00:19:03.103 00:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:03.103 00:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.671 00:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:03.671 [2024-07-25 00:45:26.321558] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:03.671 [2024-07-25 00:45:26.321605] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:19:03.671 [2024-07-25 00:45:26.321629] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:03.671 [2024-07-25 00:45:26.321748] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:19:03.671 [2024-07-25 00:45:26.322032] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:19:03.671 [2024-07-25 00:45:26.322058] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:19:03.671 [2024-07-25 00:45:26.322326] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.671 BaseBdev3 00:19:03.930 00:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:03.930 00:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:03.930 00:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:03.930 00:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:03.930 00:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:03.930 00:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:03.930 00:45:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:03.930 00:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:04.189 [ 00:19:04.189 { 00:19:04.189 "name": "BaseBdev3", 00:19:04.189 "aliases": [ 00:19:04.189 "e0aec085-2309-41fc-a200-519715542c03" 00:19:04.189 ], 00:19:04.189 "product_name": "Malloc disk", 00:19:04.189 "block_size": 512, 00:19:04.189 "num_blocks": 65536, 00:19:04.189 "uuid": "e0aec085-2309-41fc-a200-519715542c03", 00:19:04.189 "assigned_rate_limits": { 00:19:04.189 "rw_ios_per_sec": 0, 00:19:04.189 "rw_mbytes_per_sec": 0, 00:19:04.189 "r_mbytes_per_sec": 0, 00:19:04.189 "w_mbytes_per_sec": 0 00:19:04.189 }, 00:19:04.189 "claimed": true, 00:19:04.189 "claim_type": "exclusive_write", 00:19:04.189 "zoned": false, 00:19:04.189 "supported_io_types": { 00:19:04.189 "read": true, 00:19:04.189 "write": true, 00:19:04.189 "unmap": true, 00:19:04.189 "flush": true, 00:19:04.189 "reset": true, 00:19:04.189 "nvme_admin": false, 00:19:04.189 "nvme_io": false, 00:19:04.189 "nvme_io_md": false, 00:19:04.189 "write_zeroes": true, 00:19:04.189 "zcopy": true, 00:19:04.189 "get_zone_info": false, 00:19:04.189 "zone_management": false, 00:19:04.189 "zone_append": false, 00:19:04.189 "compare": false, 00:19:04.189 "compare_and_write": false, 00:19:04.189 "abort": true, 00:19:04.189 "seek_hole": false, 00:19:04.189 "seek_data": false, 00:19:04.189 "copy": true, 00:19:04.189 "nvme_iov_md": false 00:19:04.189 }, 00:19:04.189 "memory_domains": [ 00:19:04.189 { 00:19:04.189 "dma_device_id": "system", 00:19:04.189 "dma_device_type": 1 00:19:04.189 }, 00:19:04.189 { 00:19:04.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.189 "dma_device_type": 2 00:19:04.189 } 00:19:04.189 ], 00:19:04.189 "driver_specific": {} 00:19:04.189 } 00:19:04.189 ] 00:19:04.189 00:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:04.189 00:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:04.189 00:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:04.189 00:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:19:04.189 00:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:04.189 00:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:04.189 00:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:04.189 00:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:04.189 00:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:04.189 00:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:04.189 00:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:04.189 00:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:04.189 00:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:04.189 00:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.189 00:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:04.449 00:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:04.449 "name": "Existed_Raid", 00:19:04.449 "uuid": "48e3cad4-a2e2-47cb-aba5-3eef2d0c52c9", 00:19:04.449 "strip_size_kb": 64, 00:19:04.449 "state": "online", 00:19:04.449 "raid_level": "raid0", 00:19:04.449 "superblock": false, 00:19:04.449 "num_base_bdevs": 3, 00:19:04.449 "num_base_bdevs_discovered": 3, 00:19:04.449 "num_base_bdevs_operational": 3, 00:19:04.449 "base_bdevs_list": [ 00:19:04.449 { 00:19:04.449 "name": "BaseBdev1", 00:19:04.449 "uuid": "df37895b-af30-4f93-a7f3-c5fcfd5a9573", 00:19:04.449 "is_configured": true, 00:19:04.449 "data_offset": 0, 00:19:04.449 "data_size": 65536 00:19:04.449 }, 00:19:04.449 { 00:19:04.449 "name": "BaseBdev2", 00:19:04.449 "uuid": "d3d7b87e-a402-4ecc-9199-a3aeeca10de3", 00:19:04.449 "is_configured": true, 00:19:04.449 "data_offset": 0, 00:19:04.449 "data_size": 65536 00:19:04.449 }, 00:19:04.449 { 00:19:04.449 "name": "BaseBdev3", 00:19:04.449 "uuid": "e0aec085-2309-41fc-a200-519715542c03", 00:19:04.449 "is_configured": true, 00:19:04.449 "data_offset": 0, 00:19:04.449 "data_size": 65536 00:19:04.449 } 00:19:04.449 ] 00:19:04.449 }' 00:19:04.449 00:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:04.449 00:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.017 00:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:05.017 00:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:05.017 00:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:05.017 00:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:05.017 00:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:05.017 00:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:05.017 00:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:05.017 00:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:05.277 [2024-07-25 00:45:27.802128] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:05.277 00:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:05.277 "name": "Existed_Raid", 00:19:05.277 "aliases": [ 00:19:05.277 "48e3cad4-a2e2-47cb-aba5-3eef2d0c52c9" 00:19:05.277 ], 00:19:05.277 "product_name": "Raid Volume", 00:19:05.277 "block_size": 512, 00:19:05.277 "num_blocks": 196608, 00:19:05.277 "uuid": "48e3cad4-a2e2-47cb-aba5-3eef2d0c52c9", 00:19:05.277 "assigned_rate_limits": { 00:19:05.277 "rw_ios_per_sec": 0, 00:19:05.277 "rw_mbytes_per_sec": 0, 00:19:05.277 "r_mbytes_per_sec": 0, 00:19:05.277 "w_mbytes_per_sec": 0 00:19:05.277 }, 00:19:05.277 "claimed": false, 00:19:05.277 "zoned": false, 00:19:05.277 "supported_io_types": { 00:19:05.277 "read": true, 00:19:05.277 "write": true, 00:19:05.277 "unmap": true, 00:19:05.277 "flush": true, 00:19:05.277 "reset": true, 
00:19:05.277 "nvme_admin": false, 00:19:05.277 "nvme_io": false, 00:19:05.277 "nvme_io_md": false, 00:19:05.277 "write_zeroes": true, 00:19:05.277 "zcopy": false, 00:19:05.277 "get_zone_info": false, 00:19:05.277 "zone_management": false, 00:19:05.277 "zone_append": false, 00:19:05.277 "compare": false, 00:19:05.277 "compare_and_write": false, 00:19:05.277 "abort": false, 00:19:05.277 "seek_hole": false, 00:19:05.277 "seek_data": false, 00:19:05.277 "copy": false, 00:19:05.277 "nvme_iov_md": false 00:19:05.277 }, 00:19:05.277 "memory_domains": [ 00:19:05.277 { 00:19:05.277 "dma_device_id": "system", 00:19:05.277 "dma_device_type": 1 00:19:05.277 }, 00:19:05.277 { 00:19:05.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.277 "dma_device_type": 2 00:19:05.277 }, 00:19:05.277 { 00:19:05.277 "dma_device_id": "system", 00:19:05.277 "dma_device_type": 1 00:19:05.277 }, 00:19:05.277 { 00:19:05.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.277 "dma_device_type": 2 00:19:05.277 }, 00:19:05.277 { 00:19:05.277 "dma_device_id": "system", 00:19:05.277 "dma_device_type": 1 00:19:05.277 }, 00:19:05.277 { 00:19:05.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.277 "dma_device_type": 2 00:19:05.277 } 00:19:05.277 ], 00:19:05.277 "driver_specific": { 00:19:05.277 "raid": { 00:19:05.277 "uuid": "48e3cad4-a2e2-47cb-aba5-3eef2d0c52c9", 00:19:05.277 "strip_size_kb": 64, 00:19:05.277 "state": "online", 00:19:05.277 "raid_level": "raid0", 00:19:05.277 "superblock": false, 00:19:05.277 "num_base_bdevs": 3, 00:19:05.277 "num_base_bdevs_discovered": 3, 00:19:05.277 "num_base_bdevs_operational": 3, 00:19:05.277 "base_bdevs_list": [ 00:19:05.277 { 00:19:05.277 "name": "BaseBdev1", 00:19:05.277 "uuid": "df37895b-af30-4f93-a7f3-c5fcfd5a9573", 00:19:05.277 "is_configured": true, 00:19:05.277 "data_offset": 0, 00:19:05.277 "data_size": 65536 00:19:05.277 }, 00:19:05.277 { 00:19:05.277 "name": "BaseBdev2", 00:19:05.277 "uuid": "d3d7b87e-a402-4ecc-9199-a3aeeca10de3", 00:19:05.277 "is_configured": true, 00:19:05.277 "data_offset": 0, 00:19:05.277 "data_size": 65536 00:19:05.277 }, 00:19:05.277 { 00:19:05.277 "name": "BaseBdev3", 00:19:05.277 "uuid": "e0aec085-2309-41fc-a200-519715542c03", 00:19:05.277 "is_configured": true, 00:19:05.277 "data_offset": 0, 00:19:05.277 "data_size": 65536 00:19:05.277 } 00:19:05.277 ] 00:19:05.277 } 00:19:05.277 } 00:19:05.277 }' 00:19:05.277 00:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:05.277 00:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:05.277 BaseBdev2 00:19:05.277 BaseBdev3' 00:19:05.277 00:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:05.277 00:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:05.277 00:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:05.536 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:05.536 "name": "BaseBdev1", 00:19:05.536 "aliases": [ 00:19:05.536 "df37895b-af30-4f93-a7f3-c5fcfd5a9573" 00:19:05.536 ], 00:19:05.536 "product_name": "Malloc disk", 00:19:05.536 "block_size": 512, 00:19:05.536 "num_blocks": 65536, 00:19:05.536 "uuid": "df37895b-af30-4f93-a7f3-c5fcfd5a9573", 00:19:05.536 
"assigned_rate_limits": { 00:19:05.536 "rw_ios_per_sec": 0, 00:19:05.536 "rw_mbytes_per_sec": 0, 00:19:05.536 "r_mbytes_per_sec": 0, 00:19:05.536 "w_mbytes_per_sec": 0 00:19:05.536 }, 00:19:05.536 "claimed": true, 00:19:05.536 "claim_type": "exclusive_write", 00:19:05.536 "zoned": false, 00:19:05.536 "supported_io_types": { 00:19:05.536 "read": true, 00:19:05.536 "write": true, 00:19:05.536 "unmap": true, 00:19:05.536 "flush": true, 00:19:05.536 "reset": true, 00:19:05.536 "nvme_admin": false, 00:19:05.536 "nvme_io": false, 00:19:05.536 "nvme_io_md": false, 00:19:05.536 "write_zeroes": true, 00:19:05.536 "zcopy": true, 00:19:05.536 "get_zone_info": false, 00:19:05.536 "zone_management": false, 00:19:05.536 "zone_append": false, 00:19:05.536 "compare": false, 00:19:05.536 "compare_and_write": false, 00:19:05.536 "abort": true, 00:19:05.536 "seek_hole": false, 00:19:05.536 "seek_data": false, 00:19:05.536 "copy": true, 00:19:05.536 "nvme_iov_md": false 00:19:05.536 }, 00:19:05.536 "memory_domains": [ 00:19:05.536 { 00:19:05.536 "dma_device_id": "system", 00:19:05.536 "dma_device_type": 1 00:19:05.536 }, 00:19:05.536 { 00:19:05.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.536 "dma_device_type": 2 00:19:05.536 } 00:19:05.536 ], 00:19:05.536 "driver_specific": {} 00:19:05.536 }' 00:19:05.536 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:05.536 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:05.536 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:05.536 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:05.536 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:05.796 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:05.796 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:05.796 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:05.796 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:05.796 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:05.796 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:05.796 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:05.796 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:05.796 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:05.796 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:06.055 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:06.055 "name": "BaseBdev2", 00:19:06.055 "aliases": [ 00:19:06.055 "d3d7b87e-a402-4ecc-9199-a3aeeca10de3" 00:19:06.055 ], 00:19:06.055 "product_name": "Malloc disk", 00:19:06.055 "block_size": 512, 00:19:06.055 "num_blocks": 65536, 00:19:06.055 "uuid": "d3d7b87e-a402-4ecc-9199-a3aeeca10de3", 00:19:06.055 "assigned_rate_limits": { 00:19:06.055 "rw_ios_per_sec": 0, 00:19:06.055 "rw_mbytes_per_sec": 0, 00:19:06.055 "r_mbytes_per_sec": 0, 00:19:06.055 "w_mbytes_per_sec": 0 00:19:06.055 }, 00:19:06.055 
"claimed": true, 00:19:06.055 "claim_type": "exclusive_write", 00:19:06.055 "zoned": false, 00:19:06.055 "supported_io_types": { 00:19:06.055 "read": true, 00:19:06.055 "write": true, 00:19:06.055 "unmap": true, 00:19:06.055 "flush": true, 00:19:06.055 "reset": true, 00:19:06.055 "nvme_admin": false, 00:19:06.055 "nvme_io": false, 00:19:06.055 "nvme_io_md": false, 00:19:06.055 "write_zeroes": true, 00:19:06.055 "zcopy": true, 00:19:06.055 "get_zone_info": false, 00:19:06.055 "zone_management": false, 00:19:06.055 "zone_append": false, 00:19:06.055 "compare": false, 00:19:06.055 "compare_and_write": false, 00:19:06.055 "abort": true, 00:19:06.055 "seek_hole": false, 00:19:06.055 "seek_data": false, 00:19:06.055 "copy": true, 00:19:06.055 "nvme_iov_md": false 00:19:06.055 }, 00:19:06.055 "memory_domains": [ 00:19:06.055 { 00:19:06.055 "dma_device_id": "system", 00:19:06.055 "dma_device_type": 1 00:19:06.055 }, 00:19:06.055 { 00:19:06.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.055 "dma_device_type": 2 00:19:06.055 } 00:19:06.055 ], 00:19:06.055 "driver_specific": {} 00:19:06.055 }' 00:19:06.055 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:06.055 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:06.314 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:06.314 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:06.314 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:06.314 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:06.314 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:06.314 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:06.314 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:06.314 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:06.314 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:06.574 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:06.574 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:06.574 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:06.574 00:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:06.832 00:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:06.832 "name": "BaseBdev3", 00:19:06.832 "aliases": [ 00:19:06.832 "e0aec085-2309-41fc-a200-519715542c03" 00:19:06.832 ], 00:19:06.832 "product_name": "Malloc disk", 00:19:06.832 "block_size": 512, 00:19:06.832 "num_blocks": 65536, 00:19:06.832 "uuid": "e0aec085-2309-41fc-a200-519715542c03", 00:19:06.832 "assigned_rate_limits": { 00:19:06.832 "rw_ios_per_sec": 0, 00:19:06.832 "rw_mbytes_per_sec": 0, 00:19:06.832 "r_mbytes_per_sec": 0, 00:19:06.832 "w_mbytes_per_sec": 0 00:19:06.832 }, 00:19:06.832 "claimed": true, 00:19:06.833 "claim_type": "exclusive_write", 00:19:06.833 "zoned": false, 00:19:06.833 "supported_io_types": { 00:19:06.833 "read": true, 00:19:06.833 "write": true, 00:19:06.833 
"unmap": true, 00:19:06.833 "flush": true, 00:19:06.833 "reset": true, 00:19:06.833 "nvme_admin": false, 00:19:06.833 "nvme_io": false, 00:19:06.833 "nvme_io_md": false, 00:19:06.833 "write_zeroes": true, 00:19:06.833 "zcopy": true, 00:19:06.833 "get_zone_info": false, 00:19:06.833 "zone_management": false, 00:19:06.833 "zone_append": false, 00:19:06.833 "compare": false, 00:19:06.833 "compare_and_write": false, 00:19:06.833 "abort": true, 00:19:06.833 "seek_hole": false, 00:19:06.833 "seek_data": false, 00:19:06.833 "copy": true, 00:19:06.833 "nvme_iov_md": false 00:19:06.833 }, 00:19:06.833 "memory_domains": [ 00:19:06.833 { 00:19:06.833 "dma_device_id": "system", 00:19:06.833 "dma_device_type": 1 00:19:06.833 }, 00:19:06.833 { 00:19:06.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.833 "dma_device_type": 2 00:19:06.833 } 00:19:06.833 ], 00:19:06.833 "driver_specific": {} 00:19:06.833 }' 00:19:06.833 00:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:06.833 00:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:06.833 00:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:06.833 00:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:06.833 00:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:06.833 00:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:06.833 00:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:06.833 00:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:06.833 00:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:06.833 00:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:07.092 00:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:07.092 00:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:07.092 00:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:07.092 [2024-07-25 00:45:29.698616] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:07.092 [2024-07-25 00:45:29.698651] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:07.092 [2024-07-25 00:45:29.698711] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:07.351 00:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:07.351 00:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:19:07.351 00:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:07.351 00:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:07.351 00:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:19:07.351 00:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:19:07.351 00:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:07.351 00:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- 
# local expected_state=offline 00:19:07.351 00:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:07.351 00:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:07.351 00:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:07.351 00:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:07.351 00:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:07.351 00:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:07.351 00:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:07.351 00:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:07.351 00:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.610 00:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:07.610 "name": "Existed_Raid", 00:19:07.610 "uuid": "48e3cad4-a2e2-47cb-aba5-3eef2d0c52c9", 00:19:07.610 "strip_size_kb": 64, 00:19:07.610 "state": "offline", 00:19:07.610 "raid_level": "raid0", 00:19:07.610 "superblock": false, 00:19:07.610 "num_base_bdevs": 3, 00:19:07.610 "num_base_bdevs_discovered": 2, 00:19:07.610 "num_base_bdevs_operational": 2, 00:19:07.610 "base_bdevs_list": [ 00:19:07.610 { 00:19:07.610 "name": null, 00:19:07.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.610 "is_configured": false, 00:19:07.610 "data_offset": 0, 00:19:07.610 "data_size": 65536 00:19:07.610 }, 00:19:07.610 { 00:19:07.610 "name": "BaseBdev2", 00:19:07.610 "uuid": "d3d7b87e-a402-4ecc-9199-a3aeeca10de3", 00:19:07.610 "is_configured": true, 00:19:07.610 "data_offset": 0, 00:19:07.610 "data_size": 65536 00:19:07.610 }, 00:19:07.610 { 00:19:07.610 "name": "BaseBdev3", 00:19:07.610 "uuid": "e0aec085-2309-41fc-a200-519715542c03", 00:19:07.610 "is_configured": true, 00:19:07.610 "data_offset": 0, 00:19:07.610 "data_size": 65536 00:19:07.610 } 00:19:07.610 ] 00:19:07.610 }' 00:19:07.610 00:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:07.610 00:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.178 00:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:08.178 00:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:08.178 00:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.178 00:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:08.462 00:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:08.462 00:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:08.462 00:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:08.737 [2024-07-25 00:45:31.162613] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
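At this point in the run the test has deleted base bdevs out from under the raid0 volume, and because raid0 carries no redundancy (has_redundancy returns failure for raid0) the expected state of Existed_Raid drops from "online" to "offline". A minimal sketch of that sequence, assuming an SPDK target already listening on /var/tmp/spdk-raid.sock as in this run, and using only rpc.py calls that appear verbatim in the log above:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for b in BaseBdev1 BaseBdev2 BaseBdev3; do
      $RPC bdev_malloc_create 32 512 -b "$b"        # 32 MiB malloc bdev, 512-byte blocks (65536 blocks, as in the JSON dumps above)
  done
  $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'   # online
  $RPC bdev_malloc_delete BaseBdev1                 # raid0 cannot survive losing a base bdev
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'   # offline

The jq filter on bdev_raid_get_bdevs is the same check verify_raid_bdev_state performs in the transcript that follows.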
00:19:08.737 00:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:08.737 00:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:08.737 00:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.737 00:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:09.001 00:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:09.001 00:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:09.001 00:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:09.001 [2024-07-25 00:45:31.632118] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:09.001 [2024-07-25 00:45:31.632170] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:19:09.260 00:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:09.260 00:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:09.260 00:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.260 00:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:09.519 00:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:09.519 00:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:09.519 00:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:19:09.519 00:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:09.519 00:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:09.520 00:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:09.520 BaseBdev2 00:19:09.520 00:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:09.520 00:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:09.520 00:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:09.520 00:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:09.520 00:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:09.520 00:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:09.520 00:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:09.779 00:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:10.038 [ 00:19:10.038 { 00:19:10.038 "name": "BaseBdev2", 
00:19:10.038 "aliases": [ 00:19:10.038 "52271cab-e78e-4cb7-9556-94e2b88236fc" 00:19:10.038 ], 00:19:10.038 "product_name": "Malloc disk", 00:19:10.038 "block_size": 512, 00:19:10.038 "num_blocks": 65536, 00:19:10.038 "uuid": "52271cab-e78e-4cb7-9556-94e2b88236fc", 00:19:10.038 "assigned_rate_limits": { 00:19:10.038 "rw_ios_per_sec": 0, 00:19:10.038 "rw_mbytes_per_sec": 0, 00:19:10.038 "r_mbytes_per_sec": 0, 00:19:10.038 "w_mbytes_per_sec": 0 00:19:10.038 }, 00:19:10.038 "claimed": false, 00:19:10.038 "zoned": false, 00:19:10.038 "supported_io_types": { 00:19:10.038 "read": true, 00:19:10.038 "write": true, 00:19:10.038 "unmap": true, 00:19:10.038 "flush": true, 00:19:10.038 "reset": true, 00:19:10.038 "nvme_admin": false, 00:19:10.038 "nvme_io": false, 00:19:10.038 "nvme_io_md": false, 00:19:10.038 "write_zeroes": true, 00:19:10.038 "zcopy": true, 00:19:10.038 "get_zone_info": false, 00:19:10.038 "zone_management": false, 00:19:10.038 "zone_append": false, 00:19:10.038 "compare": false, 00:19:10.038 "compare_and_write": false, 00:19:10.038 "abort": true, 00:19:10.038 "seek_hole": false, 00:19:10.038 "seek_data": false, 00:19:10.038 "copy": true, 00:19:10.038 "nvme_iov_md": false 00:19:10.038 }, 00:19:10.038 "memory_domains": [ 00:19:10.038 { 00:19:10.038 "dma_device_id": "system", 00:19:10.038 "dma_device_type": 1 00:19:10.038 }, 00:19:10.038 { 00:19:10.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.038 "dma_device_type": 2 00:19:10.038 } 00:19:10.038 ], 00:19:10.038 "driver_specific": {} 00:19:10.038 } 00:19:10.038 ] 00:19:10.038 00:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:10.038 00:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:10.038 00:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:10.038 00:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:10.297 BaseBdev3 00:19:10.297 00:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:10.297 00:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:10.297 00:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:10.297 00:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:10.297 00:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:10.297 00:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:10.297 00:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:10.297 00:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:10.556 [ 00:19:10.556 { 00:19:10.556 "name": "BaseBdev3", 00:19:10.556 "aliases": [ 00:19:10.556 "efddd3a4-95aa-4633-bc31-b32617e16263" 00:19:10.556 ], 00:19:10.556 "product_name": "Malloc disk", 00:19:10.556 "block_size": 512, 00:19:10.556 "num_blocks": 65536, 00:19:10.556 "uuid": "efddd3a4-95aa-4633-bc31-b32617e16263", 00:19:10.556 "assigned_rate_limits": { 00:19:10.556 "rw_ios_per_sec": 0, 
00:19:10.556 "rw_mbytes_per_sec": 0, 00:19:10.556 "r_mbytes_per_sec": 0, 00:19:10.556 "w_mbytes_per_sec": 0 00:19:10.556 }, 00:19:10.556 "claimed": false, 00:19:10.556 "zoned": false, 00:19:10.556 "supported_io_types": { 00:19:10.556 "read": true, 00:19:10.556 "write": true, 00:19:10.556 "unmap": true, 00:19:10.556 "flush": true, 00:19:10.556 "reset": true, 00:19:10.556 "nvme_admin": false, 00:19:10.556 "nvme_io": false, 00:19:10.556 "nvme_io_md": false, 00:19:10.556 "write_zeroes": true, 00:19:10.556 "zcopy": true, 00:19:10.556 "get_zone_info": false, 00:19:10.556 "zone_management": false, 00:19:10.556 "zone_append": false, 00:19:10.556 "compare": false, 00:19:10.556 "compare_and_write": false, 00:19:10.556 "abort": true, 00:19:10.556 "seek_hole": false, 00:19:10.556 "seek_data": false, 00:19:10.556 "copy": true, 00:19:10.556 "nvme_iov_md": false 00:19:10.556 }, 00:19:10.556 "memory_domains": [ 00:19:10.556 { 00:19:10.556 "dma_device_id": "system", 00:19:10.556 "dma_device_type": 1 00:19:10.556 }, 00:19:10.556 { 00:19:10.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.556 "dma_device_type": 2 00:19:10.556 } 00:19:10.556 ], 00:19:10.556 "driver_specific": {} 00:19:10.556 } 00:19:10.556 ] 00:19:10.556 00:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:10.556 00:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:10.556 00:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:10.556 00:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:10.815 [2024-07-25 00:45:33.240632] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:10.815 [2024-07-25 00:45:33.240693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:10.815 [2024-07-25 00:45:33.240734] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:10.815 [2024-07-25 00:45:33.242567] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:10.815 00:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:10.815 00:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:10.815 00:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:10.815 00:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:10.815 00:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:10.815 00:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:10.815 00:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:10.815 00:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:10.815 00:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:10.815 00:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:10.815 00:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.815 00:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:11.074 00:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:11.074 "name": "Existed_Raid", 00:19:11.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.074 "strip_size_kb": 64, 00:19:11.074 "state": "configuring", 00:19:11.074 "raid_level": "raid0", 00:19:11.074 "superblock": false, 00:19:11.074 "num_base_bdevs": 3, 00:19:11.074 "num_base_bdevs_discovered": 2, 00:19:11.074 "num_base_bdevs_operational": 3, 00:19:11.074 "base_bdevs_list": [ 00:19:11.074 { 00:19:11.074 "name": "BaseBdev1", 00:19:11.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.074 "is_configured": false, 00:19:11.074 "data_offset": 0, 00:19:11.074 "data_size": 0 00:19:11.074 }, 00:19:11.074 { 00:19:11.074 "name": "BaseBdev2", 00:19:11.074 "uuid": "52271cab-e78e-4cb7-9556-94e2b88236fc", 00:19:11.074 "is_configured": true, 00:19:11.074 "data_offset": 0, 00:19:11.074 "data_size": 65536 00:19:11.074 }, 00:19:11.074 { 00:19:11.074 "name": "BaseBdev3", 00:19:11.074 "uuid": "efddd3a4-95aa-4633-bc31-b32617e16263", 00:19:11.074 "is_configured": true, 00:19:11.074 "data_offset": 0, 00:19:11.074 "data_size": 65536 00:19:11.074 } 00:19:11.074 ] 00:19:11.074 }' 00:19:11.074 00:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:11.074 00:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.640 00:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:11.640 [2024-07-25 00:45:34.236783] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:11.640 00:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:11.640 00:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:11.640 00:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:11.640 00:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:11.640 00:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:11.641 00:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:11.641 00:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:11.641 00:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:11.641 00:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:11.641 00:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:11.641 00:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:11.641 00:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.900 00:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:11.900 "name": "Existed_Raid", 
00:19:11.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.900 "strip_size_kb": 64, 00:19:11.900 "state": "configuring", 00:19:11.900 "raid_level": "raid0", 00:19:11.900 "superblock": false, 00:19:11.900 "num_base_bdevs": 3, 00:19:11.900 "num_base_bdevs_discovered": 1, 00:19:11.900 "num_base_bdevs_operational": 3, 00:19:11.900 "base_bdevs_list": [ 00:19:11.900 { 00:19:11.900 "name": "BaseBdev1", 00:19:11.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.900 "is_configured": false, 00:19:11.900 "data_offset": 0, 00:19:11.900 "data_size": 0 00:19:11.900 }, 00:19:11.900 { 00:19:11.900 "name": null, 00:19:11.900 "uuid": "52271cab-e78e-4cb7-9556-94e2b88236fc", 00:19:11.900 "is_configured": false, 00:19:11.900 "data_offset": 0, 00:19:11.900 "data_size": 65536 00:19:11.900 }, 00:19:11.900 { 00:19:11.900 "name": "BaseBdev3", 00:19:11.900 "uuid": "efddd3a4-95aa-4633-bc31-b32617e16263", 00:19:11.900 "is_configured": true, 00:19:11.900 "data_offset": 0, 00:19:11.900 "data_size": 65536 00:19:11.900 } 00:19:11.900 ] 00:19:11.900 }' 00:19:11.900 00:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:11.900 00:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.468 00:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.468 00:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:12.727 00:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:12.727 00:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:12.986 [2024-07-25 00:45:35.559047] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:12.986 BaseBdev1 00:19:12.986 00:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:12.986 00:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:12.986 00:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:12.986 00:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:12.986 00:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:12.986 00:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:12.986 00:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:13.245 00:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:13.504 [ 00:19:13.504 { 00:19:13.504 "name": "BaseBdev1", 00:19:13.504 "aliases": [ 00:19:13.504 "d6b943f4-90fd-49d9-950d-c4225b1477ee" 00:19:13.504 ], 00:19:13.504 "product_name": "Malloc disk", 00:19:13.504 "block_size": 512, 00:19:13.504 "num_blocks": 65536, 00:19:13.504 "uuid": "d6b943f4-90fd-49d9-950d-c4225b1477ee", 00:19:13.504 "assigned_rate_limits": { 00:19:13.504 "rw_ios_per_sec": 0, 00:19:13.504 "rw_mbytes_per_sec": 0, 00:19:13.504 
"r_mbytes_per_sec": 0, 00:19:13.504 "w_mbytes_per_sec": 0 00:19:13.504 }, 00:19:13.504 "claimed": true, 00:19:13.504 "claim_type": "exclusive_write", 00:19:13.504 "zoned": false, 00:19:13.504 "supported_io_types": { 00:19:13.504 "read": true, 00:19:13.504 "write": true, 00:19:13.504 "unmap": true, 00:19:13.504 "flush": true, 00:19:13.504 "reset": true, 00:19:13.504 "nvme_admin": false, 00:19:13.504 "nvme_io": false, 00:19:13.504 "nvme_io_md": false, 00:19:13.504 "write_zeroes": true, 00:19:13.504 "zcopy": true, 00:19:13.504 "get_zone_info": false, 00:19:13.504 "zone_management": false, 00:19:13.504 "zone_append": false, 00:19:13.504 "compare": false, 00:19:13.504 "compare_and_write": false, 00:19:13.504 "abort": true, 00:19:13.504 "seek_hole": false, 00:19:13.504 "seek_data": false, 00:19:13.504 "copy": true, 00:19:13.504 "nvme_iov_md": false 00:19:13.504 }, 00:19:13.504 "memory_domains": [ 00:19:13.504 { 00:19:13.504 "dma_device_id": "system", 00:19:13.504 "dma_device_type": 1 00:19:13.504 }, 00:19:13.504 { 00:19:13.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.504 "dma_device_type": 2 00:19:13.504 } 00:19:13.504 ], 00:19:13.504 "driver_specific": {} 00:19:13.504 } 00:19:13.504 ] 00:19:13.504 00:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:13.504 00:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:13.504 00:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:13.504 00:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:13.504 00:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:13.504 00:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:13.504 00:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:13.504 00:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:13.504 00:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:13.504 00:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:13.504 00:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:13.504 00:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.504 00:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.763 00:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:13.763 "name": "Existed_Raid", 00:19:13.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.763 "strip_size_kb": 64, 00:19:13.763 "state": "configuring", 00:19:13.763 "raid_level": "raid0", 00:19:13.763 "superblock": false, 00:19:13.763 "num_base_bdevs": 3, 00:19:13.763 "num_base_bdevs_discovered": 2, 00:19:13.763 "num_base_bdevs_operational": 3, 00:19:13.763 "base_bdevs_list": [ 00:19:13.763 { 00:19:13.763 "name": "BaseBdev1", 00:19:13.763 "uuid": "d6b943f4-90fd-49d9-950d-c4225b1477ee", 00:19:13.763 "is_configured": true, 00:19:13.763 "data_offset": 0, 00:19:13.763 "data_size": 65536 00:19:13.763 }, 00:19:13.763 { 00:19:13.763 "name": 
null, 00:19:13.763 "uuid": "52271cab-e78e-4cb7-9556-94e2b88236fc", 00:19:13.763 "is_configured": false, 00:19:13.763 "data_offset": 0, 00:19:13.763 "data_size": 65536 00:19:13.763 }, 00:19:13.763 { 00:19:13.763 "name": "BaseBdev3", 00:19:13.763 "uuid": "efddd3a4-95aa-4633-bc31-b32617e16263", 00:19:13.763 "is_configured": true, 00:19:13.763 "data_offset": 0, 00:19:13.763 "data_size": 65536 00:19:13.763 } 00:19:13.763 ] 00:19:13.763 }' 00:19:13.763 00:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:13.763 00:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.330 00:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:14.330 00:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.330 00:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:14.330 00:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:14.589 [2024-07-25 00:45:37.159373] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:14.589 00:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:14.589 00:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:14.589 00:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:14.589 00:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:14.589 00:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:14.589 00:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:14.589 00:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:14.589 00:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:14.589 00:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:14.589 00:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:14.589 00:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.589 00:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.849 00:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:14.849 "name": "Existed_Raid", 00:19:14.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.849 "strip_size_kb": 64, 00:19:14.849 "state": "configuring", 00:19:14.849 "raid_level": "raid0", 00:19:14.849 "superblock": false, 00:19:14.849 "num_base_bdevs": 3, 00:19:14.849 "num_base_bdevs_discovered": 1, 00:19:14.849 "num_base_bdevs_operational": 3, 00:19:14.849 "base_bdevs_list": [ 00:19:14.849 { 00:19:14.849 "name": "BaseBdev1", 00:19:14.849 "uuid": "d6b943f4-90fd-49d9-950d-c4225b1477ee", 00:19:14.849 "is_configured": true, 00:19:14.849 "data_offset": 0, 00:19:14.849 "data_size": 65536 
00:19:14.849 }, 00:19:14.849 { 00:19:14.849 "name": null, 00:19:14.849 "uuid": "52271cab-e78e-4cb7-9556-94e2b88236fc", 00:19:14.849 "is_configured": false, 00:19:14.849 "data_offset": 0, 00:19:14.849 "data_size": 65536 00:19:14.849 }, 00:19:14.849 { 00:19:14.849 "name": null, 00:19:14.849 "uuid": "efddd3a4-95aa-4633-bc31-b32617e16263", 00:19:14.849 "is_configured": false, 00:19:14.849 "data_offset": 0, 00:19:14.849 "data_size": 65536 00:19:14.849 } 00:19:14.849 ] 00:19:14.849 }' 00:19:14.849 00:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:14.849 00:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.417 00:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:15.417 00:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.677 00:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:15.677 00:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:15.936 [2024-07-25 00:45:38.423642] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:15.936 00:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:15.936 00:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:15.936 00:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:15.936 00:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:15.936 00:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:15.936 00:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:15.936 00:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:15.936 00:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:15.936 00:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:15.936 00:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:15.936 00:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.936 00:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:16.196 00:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:16.196 "name": "Existed_Raid", 00:19:16.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.196 "strip_size_kb": 64, 00:19:16.196 "state": "configuring", 00:19:16.196 "raid_level": "raid0", 00:19:16.196 "superblock": false, 00:19:16.196 "num_base_bdevs": 3, 00:19:16.196 "num_base_bdevs_discovered": 2, 00:19:16.196 "num_base_bdevs_operational": 3, 00:19:16.196 "base_bdevs_list": [ 00:19:16.196 { 00:19:16.196 "name": "BaseBdev1", 00:19:16.196 "uuid": "d6b943f4-90fd-49d9-950d-c4225b1477ee", 00:19:16.196 
"is_configured": true, 00:19:16.196 "data_offset": 0, 00:19:16.196 "data_size": 65536 00:19:16.196 }, 00:19:16.196 { 00:19:16.196 "name": null, 00:19:16.196 "uuid": "52271cab-e78e-4cb7-9556-94e2b88236fc", 00:19:16.196 "is_configured": false, 00:19:16.196 "data_offset": 0, 00:19:16.196 "data_size": 65536 00:19:16.196 }, 00:19:16.196 { 00:19:16.196 "name": "BaseBdev3", 00:19:16.196 "uuid": "efddd3a4-95aa-4633-bc31-b32617e16263", 00:19:16.196 "is_configured": true, 00:19:16.196 "data_offset": 0, 00:19:16.196 "data_size": 65536 00:19:16.196 } 00:19:16.196 ] 00:19:16.196 }' 00:19:16.196 00:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:16.196 00:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.763 00:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:16.763 00:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.022 00:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:17.022 00:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:17.282 [2024-07-25 00:45:39.739933] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:17.282 00:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:17.282 00:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:17.282 00:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:17.282 00:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:17.282 00:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:17.282 00:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:17.282 00:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:17.282 00:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:17.282 00:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:17.282 00:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:17.282 00:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.282 00:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.541 00:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:17.541 "name": "Existed_Raid", 00:19:17.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.541 "strip_size_kb": 64, 00:19:17.541 "state": "configuring", 00:19:17.541 "raid_level": "raid0", 00:19:17.541 "superblock": false, 00:19:17.541 "num_base_bdevs": 3, 00:19:17.541 "num_base_bdevs_discovered": 1, 00:19:17.541 "num_base_bdevs_operational": 3, 00:19:17.541 "base_bdevs_list": [ 00:19:17.541 { 00:19:17.541 "name": null, 00:19:17.541 "uuid": 
"d6b943f4-90fd-49d9-950d-c4225b1477ee", 00:19:17.541 "is_configured": false, 00:19:17.541 "data_offset": 0, 00:19:17.541 "data_size": 65536 00:19:17.541 }, 00:19:17.541 { 00:19:17.541 "name": null, 00:19:17.541 "uuid": "52271cab-e78e-4cb7-9556-94e2b88236fc", 00:19:17.541 "is_configured": false, 00:19:17.541 "data_offset": 0, 00:19:17.541 "data_size": 65536 00:19:17.541 }, 00:19:17.541 { 00:19:17.541 "name": "BaseBdev3", 00:19:17.541 "uuid": "efddd3a4-95aa-4633-bc31-b32617e16263", 00:19:17.541 "is_configured": true, 00:19:17.541 "data_offset": 0, 00:19:17.541 "data_size": 65536 00:19:17.541 } 00:19:17.541 ] 00:19:17.541 }' 00:19:17.541 00:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:17.541 00:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.110 00:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:18.110 00:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.368 00:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:18.368 00:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:18.627 [2024-07-25 00:45:41.172546] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:18.627 00:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:18.627 00:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:18.627 00:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:18.627 00:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:18.627 00:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:18.627 00:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:18.628 00:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:18.628 00:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:18.628 00:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:18.628 00:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:18.628 00:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.628 00:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.886 00:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:18.886 "name": "Existed_Raid", 00:19:18.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.886 "strip_size_kb": 64, 00:19:18.886 "state": "configuring", 00:19:18.886 "raid_level": "raid0", 00:19:18.886 "superblock": false, 00:19:18.886 "num_base_bdevs": 3, 00:19:18.886 "num_base_bdevs_discovered": 2, 00:19:18.886 "num_base_bdevs_operational": 3, 00:19:18.886 
"base_bdevs_list": [ 00:19:18.886 { 00:19:18.886 "name": null, 00:19:18.886 "uuid": "d6b943f4-90fd-49d9-950d-c4225b1477ee", 00:19:18.886 "is_configured": false, 00:19:18.886 "data_offset": 0, 00:19:18.886 "data_size": 65536 00:19:18.886 }, 00:19:18.886 { 00:19:18.886 "name": "BaseBdev2", 00:19:18.886 "uuid": "52271cab-e78e-4cb7-9556-94e2b88236fc", 00:19:18.886 "is_configured": true, 00:19:18.886 "data_offset": 0, 00:19:18.886 "data_size": 65536 00:19:18.886 }, 00:19:18.886 { 00:19:18.886 "name": "BaseBdev3", 00:19:18.886 "uuid": "efddd3a4-95aa-4633-bc31-b32617e16263", 00:19:18.886 "is_configured": true, 00:19:18.886 "data_offset": 0, 00:19:18.886 "data_size": 65536 00:19:18.886 } 00:19:18.886 ] 00:19:18.886 }' 00:19:18.887 00:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:18.887 00:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.454 00:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:19.454 00:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.454 00:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:19.454 00:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.454 00:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:19.713 00:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u d6b943f4-90fd-49d9-950d-c4225b1477ee 00:19:19.972 [2024-07-25 00:45:42.500285] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:19.972 [2024-07-25 00:45:42.500483] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:19:19.972 [2024-07-25 00:45:42.500597] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:19.972 [2024-07-25 00:45:42.500760] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:19.972 [2024-07-25 00:45:42.501084] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:19:19.972 [2024-07-25 00:45:42.501206] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008a80 00:19:19.972 [2024-07-25 00:45:42.501496] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:19.972 NewBaseBdev 00:19:19.972 00:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:19.972 00:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:19:19.972 00:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:19.972 00:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:19.972 00:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:19.972 00:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:19.972 00:45:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:20.231 00:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:20.490 [ 00:19:20.490 { 00:19:20.490 "name": "NewBaseBdev", 00:19:20.490 "aliases": [ 00:19:20.490 "d6b943f4-90fd-49d9-950d-c4225b1477ee" 00:19:20.490 ], 00:19:20.490 "product_name": "Malloc disk", 00:19:20.490 "block_size": 512, 00:19:20.490 "num_blocks": 65536, 00:19:20.490 "uuid": "d6b943f4-90fd-49d9-950d-c4225b1477ee", 00:19:20.490 "assigned_rate_limits": { 00:19:20.490 "rw_ios_per_sec": 0, 00:19:20.490 "rw_mbytes_per_sec": 0, 00:19:20.490 "r_mbytes_per_sec": 0, 00:19:20.490 "w_mbytes_per_sec": 0 00:19:20.490 }, 00:19:20.490 "claimed": true, 00:19:20.490 "claim_type": "exclusive_write", 00:19:20.490 "zoned": false, 00:19:20.490 "supported_io_types": { 00:19:20.490 "read": true, 00:19:20.490 "write": true, 00:19:20.490 "unmap": true, 00:19:20.490 "flush": true, 00:19:20.490 "reset": true, 00:19:20.490 "nvme_admin": false, 00:19:20.490 "nvme_io": false, 00:19:20.490 "nvme_io_md": false, 00:19:20.490 "write_zeroes": true, 00:19:20.490 "zcopy": true, 00:19:20.490 "get_zone_info": false, 00:19:20.490 "zone_management": false, 00:19:20.490 "zone_append": false, 00:19:20.490 "compare": false, 00:19:20.490 "compare_and_write": false, 00:19:20.490 "abort": true, 00:19:20.490 "seek_hole": false, 00:19:20.490 "seek_data": false, 00:19:20.490 "copy": true, 00:19:20.490 "nvme_iov_md": false 00:19:20.490 }, 00:19:20.490 "memory_domains": [ 00:19:20.490 { 00:19:20.490 "dma_device_id": "system", 00:19:20.490 "dma_device_type": 1 00:19:20.490 }, 00:19:20.490 { 00:19:20.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.490 "dma_device_type": 2 00:19:20.490 } 00:19:20.490 ], 00:19:20.490 "driver_specific": {} 00:19:20.490 } 00:19:20.490 ] 00:19:20.490 00:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:20.490 00:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:19:20.490 00:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:20.490 00:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:20.490 00:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:20.490 00:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:20.490 00:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:20.490 00:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:20.490 00:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:20.490 00:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:20.490 00:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:20.490 00:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.490 00:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:19:20.749 00:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:20.749 "name": "Existed_Raid", 00:19:20.749 "uuid": "308b670f-0ae7-42ac-9127-8d7478b243cd", 00:19:20.749 "strip_size_kb": 64, 00:19:20.749 "state": "online", 00:19:20.749 "raid_level": "raid0", 00:19:20.749 "superblock": false, 00:19:20.749 "num_base_bdevs": 3, 00:19:20.749 "num_base_bdevs_discovered": 3, 00:19:20.750 "num_base_bdevs_operational": 3, 00:19:20.750 "base_bdevs_list": [ 00:19:20.750 { 00:19:20.750 "name": "NewBaseBdev", 00:19:20.750 "uuid": "d6b943f4-90fd-49d9-950d-c4225b1477ee", 00:19:20.750 "is_configured": true, 00:19:20.750 "data_offset": 0, 00:19:20.750 "data_size": 65536 00:19:20.750 }, 00:19:20.750 { 00:19:20.750 "name": "BaseBdev2", 00:19:20.750 "uuid": "52271cab-e78e-4cb7-9556-94e2b88236fc", 00:19:20.750 "is_configured": true, 00:19:20.750 "data_offset": 0, 00:19:20.750 "data_size": 65536 00:19:20.750 }, 00:19:20.750 { 00:19:20.750 "name": "BaseBdev3", 00:19:20.750 "uuid": "efddd3a4-95aa-4633-bc31-b32617e16263", 00:19:20.750 "is_configured": true, 00:19:20.750 "data_offset": 0, 00:19:20.750 "data_size": 65536 00:19:20.750 } 00:19:20.750 ] 00:19:20.750 }' 00:19:20.750 00:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:20.750 00:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.316 00:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:21.316 00:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:21.316 00:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:21.317 00:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:21.317 00:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:21.317 00:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:21.317 00:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:21.317 00:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:21.317 [2024-07-25 00:45:43.856828] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:21.317 00:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:21.317 "name": "Existed_Raid", 00:19:21.317 "aliases": [ 00:19:21.317 "308b670f-0ae7-42ac-9127-8d7478b243cd" 00:19:21.317 ], 00:19:21.317 "product_name": "Raid Volume", 00:19:21.317 "block_size": 512, 00:19:21.317 "num_blocks": 196608, 00:19:21.317 "uuid": "308b670f-0ae7-42ac-9127-8d7478b243cd", 00:19:21.317 "assigned_rate_limits": { 00:19:21.317 "rw_ios_per_sec": 0, 00:19:21.317 "rw_mbytes_per_sec": 0, 00:19:21.317 "r_mbytes_per_sec": 0, 00:19:21.317 "w_mbytes_per_sec": 0 00:19:21.317 }, 00:19:21.317 "claimed": false, 00:19:21.317 "zoned": false, 00:19:21.317 "supported_io_types": { 00:19:21.317 "read": true, 00:19:21.317 "write": true, 00:19:21.317 "unmap": true, 00:19:21.317 "flush": true, 00:19:21.317 "reset": true, 00:19:21.317 "nvme_admin": false, 00:19:21.317 "nvme_io": false, 00:19:21.317 "nvme_io_md": false, 00:19:21.317 "write_zeroes": true, 00:19:21.317 "zcopy": false, 00:19:21.317 "get_zone_info": false, 
00:19:21.317 "zone_management": false, 00:19:21.317 "zone_append": false, 00:19:21.317 "compare": false, 00:19:21.317 "compare_and_write": false, 00:19:21.317 "abort": false, 00:19:21.317 "seek_hole": false, 00:19:21.317 "seek_data": false, 00:19:21.317 "copy": false, 00:19:21.317 "nvme_iov_md": false 00:19:21.317 }, 00:19:21.317 "memory_domains": [ 00:19:21.317 { 00:19:21.317 "dma_device_id": "system", 00:19:21.317 "dma_device_type": 1 00:19:21.317 }, 00:19:21.317 { 00:19:21.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.317 "dma_device_type": 2 00:19:21.317 }, 00:19:21.317 { 00:19:21.317 "dma_device_id": "system", 00:19:21.317 "dma_device_type": 1 00:19:21.317 }, 00:19:21.317 { 00:19:21.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.317 "dma_device_type": 2 00:19:21.317 }, 00:19:21.317 { 00:19:21.317 "dma_device_id": "system", 00:19:21.317 "dma_device_type": 1 00:19:21.317 }, 00:19:21.317 { 00:19:21.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.317 "dma_device_type": 2 00:19:21.317 } 00:19:21.317 ], 00:19:21.317 "driver_specific": { 00:19:21.317 "raid": { 00:19:21.317 "uuid": "308b670f-0ae7-42ac-9127-8d7478b243cd", 00:19:21.317 "strip_size_kb": 64, 00:19:21.317 "state": "online", 00:19:21.317 "raid_level": "raid0", 00:19:21.317 "superblock": false, 00:19:21.317 "num_base_bdevs": 3, 00:19:21.317 "num_base_bdevs_discovered": 3, 00:19:21.317 "num_base_bdevs_operational": 3, 00:19:21.317 "base_bdevs_list": [ 00:19:21.317 { 00:19:21.317 "name": "NewBaseBdev", 00:19:21.317 "uuid": "d6b943f4-90fd-49d9-950d-c4225b1477ee", 00:19:21.317 "is_configured": true, 00:19:21.317 "data_offset": 0, 00:19:21.317 "data_size": 65536 00:19:21.317 }, 00:19:21.317 { 00:19:21.317 "name": "BaseBdev2", 00:19:21.317 "uuid": "52271cab-e78e-4cb7-9556-94e2b88236fc", 00:19:21.317 "is_configured": true, 00:19:21.317 "data_offset": 0, 00:19:21.317 "data_size": 65536 00:19:21.317 }, 00:19:21.317 { 00:19:21.317 "name": "BaseBdev3", 00:19:21.317 "uuid": "efddd3a4-95aa-4633-bc31-b32617e16263", 00:19:21.317 "is_configured": true, 00:19:21.317 "data_offset": 0, 00:19:21.317 "data_size": 65536 00:19:21.317 } 00:19:21.317 ] 00:19:21.317 } 00:19:21.317 } 00:19:21.317 }' 00:19:21.317 00:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:21.317 00:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:21.317 BaseBdev2 00:19:21.317 BaseBdev3' 00:19:21.317 00:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:21.317 00:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:21.317 00:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:21.576 00:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:21.576 "name": "NewBaseBdev", 00:19:21.576 "aliases": [ 00:19:21.576 "d6b943f4-90fd-49d9-950d-c4225b1477ee" 00:19:21.576 ], 00:19:21.576 "product_name": "Malloc disk", 00:19:21.576 "block_size": 512, 00:19:21.576 "num_blocks": 65536, 00:19:21.576 "uuid": "d6b943f4-90fd-49d9-950d-c4225b1477ee", 00:19:21.576 "assigned_rate_limits": { 00:19:21.576 "rw_ios_per_sec": 0, 00:19:21.576 "rw_mbytes_per_sec": 0, 00:19:21.576 "r_mbytes_per_sec": 0, 00:19:21.576 "w_mbytes_per_sec": 0 00:19:21.576 }, 00:19:21.576 "claimed": 
true, 00:19:21.576 "claim_type": "exclusive_write", 00:19:21.576 "zoned": false, 00:19:21.576 "supported_io_types": { 00:19:21.576 "read": true, 00:19:21.576 "write": true, 00:19:21.576 "unmap": true, 00:19:21.576 "flush": true, 00:19:21.576 "reset": true, 00:19:21.576 "nvme_admin": false, 00:19:21.576 "nvme_io": false, 00:19:21.576 "nvme_io_md": false, 00:19:21.576 "write_zeroes": true, 00:19:21.576 "zcopy": true, 00:19:21.576 "get_zone_info": false, 00:19:21.576 "zone_management": false, 00:19:21.576 "zone_append": false, 00:19:21.576 "compare": false, 00:19:21.576 "compare_and_write": false, 00:19:21.576 "abort": true, 00:19:21.576 "seek_hole": false, 00:19:21.576 "seek_data": false, 00:19:21.576 "copy": true, 00:19:21.576 "nvme_iov_md": false 00:19:21.576 }, 00:19:21.576 "memory_domains": [ 00:19:21.576 { 00:19:21.576 "dma_device_id": "system", 00:19:21.576 "dma_device_type": 1 00:19:21.576 }, 00:19:21.576 { 00:19:21.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.576 "dma_device_type": 2 00:19:21.576 } 00:19:21.576 ], 00:19:21.576 "driver_specific": {} 00:19:21.576 }' 00:19:21.576 00:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:21.576 00:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:21.576 00:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:21.576 00:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:21.835 00:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:21.835 00:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:21.835 00:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:21.835 00:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:21.835 00:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:21.835 00:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:21.835 00:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:21.835 00:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:21.835 00:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:21.835 00:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:21.836 00:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:22.405 00:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:22.405 "name": "BaseBdev2", 00:19:22.405 "aliases": [ 00:19:22.405 "52271cab-e78e-4cb7-9556-94e2b88236fc" 00:19:22.405 ], 00:19:22.405 "product_name": "Malloc disk", 00:19:22.405 "block_size": 512, 00:19:22.405 "num_blocks": 65536, 00:19:22.405 "uuid": "52271cab-e78e-4cb7-9556-94e2b88236fc", 00:19:22.405 "assigned_rate_limits": { 00:19:22.405 "rw_ios_per_sec": 0, 00:19:22.405 "rw_mbytes_per_sec": 0, 00:19:22.405 "r_mbytes_per_sec": 0, 00:19:22.405 "w_mbytes_per_sec": 0 00:19:22.405 }, 00:19:22.405 "claimed": true, 00:19:22.405 "claim_type": "exclusive_write", 00:19:22.405 "zoned": false, 00:19:22.405 "supported_io_types": { 00:19:22.405 "read": true, 00:19:22.405 "write": true, 00:19:22.405 "unmap": true, 
00:19:22.405 "flush": true, 00:19:22.405 "reset": true, 00:19:22.405 "nvme_admin": false, 00:19:22.405 "nvme_io": false, 00:19:22.405 "nvme_io_md": false, 00:19:22.405 "write_zeroes": true, 00:19:22.405 "zcopy": true, 00:19:22.405 "get_zone_info": false, 00:19:22.405 "zone_management": false, 00:19:22.405 "zone_append": false, 00:19:22.405 "compare": false, 00:19:22.405 "compare_and_write": false, 00:19:22.405 "abort": true, 00:19:22.405 "seek_hole": false, 00:19:22.405 "seek_data": false, 00:19:22.405 "copy": true, 00:19:22.405 "nvme_iov_md": false 00:19:22.405 }, 00:19:22.405 "memory_domains": [ 00:19:22.405 { 00:19:22.405 "dma_device_id": "system", 00:19:22.405 "dma_device_type": 1 00:19:22.405 }, 00:19:22.405 { 00:19:22.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.405 "dma_device_type": 2 00:19:22.405 } 00:19:22.405 ], 00:19:22.405 "driver_specific": {} 00:19:22.405 }' 00:19:22.405 00:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:22.405 00:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:22.405 00:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:22.405 00:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:22.405 00:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:22.405 00:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:22.405 00:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:22.405 00:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:22.405 00:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:22.405 00:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:22.405 00:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:22.664 00:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:22.664 00:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:22.664 00:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:22.664 00:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:22.924 00:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:22.924 "name": "BaseBdev3", 00:19:22.924 "aliases": [ 00:19:22.924 "efddd3a4-95aa-4633-bc31-b32617e16263" 00:19:22.924 ], 00:19:22.924 "product_name": "Malloc disk", 00:19:22.924 "block_size": 512, 00:19:22.924 "num_blocks": 65536, 00:19:22.924 "uuid": "efddd3a4-95aa-4633-bc31-b32617e16263", 00:19:22.924 "assigned_rate_limits": { 00:19:22.924 "rw_ios_per_sec": 0, 00:19:22.924 "rw_mbytes_per_sec": 0, 00:19:22.924 "r_mbytes_per_sec": 0, 00:19:22.924 "w_mbytes_per_sec": 0 00:19:22.924 }, 00:19:22.924 "claimed": true, 00:19:22.924 "claim_type": "exclusive_write", 00:19:22.924 "zoned": false, 00:19:22.924 "supported_io_types": { 00:19:22.924 "read": true, 00:19:22.924 "write": true, 00:19:22.924 "unmap": true, 00:19:22.924 "flush": true, 00:19:22.924 "reset": true, 00:19:22.924 "nvme_admin": false, 00:19:22.924 "nvme_io": false, 00:19:22.924 "nvme_io_md": false, 00:19:22.924 "write_zeroes": true, 
00:19:22.924 "zcopy": true, 00:19:22.924 "get_zone_info": false, 00:19:22.924 "zone_management": false, 00:19:22.924 "zone_append": false, 00:19:22.924 "compare": false, 00:19:22.924 "compare_and_write": false, 00:19:22.924 "abort": true, 00:19:22.924 "seek_hole": false, 00:19:22.924 "seek_data": false, 00:19:22.924 "copy": true, 00:19:22.924 "nvme_iov_md": false 00:19:22.924 }, 00:19:22.924 "memory_domains": [ 00:19:22.924 { 00:19:22.924 "dma_device_id": "system", 00:19:22.924 "dma_device_type": 1 00:19:22.924 }, 00:19:22.924 { 00:19:22.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.924 "dma_device_type": 2 00:19:22.924 } 00:19:22.924 ], 00:19:22.924 "driver_specific": {} 00:19:22.924 }' 00:19:22.924 00:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:22.924 00:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:22.924 00:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:22.924 00:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:22.924 00:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:23.183 00:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:23.184 00:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:23.184 00:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:23.184 00:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:23.184 00:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:23.184 00:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:23.184 00:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:23.184 00:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:23.443 [2024-07-25 00:45:46.020972] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:23.443 [2024-07-25 00:45:46.021168] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:23.443 [2024-07-25 00:45:46.021364] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:23.443 [2024-07-25 00:45:46.021532] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:23.443 [2024-07-25 00:45:46.021608] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name Existed_Raid, state offline 00:19:23.443 00:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 126277 00:19:23.443 00:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 126277 ']' 00:19:23.443 00:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 126277 00:19:23.443 00:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:19:23.443 00:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:23.443 00:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 126277 00:19:23.443 00:45:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:23.443 00:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:23.443 00:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 126277' 00:19:23.443 killing process with pid 126277 00:19:23.443 00:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 126277 00:19:23.443 [2024-07-25 00:45:46.071093] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:23.443 00:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 126277 00:19:24.012 [2024-07-25 00:45:46.371681] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:25.392 ************************************ 00:19:25.392 END TEST raid_state_function_test 00:19:25.392 ************************************ 00:19:25.392 00:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:19:25.392 00:19:25.392 real 0m28.057s 00:19:25.392 user 0m50.267s 00:19:25.392 sys 0m4.344s 00:19:25.392 00:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:25.392 00:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.392 00:45:47 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:19:25.392 00:45:47 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:25.392 00:45:47 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:25.392 00:45:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:25.392 ************************************ 00:19:25.392 START TEST raid_state_function_test_sb 00:19:25.392 ************************************ 00:19:25.392 00:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 true 00:19:25.392 00:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:19:25.392 00:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:19:25.392 00:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:19:25.392 00:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:25.392 00:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:25.392 00:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:25.392 00:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:19:25.392 00:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:25.392 00:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:25.392 00:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:19:25.392 00:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:25.392 00:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:25.392 00:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:19:25.392 00:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:25.392 00:45:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:25.392 00:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:25.392 00:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:25.392 00:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:25.392 00:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:25.392 00:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:25.392 00:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:25.392 00:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:19:25.392 00:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:19:25.393 00:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:19:25.393 00:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:19:25.393 00:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:19:25.393 00:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=127235 00:19:25.393 00:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:25.393 00:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 127235' 00:19:25.393 Process raid pid: 127235 00:19:25.393 00:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 127235 /var/tmp/spdk-raid.sock 00:19:25.393 00:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 127235 ']' 00:19:25.393 00:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:25.393 00:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:25.393 00:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:25.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:25.393 00:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:25.393 00:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.393 [2024-07-25 00:45:47.845183] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:19:25.393 [2024-07-25 00:45:47.845638] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.393 [2024-07-25 00:45:48.031846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.652 [2024-07-25 00:45:48.297906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.911 [2024-07-25 00:45:48.508798] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:26.171 00:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:26.171 00:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:19:26.171 00:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:26.431 [2024-07-25 00:45:48.963948] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:26.431 [2024-07-25 00:45:48.964286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:26.431 [2024-07-25 00:45:48.964388] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:26.431 [2024-07-25 00:45:48.964447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:26.431 [2024-07-25 00:45:48.964522] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:26.431 [2024-07-25 00:45:48.964565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:26.431 00:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:26.431 00:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:26.431 00:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:26.431 00:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:26.431 00:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:26.431 00:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:26.431 00:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:26.431 00:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:26.431 00:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:26.431 00:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:26.431 00:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.431 00:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:26.690 00:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:26.690 "name": "Existed_Raid", 00:19:26.690 "uuid": 
"647d93e9-6676-4aec-be0e-3aa8489c470c", 00:19:26.690 "strip_size_kb": 64, 00:19:26.690 "state": "configuring", 00:19:26.690 "raid_level": "raid0", 00:19:26.690 "superblock": true, 00:19:26.690 "num_base_bdevs": 3, 00:19:26.690 "num_base_bdevs_discovered": 0, 00:19:26.690 "num_base_bdevs_operational": 3, 00:19:26.690 "base_bdevs_list": [ 00:19:26.690 { 00:19:26.690 "name": "BaseBdev1", 00:19:26.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.690 "is_configured": false, 00:19:26.690 "data_offset": 0, 00:19:26.690 "data_size": 0 00:19:26.690 }, 00:19:26.690 { 00:19:26.690 "name": "BaseBdev2", 00:19:26.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.690 "is_configured": false, 00:19:26.690 "data_offset": 0, 00:19:26.690 "data_size": 0 00:19:26.690 }, 00:19:26.690 { 00:19:26.690 "name": "BaseBdev3", 00:19:26.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.690 "is_configured": false, 00:19:26.690 "data_offset": 0, 00:19:26.691 "data_size": 0 00:19:26.691 } 00:19:26.691 ] 00:19:26.691 }' 00:19:26.691 00:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:26.691 00:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.260 00:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:27.260 [2024-07-25 00:45:49.844006] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:27.260 [2024-07-25 00:45:49.844231] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:27.260 00:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:27.519 [2024-07-25 00:45:50.100058] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:27.519 [2024-07-25 00:45:50.100358] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:27.519 [2024-07-25 00:45:50.100436] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:27.519 [2024-07-25 00:45:50.100485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:27.519 [2024-07-25 00:45:50.100511] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:27.519 [2024-07-25 00:45:50.100555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:27.519 00:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:27.778 [2024-07-25 00:45:50.373214] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:27.778 BaseBdev1 00:19:27.778 00:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:27.778 00:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:27.778 00:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:27.778 00:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 
00:19:27.778 00:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:27.778 00:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:27.778 00:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:28.038 00:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:28.297 [ 00:19:28.297 { 00:19:28.297 "name": "BaseBdev1", 00:19:28.297 "aliases": [ 00:19:28.297 "2a6d03ab-fe9d-457e-a37f-b361f19368ff" 00:19:28.297 ], 00:19:28.297 "product_name": "Malloc disk", 00:19:28.297 "block_size": 512, 00:19:28.297 "num_blocks": 65536, 00:19:28.297 "uuid": "2a6d03ab-fe9d-457e-a37f-b361f19368ff", 00:19:28.297 "assigned_rate_limits": { 00:19:28.297 "rw_ios_per_sec": 0, 00:19:28.297 "rw_mbytes_per_sec": 0, 00:19:28.297 "r_mbytes_per_sec": 0, 00:19:28.297 "w_mbytes_per_sec": 0 00:19:28.297 }, 00:19:28.297 "claimed": true, 00:19:28.297 "claim_type": "exclusive_write", 00:19:28.297 "zoned": false, 00:19:28.297 "supported_io_types": { 00:19:28.297 "read": true, 00:19:28.297 "write": true, 00:19:28.297 "unmap": true, 00:19:28.297 "flush": true, 00:19:28.297 "reset": true, 00:19:28.297 "nvme_admin": false, 00:19:28.297 "nvme_io": false, 00:19:28.297 "nvme_io_md": false, 00:19:28.297 "write_zeroes": true, 00:19:28.297 "zcopy": true, 00:19:28.297 "get_zone_info": false, 00:19:28.297 "zone_management": false, 00:19:28.297 "zone_append": false, 00:19:28.297 "compare": false, 00:19:28.297 "compare_and_write": false, 00:19:28.297 "abort": true, 00:19:28.297 "seek_hole": false, 00:19:28.297 "seek_data": false, 00:19:28.297 "copy": true, 00:19:28.297 "nvme_iov_md": false 00:19:28.297 }, 00:19:28.297 "memory_domains": [ 00:19:28.297 { 00:19:28.297 "dma_device_id": "system", 00:19:28.297 "dma_device_type": 1 00:19:28.297 }, 00:19:28.297 { 00:19:28.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.297 "dma_device_type": 2 00:19:28.297 } 00:19:28.297 ], 00:19:28.297 "driver_specific": {} 00:19:28.297 } 00:19:28.297 ] 00:19:28.297 00:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:28.297 00:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:28.297 00:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:28.297 00:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:28.297 00:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:28.297 00:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:28.297 00:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:28.297 00:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:28.297 00:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:28.297 00:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:28.297 00:45:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:19:28.298 00:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.298 00:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.557 00:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:28.557 "name": "Existed_Raid", 00:19:28.557 "uuid": "0c782cbd-6247-40ca-8881-3f4dcc3c2942", 00:19:28.557 "strip_size_kb": 64, 00:19:28.557 "state": "configuring", 00:19:28.557 "raid_level": "raid0", 00:19:28.557 "superblock": true, 00:19:28.557 "num_base_bdevs": 3, 00:19:28.557 "num_base_bdevs_discovered": 1, 00:19:28.557 "num_base_bdevs_operational": 3, 00:19:28.557 "base_bdevs_list": [ 00:19:28.557 { 00:19:28.557 "name": "BaseBdev1", 00:19:28.557 "uuid": "2a6d03ab-fe9d-457e-a37f-b361f19368ff", 00:19:28.557 "is_configured": true, 00:19:28.557 "data_offset": 2048, 00:19:28.557 "data_size": 63488 00:19:28.557 }, 00:19:28.557 { 00:19:28.557 "name": "BaseBdev2", 00:19:28.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.557 "is_configured": false, 00:19:28.557 "data_offset": 0, 00:19:28.557 "data_size": 0 00:19:28.557 }, 00:19:28.557 { 00:19:28.557 "name": "BaseBdev3", 00:19:28.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.557 "is_configured": false, 00:19:28.557 "data_offset": 0, 00:19:28.557 "data_size": 0 00:19:28.557 } 00:19:28.557 ] 00:19:28.557 }' 00:19:28.557 00:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:28.557 00:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.125 00:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:29.125 [2024-07-25 00:45:51.689479] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:29.125 [2024-07-25 00:45:51.689725] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:19:29.125 00:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:29.385 [2024-07-25 00:45:51.857539] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:29.385 [2024-07-25 00:45:51.859604] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:29.385 [2024-07-25 00:45:51.859790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:29.385 [2024-07-25 00:45:51.859890] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:29.385 [2024-07-25 00:45:51.859965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:29.385 00:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:29.385 00:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:29.385 00:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:29.385 00:45:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:29.385 00:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:29.385 00:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:29.385 00:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:29.385 00:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:29.385 00:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:29.385 00:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:29.385 00:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:29.385 00:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:29.385 00:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.385 00:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.644 00:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:29.644 "name": "Existed_Raid", 00:19:29.644 "uuid": "f39b8d28-2e1f-4de7-942d-608b6252f0b9", 00:19:29.644 "strip_size_kb": 64, 00:19:29.644 "state": "configuring", 00:19:29.644 "raid_level": "raid0", 00:19:29.644 "superblock": true, 00:19:29.644 "num_base_bdevs": 3, 00:19:29.644 "num_base_bdevs_discovered": 1, 00:19:29.644 "num_base_bdevs_operational": 3, 00:19:29.644 "base_bdevs_list": [ 00:19:29.644 { 00:19:29.644 "name": "BaseBdev1", 00:19:29.644 "uuid": "2a6d03ab-fe9d-457e-a37f-b361f19368ff", 00:19:29.644 "is_configured": true, 00:19:29.644 "data_offset": 2048, 00:19:29.644 "data_size": 63488 00:19:29.644 }, 00:19:29.644 { 00:19:29.644 "name": "BaseBdev2", 00:19:29.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.644 "is_configured": false, 00:19:29.644 "data_offset": 0, 00:19:29.644 "data_size": 0 00:19:29.644 }, 00:19:29.644 { 00:19:29.644 "name": "BaseBdev3", 00:19:29.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.644 "is_configured": false, 00:19:29.644 "data_offset": 0, 00:19:29.644 "data_size": 0 00:19:29.644 } 00:19:29.644 ] 00:19:29.644 }' 00:19:29.645 00:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:29.645 00:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.213 00:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:30.213 [2024-07-25 00:45:52.844418] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:30.213 BaseBdev2 00:19:30.213 00:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:30.213 00:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:30.213 00:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:30.213 00:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local i 00:19:30.213 00:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:30.213 00:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:30.213 00:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:30.472 00:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:30.732 [ 00:19:30.732 { 00:19:30.732 "name": "BaseBdev2", 00:19:30.732 "aliases": [ 00:19:30.732 "d1add888-5bd9-4b16-8b0c-d6398a559dcd" 00:19:30.732 ], 00:19:30.732 "product_name": "Malloc disk", 00:19:30.732 "block_size": 512, 00:19:30.732 "num_blocks": 65536, 00:19:30.732 "uuid": "d1add888-5bd9-4b16-8b0c-d6398a559dcd", 00:19:30.732 "assigned_rate_limits": { 00:19:30.732 "rw_ios_per_sec": 0, 00:19:30.732 "rw_mbytes_per_sec": 0, 00:19:30.732 "r_mbytes_per_sec": 0, 00:19:30.732 "w_mbytes_per_sec": 0 00:19:30.732 }, 00:19:30.732 "claimed": true, 00:19:30.732 "claim_type": "exclusive_write", 00:19:30.732 "zoned": false, 00:19:30.732 "supported_io_types": { 00:19:30.732 "read": true, 00:19:30.732 "write": true, 00:19:30.732 "unmap": true, 00:19:30.732 "flush": true, 00:19:30.732 "reset": true, 00:19:30.732 "nvme_admin": false, 00:19:30.732 "nvme_io": false, 00:19:30.732 "nvme_io_md": false, 00:19:30.732 "write_zeroes": true, 00:19:30.732 "zcopy": true, 00:19:30.732 "get_zone_info": false, 00:19:30.732 "zone_management": false, 00:19:30.732 "zone_append": false, 00:19:30.732 "compare": false, 00:19:30.732 "compare_and_write": false, 00:19:30.732 "abort": true, 00:19:30.732 "seek_hole": false, 00:19:30.732 "seek_data": false, 00:19:30.732 "copy": true, 00:19:30.732 "nvme_iov_md": false 00:19:30.732 }, 00:19:30.732 "memory_domains": [ 00:19:30.732 { 00:19:30.732 "dma_device_id": "system", 00:19:30.732 "dma_device_type": 1 00:19:30.732 }, 00:19:30.732 { 00:19:30.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.732 "dma_device_type": 2 00:19:30.732 } 00:19:30.732 ], 00:19:30.732 "driver_specific": {} 00:19:30.732 } 00:19:30.732 ] 00:19:30.732 00:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:30.732 00:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:30.732 00:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:30.732 00:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:30.732 00:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:30.732 00:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:30.732 00:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:30.732 00:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:30.732 00:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:30.732 00:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:30.732 00:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 
-- # local num_base_bdevs 00:19:30.732 00:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:30.732 00:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:30.732 00:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.732 00:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:30.991 00:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:30.991 "name": "Existed_Raid", 00:19:30.991 "uuid": "f39b8d28-2e1f-4de7-942d-608b6252f0b9", 00:19:30.991 "strip_size_kb": 64, 00:19:30.991 "state": "configuring", 00:19:30.991 "raid_level": "raid0", 00:19:30.991 "superblock": true, 00:19:30.991 "num_base_bdevs": 3, 00:19:30.991 "num_base_bdevs_discovered": 2, 00:19:30.991 "num_base_bdevs_operational": 3, 00:19:30.991 "base_bdevs_list": [ 00:19:30.991 { 00:19:30.991 "name": "BaseBdev1", 00:19:30.991 "uuid": "2a6d03ab-fe9d-457e-a37f-b361f19368ff", 00:19:30.991 "is_configured": true, 00:19:30.991 "data_offset": 2048, 00:19:30.991 "data_size": 63488 00:19:30.991 }, 00:19:30.991 { 00:19:30.991 "name": "BaseBdev2", 00:19:30.991 "uuid": "d1add888-5bd9-4b16-8b0c-d6398a559dcd", 00:19:30.991 "is_configured": true, 00:19:30.991 "data_offset": 2048, 00:19:30.991 "data_size": 63488 00:19:30.991 }, 00:19:30.991 { 00:19:30.991 "name": "BaseBdev3", 00:19:30.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.991 "is_configured": false, 00:19:30.991 "data_offset": 0, 00:19:30.991 "data_size": 0 00:19:30.991 } 00:19:30.991 ] 00:19:30.991 }' 00:19:30.991 00:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:30.991 00:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.558 00:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:31.817 [2024-07-25 00:45:54.391532] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:31.817 [2024-07-25 00:45:54.391917] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:19:31.817 [2024-07-25 00:45:54.392035] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:31.817 [2024-07-25 00:45:54.392178] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:19:31.817 [2024-07-25 00:45:54.392508] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:19:31.817 [2024-07-25 00:45:54.392619] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:19:31.817 [2024-07-25 00:45:54.392845] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:31.817 BaseBdev3 00:19:31.817 00:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:31.817 00:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:31.817 00:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:31.817 00:45:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local i 00:19:31.817 00:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:31.817 00:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:31.817 00:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:32.075 00:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:32.334 [ 00:19:32.334 { 00:19:32.334 "name": "BaseBdev3", 00:19:32.334 "aliases": [ 00:19:32.334 "593999dd-f993-4830-8d01-a3b2a1fbf789" 00:19:32.334 ], 00:19:32.334 "product_name": "Malloc disk", 00:19:32.334 "block_size": 512, 00:19:32.334 "num_blocks": 65536, 00:19:32.334 "uuid": "593999dd-f993-4830-8d01-a3b2a1fbf789", 00:19:32.334 "assigned_rate_limits": { 00:19:32.334 "rw_ios_per_sec": 0, 00:19:32.334 "rw_mbytes_per_sec": 0, 00:19:32.334 "r_mbytes_per_sec": 0, 00:19:32.334 "w_mbytes_per_sec": 0 00:19:32.334 }, 00:19:32.334 "claimed": true, 00:19:32.334 "claim_type": "exclusive_write", 00:19:32.334 "zoned": false, 00:19:32.334 "supported_io_types": { 00:19:32.334 "read": true, 00:19:32.334 "write": true, 00:19:32.334 "unmap": true, 00:19:32.334 "flush": true, 00:19:32.334 "reset": true, 00:19:32.334 "nvme_admin": false, 00:19:32.334 "nvme_io": false, 00:19:32.334 "nvme_io_md": false, 00:19:32.334 "write_zeroes": true, 00:19:32.334 "zcopy": true, 00:19:32.334 "get_zone_info": false, 00:19:32.334 "zone_management": false, 00:19:32.334 "zone_append": false, 00:19:32.334 "compare": false, 00:19:32.334 "compare_and_write": false, 00:19:32.334 "abort": true, 00:19:32.334 "seek_hole": false, 00:19:32.334 "seek_data": false, 00:19:32.334 "copy": true, 00:19:32.334 "nvme_iov_md": false 00:19:32.334 }, 00:19:32.334 "memory_domains": [ 00:19:32.334 { 00:19:32.334 "dma_device_id": "system", 00:19:32.334 "dma_device_type": 1 00:19:32.334 }, 00:19:32.334 { 00:19:32.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.334 "dma_device_type": 2 00:19:32.334 } 00:19:32.334 ], 00:19:32.334 "driver_specific": {} 00:19:32.334 } 00:19:32.334 ] 00:19:32.334 00:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:32.334 00:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:32.334 00:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:32.334 00:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:19:32.334 00:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:32.334 00:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:32.335 00:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:32.335 00:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:32.335 00:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:32.335 00:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:32.335 00:45:54 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:32.335 00:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:32.335 00:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:32.335 00:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.335 00:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.594 00:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:32.594 "name": "Existed_Raid", 00:19:32.594 "uuid": "f39b8d28-2e1f-4de7-942d-608b6252f0b9", 00:19:32.594 "strip_size_kb": 64, 00:19:32.594 "state": "online", 00:19:32.594 "raid_level": "raid0", 00:19:32.594 "superblock": true, 00:19:32.594 "num_base_bdevs": 3, 00:19:32.594 "num_base_bdevs_discovered": 3, 00:19:32.594 "num_base_bdevs_operational": 3, 00:19:32.594 "base_bdevs_list": [ 00:19:32.594 { 00:19:32.594 "name": "BaseBdev1", 00:19:32.594 "uuid": "2a6d03ab-fe9d-457e-a37f-b361f19368ff", 00:19:32.594 "is_configured": true, 00:19:32.594 "data_offset": 2048, 00:19:32.594 "data_size": 63488 00:19:32.594 }, 00:19:32.594 { 00:19:32.594 "name": "BaseBdev2", 00:19:32.594 "uuid": "d1add888-5bd9-4b16-8b0c-d6398a559dcd", 00:19:32.594 "is_configured": true, 00:19:32.594 "data_offset": 2048, 00:19:32.594 "data_size": 63488 00:19:32.594 }, 00:19:32.594 { 00:19:32.594 "name": "BaseBdev3", 00:19:32.594 "uuid": "593999dd-f993-4830-8d01-a3b2a1fbf789", 00:19:32.594 "is_configured": true, 00:19:32.594 "data_offset": 2048, 00:19:32.594 "data_size": 63488 00:19:32.594 } 00:19:32.594 ] 00:19:32.594 }' 00:19:32.594 00:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:32.594 00:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.162 00:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:33.162 00:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:33.162 00:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:33.162 00:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:33.162 00:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:33.162 00:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:33.162 00:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:33.162 00:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:33.162 [2024-07-25 00:45:55.752093] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:33.162 00:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:33.162 "name": "Existed_Raid", 00:19:33.162 "aliases": [ 00:19:33.162 "f39b8d28-2e1f-4de7-942d-608b6252f0b9" 00:19:33.162 ], 00:19:33.162 "product_name": "Raid Volume", 00:19:33.162 "block_size": 512, 00:19:33.162 "num_blocks": 190464, 00:19:33.162 "uuid": "f39b8d28-2e1f-4de7-942d-608b6252f0b9", 00:19:33.162 
"assigned_rate_limits": { 00:19:33.162 "rw_ios_per_sec": 0, 00:19:33.162 "rw_mbytes_per_sec": 0, 00:19:33.162 "r_mbytes_per_sec": 0, 00:19:33.162 "w_mbytes_per_sec": 0 00:19:33.162 }, 00:19:33.162 "claimed": false, 00:19:33.162 "zoned": false, 00:19:33.162 "supported_io_types": { 00:19:33.162 "read": true, 00:19:33.162 "write": true, 00:19:33.162 "unmap": true, 00:19:33.162 "flush": true, 00:19:33.162 "reset": true, 00:19:33.162 "nvme_admin": false, 00:19:33.162 "nvme_io": false, 00:19:33.162 "nvme_io_md": false, 00:19:33.162 "write_zeroes": true, 00:19:33.162 "zcopy": false, 00:19:33.162 "get_zone_info": false, 00:19:33.162 "zone_management": false, 00:19:33.162 "zone_append": false, 00:19:33.162 "compare": false, 00:19:33.162 "compare_and_write": false, 00:19:33.162 "abort": false, 00:19:33.162 "seek_hole": false, 00:19:33.162 "seek_data": false, 00:19:33.162 "copy": false, 00:19:33.162 "nvme_iov_md": false 00:19:33.162 }, 00:19:33.162 "memory_domains": [ 00:19:33.162 { 00:19:33.162 "dma_device_id": "system", 00:19:33.162 "dma_device_type": 1 00:19:33.162 }, 00:19:33.162 { 00:19:33.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.162 "dma_device_type": 2 00:19:33.162 }, 00:19:33.162 { 00:19:33.162 "dma_device_id": "system", 00:19:33.162 "dma_device_type": 1 00:19:33.162 }, 00:19:33.162 { 00:19:33.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.162 "dma_device_type": 2 00:19:33.162 }, 00:19:33.162 { 00:19:33.162 "dma_device_id": "system", 00:19:33.162 "dma_device_type": 1 00:19:33.162 }, 00:19:33.162 { 00:19:33.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.162 "dma_device_type": 2 00:19:33.162 } 00:19:33.162 ], 00:19:33.162 "driver_specific": { 00:19:33.162 "raid": { 00:19:33.162 "uuid": "f39b8d28-2e1f-4de7-942d-608b6252f0b9", 00:19:33.162 "strip_size_kb": 64, 00:19:33.162 "state": "online", 00:19:33.162 "raid_level": "raid0", 00:19:33.162 "superblock": true, 00:19:33.162 "num_base_bdevs": 3, 00:19:33.162 "num_base_bdevs_discovered": 3, 00:19:33.162 "num_base_bdevs_operational": 3, 00:19:33.162 "base_bdevs_list": [ 00:19:33.162 { 00:19:33.162 "name": "BaseBdev1", 00:19:33.162 "uuid": "2a6d03ab-fe9d-457e-a37f-b361f19368ff", 00:19:33.162 "is_configured": true, 00:19:33.162 "data_offset": 2048, 00:19:33.162 "data_size": 63488 00:19:33.162 }, 00:19:33.162 { 00:19:33.162 "name": "BaseBdev2", 00:19:33.163 "uuid": "d1add888-5bd9-4b16-8b0c-d6398a559dcd", 00:19:33.163 "is_configured": true, 00:19:33.163 "data_offset": 2048, 00:19:33.163 "data_size": 63488 00:19:33.163 }, 00:19:33.163 { 00:19:33.163 "name": "BaseBdev3", 00:19:33.163 "uuid": "593999dd-f993-4830-8d01-a3b2a1fbf789", 00:19:33.163 "is_configured": true, 00:19:33.163 "data_offset": 2048, 00:19:33.163 "data_size": 63488 00:19:33.163 } 00:19:33.163 ] 00:19:33.163 } 00:19:33.163 } 00:19:33.163 }' 00:19:33.163 00:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:33.421 00:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:33.421 BaseBdev2 00:19:33.421 BaseBdev3' 00:19:33.421 00:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:33.421 00:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:33.421 00:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- 
# jq '.[]' 00:19:33.421 00:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:33.421 "name": "BaseBdev1", 00:19:33.421 "aliases": [ 00:19:33.421 "2a6d03ab-fe9d-457e-a37f-b361f19368ff" 00:19:33.421 ], 00:19:33.421 "product_name": "Malloc disk", 00:19:33.421 "block_size": 512, 00:19:33.421 "num_blocks": 65536, 00:19:33.421 "uuid": "2a6d03ab-fe9d-457e-a37f-b361f19368ff", 00:19:33.421 "assigned_rate_limits": { 00:19:33.421 "rw_ios_per_sec": 0, 00:19:33.421 "rw_mbytes_per_sec": 0, 00:19:33.421 "r_mbytes_per_sec": 0, 00:19:33.421 "w_mbytes_per_sec": 0 00:19:33.421 }, 00:19:33.421 "claimed": true, 00:19:33.421 "claim_type": "exclusive_write", 00:19:33.421 "zoned": false, 00:19:33.421 "supported_io_types": { 00:19:33.421 "read": true, 00:19:33.421 "write": true, 00:19:33.421 "unmap": true, 00:19:33.421 "flush": true, 00:19:33.421 "reset": true, 00:19:33.421 "nvme_admin": false, 00:19:33.421 "nvme_io": false, 00:19:33.421 "nvme_io_md": false, 00:19:33.421 "write_zeroes": true, 00:19:33.421 "zcopy": true, 00:19:33.421 "get_zone_info": false, 00:19:33.421 "zone_management": false, 00:19:33.421 "zone_append": false, 00:19:33.421 "compare": false, 00:19:33.421 "compare_and_write": false, 00:19:33.421 "abort": true, 00:19:33.421 "seek_hole": false, 00:19:33.421 "seek_data": false, 00:19:33.421 "copy": true, 00:19:33.421 "nvme_iov_md": false 00:19:33.421 }, 00:19:33.421 "memory_domains": [ 00:19:33.421 { 00:19:33.421 "dma_device_id": "system", 00:19:33.421 "dma_device_type": 1 00:19:33.421 }, 00:19:33.421 { 00:19:33.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.421 "dma_device_type": 2 00:19:33.421 } 00:19:33.421 ], 00:19:33.421 "driver_specific": {} 00:19:33.421 }' 00:19:33.421 00:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:33.421 00:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:33.680 00:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:33.680 00:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:33.680 00:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:33.680 00:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:33.680 00:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:33.680 00:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:33.680 00:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:33.680 00:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:33.680 00:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:33.680 00:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:33.680 00:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:33.680 00:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:33.680 00:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:33.938 00:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:33.938 "name": "BaseBdev2", 
00:19:33.938 "aliases": [ 00:19:33.938 "d1add888-5bd9-4b16-8b0c-d6398a559dcd" 00:19:33.938 ], 00:19:33.938 "product_name": "Malloc disk", 00:19:33.938 "block_size": 512, 00:19:33.938 "num_blocks": 65536, 00:19:33.938 "uuid": "d1add888-5bd9-4b16-8b0c-d6398a559dcd", 00:19:33.938 "assigned_rate_limits": { 00:19:33.938 "rw_ios_per_sec": 0, 00:19:33.938 "rw_mbytes_per_sec": 0, 00:19:33.938 "r_mbytes_per_sec": 0, 00:19:33.938 "w_mbytes_per_sec": 0 00:19:33.938 }, 00:19:33.938 "claimed": true, 00:19:33.938 "claim_type": "exclusive_write", 00:19:33.939 "zoned": false, 00:19:33.939 "supported_io_types": { 00:19:33.939 "read": true, 00:19:33.939 "write": true, 00:19:33.939 "unmap": true, 00:19:33.939 "flush": true, 00:19:33.939 "reset": true, 00:19:33.939 "nvme_admin": false, 00:19:33.939 "nvme_io": false, 00:19:33.939 "nvme_io_md": false, 00:19:33.939 "write_zeroes": true, 00:19:33.939 "zcopy": true, 00:19:33.939 "get_zone_info": false, 00:19:33.939 "zone_management": false, 00:19:33.939 "zone_append": false, 00:19:33.939 "compare": false, 00:19:33.939 "compare_and_write": false, 00:19:33.939 "abort": true, 00:19:33.939 "seek_hole": false, 00:19:33.939 "seek_data": false, 00:19:33.939 "copy": true, 00:19:33.939 "nvme_iov_md": false 00:19:33.939 }, 00:19:33.939 "memory_domains": [ 00:19:33.939 { 00:19:33.939 "dma_device_id": "system", 00:19:33.939 "dma_device_type": 1 00:19:33.939 }, 00:19:33.939 { 00:19:33.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.939 "dma_device_type": 2 00:19:33.939 } 00:19:33.939 ], 00:19:33.939 "driver_specific": {} 00:19:33.939 }' 00:19:33.939 00:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:33.939 00:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:33.939 00:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:33.939 00:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:34.197 00:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:34.197 00:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:34.197 00:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:34.197 00:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:34.197 00:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:34.197 00:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:34.197 00:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:34.197 00:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:34.456 00:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:34.456 00:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:34.456 00:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:34.716 00:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:34.716 "name": "BaseBdev3", 00:19:34.716 "aliases": [ 00:19:34.716 "593999dd-f993-4830-8d01-a3b2a1fbf789" 00:19:34.716 ], 00:19:34.716 "product_name": "Malloc disk", 00:19:34.716 
"block_size": 512, 00:19:34.716 "num_blocks": 65536, 00:19:34.716 "uuid": "593999dd-f993-4830-8d01-a3b2a1fbf789", 00:19:34.716 "assigned_rate_limits": { 00:19:34.716 "rw_ios_per_sec": 0, 00:19:34.716 "rw_mbytes_per_sec": 0, 00:19:34.716 "r_mbytes_per_sec": 0, 00:19:34.716 "w_mbytes_per_sec": 0 00:19:34.716 }, 00:19:34.716 "claimed": true, 00:19:34.716 "claim_type": "exclusive_write", 00:19:34.716 "zoned": false, 00:19:34.716 "supported_io_types": { 00:19:34.716 "read": true, 00:19:34.716 "write": true, 00:19:34.716 "unmap": true, 00:19:34.716 "flush": true, 00:19:34.716 "reset": true, 00:19:34.716 "nvme_admin": false, 00:19:34.716 "nvme_io": false, 00:19:34.716 "nvme_io_md": false, 00:19:34.716 "write_zeroes": true, 00:19:34.716 "zcopy": true, 00:19:34.716 "get_zone_info": false, 00:19:34.716 "zone_management": false, 00:19:34.716 "zone_append": false, 00:19:34.716 "compare": false, 00:19:34.716 "compare_and_write": false, 00:19:34.716 "abort": true, 00:19:34.716 "seek_hole": false, 00:19:34.716 "seek_data": false, 00:19:34.716 "copy": true, 00:19:34.716 "nvme_iov_md": false 00:19:34.716 }, 00:19:34.716 "memory_domains": [ 00:19:34.716 { 00:19:34.716 "dma_device_id": "system", 00:19:34.716 "dma_device_type": 1 00:19:34.716 }, 00:19:34.716 { 00:19:34.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.716 "dma_device_type": 2 00:19:34.716 } 00:19:34.716 ], 00:19:34.716 "driver_specific": {} 00:19:34.716 }' 00:19:34.716 00:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:34.716 00:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:34.716 00:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:34.716 00:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:34.716 00:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:34.716 00:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:34.716 00:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:34.716 00:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:34.716 00:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:34.716 00:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:34.975 00:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:34.975 00:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:34.975 00:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:35.233 [2024-07-25 00:45:57.676249] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:35.233 [2024-07-25 00:45:57.676500] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:35.233 [2024-07-25 00:45:57.676681] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:35.233 00:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:35.233 00:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:19:35.233 00:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 
-- # case $1 in 00:19:35.233 00:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:19:35.233 00:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:19:35.233 00:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:19:35.234 00:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:35.234 00:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:19:35.234 00:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:35.234 00:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:35.234 00:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:35.234 00:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:35.234 00:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:35.234 00:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:35.234 00:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:35.234 00:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.234 00:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:35.492 00:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:35.492 "name": "Existed_Raid", 00:19:35.492 "uuid": "f39b8d28-2e1f-4de7-942d-608b6252f0b9", 00:19:35.492 "strip_size_kb": 64, 00:19:35.492 "state": "offline", 00:19:35.492 "raid_level": "raid0", 00:19:35.492 "superblock": true, 00:19:35.492 "num_base_bdevs": 3, 00:19:35.492 "num_base_bdevs_discovered": 2, 00:19:35.492 "num_base_bdevs_operational": 2, 00:19:35.492 "base_bdevs_list": [ 00:19:35.492 { 00:19:35.492 "name": null, 00:19:35.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.492 "is_configured": false, 00:19:35.492 "data_offset": 2048, 00:19:35.492 "data_size": 63488 00:19:35.492 }, 00:19:35.492 { 00:19:35.492 "name": "BaseBdev2", 00:19:35.492 "uuid": "d1add888-5bd9-4b16-8b0c-d6398a559dcd", 00:19:35.492 "is_configured": true, 00:19:35.492 "data_offset": 2048, 00:19:35.492 "data_size": 63488 00:19:35.492 }, 00:19:35.492 { 00:19:35.492 "name": "BaseBdev3", 00:19:35.492 "uuid": "593999dd-f993-4830-8d01-a3b2a1fbf789", 00:19:35.492 "is_configured": true, 00:19:35.492 "data_offset": 2048, 00:19:35.492 "data_size": 63488 00:19:35.492 } 00:19:35.492 ] 00:19:35.492 }' 00:19:35.492 00:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:35.492 00:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.060 00:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:36.060 00:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:36.061 00:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:36.061 00:45:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.320 00:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:36.320 00:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:36.320 00:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:36.580 [2024-07-25 00:45:59.163025] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:36.878 00:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:36.878 00:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:36.878 00:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.878 00:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:36.878 00:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:36.878 00:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:36.878 00:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:37.135 [2024-07-25 00:45:59.691564] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:37.135 [2024-07-25 00:45:59.691764] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:19:37.394 00:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:37.394 00:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:37.394 00:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.394 00:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:37.652 00:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:37.652 00:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:37.652 00:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:19:37.652 00:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:37.652 00:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:37.652 00:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:37.652 BaseBdev2 00:19:37.652 00:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:37.652 00:46:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:37.652 00:46:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:37.652 00:46:00 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@899 -- # local i 00:19:37.652 00:46:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:37.652 00:46:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:37.652 00:46:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:37.910 00:46:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:38.169 [ 00:19:38.169 { 00:19:38.169 "name": "BaseBdev2", 00:19:38.169 "aliases": [ 00:19:38.169 "45ead237-8c38-4c1c-a980-450dde8ac1a6" 00:19:38.169 ], 00:19:38.169 "product_name": "Malloc disk", 00:19:38.169 "block_size": 512, 00:19:38.169 "num_blocks": 65536, 00:19:38.169 "uuid": "45ead237-8c38-4c1c-a980-450dde8ac1a6", 00:19:38.169 "assigned_rate_limits": { 00:19:38.169 "rw_ios_per_sec": 0, 00:19:38.169 "rw_mbytes_per_sec": 0, 00:19:38.169 "r_mbytes_per_sec": 0, 00:19:38.169 "w_mbytes_per_sec": 0 00:19:38.169 }, 00:19:38.169 "claimed": false, 00:19:38.169 "zoned": false, 00:19:38.169 "supported_io_types": { 00:19:38.169 "read": true, 00:19:38.169 "write": true, 00:19:38.169 "unmap": true, 00:19:38.169 "flush": true, 00:19:38.169 "reset": true, 00:19:38.169 "nvme_admin": false, 00:19:38.169 "nvme_io": false, 00:19:38.169 "nvme_io_md": false, 00:19:38.169 "write_zeroes": true, 00:19:38.169 "zcopy": true, 00:19:38.169 "get_zone_info": false, 00:19:38.169 "zone_management": false, 00:19:38.169 "zone_append": false, 00:19:38.169 "compare": false, 00:19:38.169 "compare_and_write": false, 00:19:38.169 "abort": true, 00:19:38.169 "seek_hole": false, 00:19:38.169 "seek_data": false, 00:19:38.169 "copy": true, 00:19:38.169 "nvme_iov_md": false 00:19:38.169 }, 00:19:38.169 "memory_domains": [ 00:19:38.169 { 00:19:38.169 "dma_device_id": "system", 00:19:38.169 "dma_device_type": 1 00:19:38.169 }, 00:19:38.169 { 00:19:38.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:38.169 "dma_device_type": 2 00:19:38.169 } 00:19:38.169 ], 00:19:38.169 "driver_specific": {} 00:19:38.169 } 00:19:38.169 ] 00:19:38.169 00:46:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:38.169 00:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:38.169 00:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:38.169 00:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:38.427 BaseBdev3 00:19:38.427 00:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:38.427 00:46:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:38.427 00:46:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:38.427 00:46:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:38.427 00:46:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:38.427 00:46:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:38.427 00:46:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:38.427 00:46:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:38.686 [ 00:19:38.686 { 00:19:38.686 "name": "BaseBdev3", 00:19:38.686 "aliases": [ 00:19:38.686 "39322f98-e8e1-4126-8e63-0a6df1ea7d07" 00:19:38.686 ], 00:19:38.686 "product_name": "Malloc disk", 00:19:38.686 "block_size": 512, 00:19:38.686 "num_blocks": 65536, 00:19:38.686 "uuid": "39322f98-e8e1-4126-8e63-0a6df1ea7d07", 00:19:38.686 "assigned_rate_limits": { 00:19:38.686 "rw_ios_per_sec": 0, 00:19:38.686 "rw_mbytes_per_sec": 0, 00:19:38.686 "r_mbytes_per_sec": 0, 00:19:38.686 "w_mbytes_per_sec": 0 00:19:38.686 }, 00:19:38.686 "claimed": false, 00:19:38.686 "zoned": false, 00:19:38.686 "supported_io_types": { 00:19:38.686 "read": true, 00:19:38.686 "write": true, 00:19:38.686 "unmap": true, 00:19:38.686 "flush": true, 00:19:38.686 "reset": true, 00:19:38.686 "nvme_admin": false, 00:19:38.686 "nvme_io": false, 00:19:38.686 "nvme_io_md": false, 00:19:38.686 "write_zeroes": true, 00:19:38.686 "zcopy": true, 00:19:38.686 "get_zone_info": false, 00:19:38.686 "zone_management": false, 00:19:38.686 "zone_append": false, 00:19:38.686 "compare": false, 00:19:38.686 "compare_and_write": false, 00:19:38.686 "abort": true, 00:19:38.686 "seek_hole": false, 00:19:38.686 "seek_data": false, 00:19:38.686 "copy": true, 00:19:38.686 "nvme_iov_md": false 00:19:38.686 }, 00:19:38.686 "memory_domains": [ 00:19:38.686 { 00:19:38.686 "dma_device_id": "system", 00:19:38.686 "dma_device_type": 1 00:19:38.686 }, 00:19:38.686 { 00:19:38.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:38.686 "dma_device_type": 2 00:19:38.686 } 00:19:38.686 ], 00:19:38.686 "driver_specific": {} 00:19:38.686 } 00:19:38.686 ] 00:19:38.686 00:46:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:38.686 00:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:38.686 00:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:38.686 00:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:38.945 [2024-07-25 00:46:01.401906] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:38.945 [2024-07-25 00:46:01.402113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:38.945 [2024-07-25 00:46:01.402243] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:38.945 [2024-07-25 00:46:01.404204] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:38.945 00:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:38.945 00:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:38.945 00:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:38.945 00:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- 
# local raid_level=raid0 00:19:38.945 00:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:38.945 00:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:38.945 00:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:38.945 00:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:38.945 00:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:38.945 00:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:38.945 00:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.945 00:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:39.203 00:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:39.203 "name": "Existed_Raid", 00:19:39.203 "uuid": "add98bcc-efea-46d1-bf1d-5ab59245d54f", 00:19:39.203 "strip_size_kb": 64, 00:19:39.203 "state": "configuring", 00:19:39.203 "raid_level": "raid0", 00:19:39.203 "superblock": true, 00:19:39.203 "num_base_bdevs": 3, 00:19:39.203 "num_base_bdevs_discovered": 2, 00:19:39.203 "num_base_bdevs_operational": 3, 00:19:39.203 "base_bdevs_list": [ 00:19:39.203 { 00:19:39.203 "name": "BaseBdev1", 00:19:39.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.203 "is_configured": false, 00:19:39.203 "data_offset": 0, 00:19:39.203 "data_size": 0 00:19:39.203 }, 00:19:39.203 { 00:19:39.203 "name": "BaseBdev2", 00:19:39.203 "uuid": "45ead237-8c38-4c1c-a980-450dde8ac1a6", 00:19:39.203 "is_configured": true, 00:19:39.203 "data_offset": 2048, 00:19:39.203 "data_size": 63488 00:19:39.203 }, 00:19:39.203 { 00:19:39.203 "name": "BaseBdev3", 00:19:39.203 "uuid": "39322f98-e8e1-4126-8e63-0a6df1ea7d07", 00:19:39.203 "is_configured": true, 00:19:39.203 "data_offset": 2048, 00:19:39.203 "data_size": 63488 00:19:39.203 } 00:19:39.203 ] 00:19:39.203 }' 00:19:39.203 00:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:39.203 00:46:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.770 00:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:39.770 [2024-07-25 00:46:02.402072] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:39.770 00:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:39.770 00:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:39.770 00:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:39.770 00:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:39.770 00:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:39.770 00:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:39.770 00:46:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:39.770 00:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:39.770 00:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:39.770 00:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:40.030 00:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.030 00:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:40.030 00:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:40.030 "name": "Existed_Raid", 00:19:40.030 "uuid": "add98bcc-efea-46d1-bf1d-5ab59245d54f", 00:19:40.030 "strip_size_kb": 64, 00:19:40.030 "state": "configuring", 00:19:40.030 "raid_level": "raid0", 00:19:40.030 "superblock": true, 00:19:40.030 "num_base_bdevs": 3, 00:19:40.030 "num_base_bdevs_discovered": 1, 00:19:40.030 "num_base_bdevs_operational": 3, 00:19:40.030 "base_bdevs_list": [ 00:19:40.030 { 00:19:40.030 "name": "BaseBdev1", 00:19:40.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.030 "is_configured": false, 00:19:40.030 "data_offset": 0, 00:19:40.030 "data_size": 0 00:19:40.030 }, 00:19:40.030 { 00:19:40.030 "name": null, 00:19:40.030 "uuid": "45ead237-8c38-4c1c-a980-450dde8ac1a6", 00:19:40.030 "is_configured": false, 00:19:40.030 "data_offset": 2048, 00:19:40.030 "data_size": 63488 00:19:40.030 }, 00:19:40.030 { 00:19:40.030 "name": "BaseBdev3", 00:19:40.030 "uuid": "39322f98-e8e1-4126-8e63-0a6df1ea7d07", 00:19:40.030 "is_configured": true, 00:19:40.030 "data_offset": 2048, 00:19:40.030 "data_size": 63488 00:19:40.030 } 00:19:40.030 ] 00:19:40.030 }' 00:19:40.030 00:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:40.030 00:46:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.599 00:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.599 00:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:40.857 00:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:40.857 00:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:41.115 [2024-07-25 00:46:03.601941] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:41.115 BaseBdev1 00:19:41.115 00:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:41.115 00:46:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:41.115 00:46:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:41.115 00:46:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:41.115 00:46:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:41.115 00:46:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:41.115 00:46:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:41.372 00:46:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:41.631 [ 00:19:41.631 { 00:19:41.631 "name": "BaseBdev1", 00:19:41.631 "aliases": [ 00:19:41.631 "bc51166f-9cdd-4c32-a2b4-25d77bcebc94" 00:19:41.631 ], 00:19:41.631 "product_name": "Malloc disk", 00:19:41.631 "block_size": 512, 00:19:41.631 "num_blocks": 65536, 00:19:41.631 "uuid": "bc51166f-9cdd-4c32-a2b4-25d77bcebc94", 00:19:41.631 "assigned_rate_limits": { 00:19:41.631 "rw_ios_per_sec": 0, 00:19:41.631 "rw_mbytes_per_sec": 0, 00:19:41.631 "r_mbytes_per_sec": 0, 00:19:41.631 "w_mbytes_per_sec": 0 00:19:41.631 }, 00:19:41.631 "claimed": true, 00:19:41.631 "claim_type": "exclusive_write", 00:19:41.631 "zoned": false, 00:19:41.631 "supported_io_types": { 00:19:41.631 "read": true, 00:19:41.631 "write": true, 00:19:41.631 "unmap": true, 00:19:41.631 "flush": true, 00:19:41.631 "reset": true, 00:19:41.631 "nvme_admin": false, 00:19:41.631 "nvme_io": false, 00:19:41.631 "nvme_io_md": false, 00:19:41.631 "write_zeroes": true, 00:19:41.631 "zcopy": true, 00:19:41.631 "get_zone_info": false, 00:19:41.631 "zone_management": false, 00:19:41.631 "zone_append": false, 00:19:41.631 "compare": false, 00:19:41.631 "compare_and_write": false, 00:19:41.631 "abort": true, 00:19:41.631 "seek_hole": false, 00:19:41.631 "seek_data": false, 00:19:41.631 "copy": true, 00:19:41.631 "nvme_iov_md": false 00:19:41.631 }, 00:19:41.631 "memory_domains": [ 00:19:41.631 { 00:19:41.631 "dma_device_id": "system", 00:19:41.631 "dma_device_type": 1 00:19:41.631 }, 00:19:41.631 { 00:19:41.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:41.631 "dma_device_type": 2 00:19:41.631 } 00:19:41.631 ], 00:19:41.631 "driver_specific": {} 00:19:41.631 } 00:19:41.631 ] 00:19:41.631 00:46:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:41.631 00:46:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:41.631 00:46:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:41.631 00:46:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:41.631 00:46:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:41.631 00:46:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:41.631 00:46:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:41.631 00:46:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:41.631 00:46:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:41.631 00:46:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:41.631 00:46:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:41.631 00:46:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.631 00:46:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:41.631 00:46:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:41.631 "name": "Existed_Raid", 00:19:41.631 "uuid": "add98bcc-efea-46d1-bf1d-5ab59245d54f", 00:19:41.631 "strip_size_kb": 64, 00:19:41.631 "state": "configuring", 00:19:41.631 "raid_level": "raid0", 00:19:41.631 "superblock": true, 00:19:41.631 "num_base_bdevs": 3, 00:19:41.631 "num_base_bdevs_discovered": 2, 00:19:41.631 "num_base_bdevs_operational": 3, 00:19:41.631 "base_bdevs_list": [ 00:19:41.631 { 00:19:41.631 "name": "BaseBdev1", 00:19:41.631 "uuid": "bc51166f-9cdd-4c32-a2b4-25d77bcebc94", 00:19:41.631 "is_configured": true, 00:19:41.631 "data_offset": 2048, 00:19:41.631 "data_size": 63488 00:19:41.631 }, 00:19:41.631 { 00:19:41.631 "name": null, 00:19:41.631 "uuid": "45ead237-8c38-4c1c-a980-450dde8ac1a6", 00:19:41.631 "is_configured": false, 00:19:41.631 "data_offset": 2048, 00:19:41.631 "data_size": 63488 00:19:41.631 }, 00:19:41.631 { 00:19:41.631 "name": "BaseBdev3", 00:19:41.631 "uuid": "39322f98-e8e1-4126-8e63-0a6df1ea7d07", 00:19:41.631 "is_configured": true, 00:19:41.631 "data_offset": 2048, 00:19:41.631 "data_size": 63488 00:19:41.631 } 00:19:41.631 ] 00:19:41.631 }' 00:19:41.631 00:46:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:41.631 00:46:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.198 00:46:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.198 00:46:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:42.457 00:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:42.457 00:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:42.716 [2024-07-25 00:46:05.274304] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:42.716 00:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:42.716 00:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:42.716 00:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:42.716 00:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:42.716 00:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:42.716 00:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:42.716 00:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:42.716 00:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:42.716 00:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:42.716 00:46:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:19:42.716 00:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.716 00:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:42.975 00:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:42.975 "name": "Existed_Raid", 00:19:42.975 "uuid": "add98bcc-efea-46d1-bf1d-5ab59245d54f", 00:19:42.975 "strip_size_kb": 64, 00:19:42.975 "state": "configuring", 00:19:42.975 "raid_level": "raid0", 00:19:42.975 "superblock": true, 00:19:42.975 "num_base_bdevs": 3, 00:19:42.975 "num_base_bdevs_discovered": 1, 00:19:42.975 "num_base_bdevs_operational": 3, 00:19:42.975 "base_bdevs_list": [ 00:19:42.975 { 00:19:42.975 "name": "BaseBdev1", 00:19:42.975 "uuid": "bc51166f-9cdd-4c32-a2b4-25d77bcebc94", 00:19:42.975 "is_configured": true, 00:19:42.975 "data_offset": 2048, 00:19:42.975 "data_size": 63488 00:19:42.975 }, 00:19:42.975 { 00:19:42.975 "name": null, 00:19:42.975 "uuid": "45ead237-8c38-4c1c-a980-450dde8ac1a6", 00:19:42.975 "is_configured": false, 00:19:42.975 "data_offset": 2048, 00:19:42.975 "data_size": 63488 00:19:42.975 }, 00:19:42.975 { 00:19:42.975 "name": null, 00:19:42.975 "uuid": "39322f98-e8e1-4126-8e63-0a6df1ea7d07", 00:19:42.975 "is_configured": false, 00:19:42.975 "data_offset": 2048, 00:19:42.975 "data_size": 63488 00:19:42.975 } 00:19:42.975 ] 00:19:42.975 }' 00:19:42.975 00:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:42.975 00:46:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.544 00:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:43.544 00:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.544 00:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:43.544 00:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:43.803 [2024-07-25 00:46:06.302569] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:43.803 00:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:43.803 00:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:43.803 00:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:43.803 00:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:43.803 00:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:43.803 00:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:43.803 00:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:43.803 00:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:43.803 00:46:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:43.803 00:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:43.803 00:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.803 00:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:44.063 00:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:44.063 "name": "Existed_Raid", 00:19:44.063 "uuid": "add98bcc-efea-46d1-bf1d-5ab59245d54f", 00:19:44.063 "strip_size_kb": 64, 00:19:44.063 "state": "configuring", 00:19:44.063 "raid_level": "raid0", 00:19:44.063 "superblock": true, 00:19:44.063 "num_base_bdevs": 3, 00:19:44.063 "num_base_bdevs_discovered": 2, 00:19:44.063 "num_base_bdevs_operational": 3, 00:19:44.063 "base_bdevs_list": [ 00:19:44.063 { 00:19:44.063 "name": "BaseBdev1", 00:19:44.063 "uuid": "bc51166f-9cdd-4c32-a2b4-25d77bcebc94", 00:19:44.063 "is_configured": true, 00:19:44.063 "data_offset": 2048, 00:19:44.063 "data_size": 63488 00:19:44.063 }, 00:19:44.063 { 00:19:44.063 "name": null, 00:19:44.063 "uuid": "45ead237-8c38-4c1c-a980-450dde8ac1a6", 00:19:44.063 "is_configured": false, 00:19:44.063 "data_offset": 2048, 00:19:44.063 "data_size": 63488 00:19:44.063 }, 00:19:44.063 { 00:19:44.063 "name": "BaseBdev3", 00:19:44.063 "uuid": "39322f98-e8e1-4126-8e63-0a6df1ea7d07", 00:19:44.063 "is_configured": true, 00:19:44.063 "data_offset": 2048, 00:19:44.063 "data_size": 63488 00:19:44.063 } 00:19:44.063 ] 00:19:44.063 }' 00:19:44.063 00:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:44.063 00:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.632 00:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:44.632 00:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.891 00:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:44.891 00:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:44.891 [2024-07-25 00:46:07.463072] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:45.150 00:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:45.150 00:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:45.150 00:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:45.150 00:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:45.150 00:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:45.150 00:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:45.150 00:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:45.150 00:46:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:45.150 00:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:45.150 00:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:45.150 00:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:45.150 00:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.409 00:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:45.409 "name": "Existed_Raid", 00:19:45.409 "uuid": "add98bcc-efea-46d1-bf1d-5ab59245d54f", 00:19:45.409 "strip_size_kb": 64, 00:19:45.409 "state": "configuring", 00:19:45.409 "raid_level": "raid0", 00:19:45.409 "superblock": true, 00:19:45.409 "num_base_bdevs": 3, 00:19:45.409 "num_base_bdevs_discovered": 1, 00:19:45.409 "num_base_bdevs_operational": 3, 00:19:45.409 "base_bdevs_list": [ 00:19:45.409 { 00:19:45.409 "name": null, 00:19:45.409 "uuid": "bc51166f-9cdd-4c32-a2b4-25d77bcebc94", 00:19:45.409 "is_configured": false, 00:19:45.409 "data_offset": 2048, 00:19:45.409 "data_size": 63488 00:19:45.409 }, 00:19:45.409 { 00:19:45.409 "name": null, 00:19:45.409 "uuid": "45ead237-8c38-4c1c-a980-450dde8ac1a6", 00:19:45.409 "is_configured": false, 00:19:45.409 "data_offset": 2048, 00:19:45.409 "data_size": 63488 00:19:45.409 }, 00:19:45.409 { 00:19:45.409 "name": "BaseBdev3", 00:19:45.409 "uuid": "39322f98-e8e1-4126-8e63-0a6df1ea7d07", 00:19:45.409 "is_configured": true, 00:19:45.409 "data_offset": 2048, 00:19:45.409 "data_size": 63488 00:19:45.409 } 00:19:45.409 ] 00:19:45.409 }' 00:19:45.409 00:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:45.409 00:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.979 00:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:45.979 00:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.979 00:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:45.979 00:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:46.238 [2024-07-25 00:46:08.694936] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:46.238 00:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:46.238 00:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:46.238 00:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:46.238 00:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:46.238 00:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:46.238 00:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:19:46.238 00:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:46.238 00:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:46.238 00:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:46.238 00:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:46.238 00:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.238 00:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:46.498 00:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:46.498 "name": "Existed_Raid", 00:19:46.498 "uuid": "add98bcc-efea-46d1-bf1d-5ab59245d54f", 00:19:46.498 "strip_size_kb": 64, 00:19:46.498 "state": "configuring", 00:19:46.498 "raid_level": "raid0", 00:19:46.498 "superblock": true, 00:19:46.498 "num_base_bdevs": 3, 00:19:46.498 "num_base_bdevs_discovered": 2, 00:19:46.498 "num_base_bdevs_operational": 3, 00:19:46.498 "base_bdevs_list": [ 00:19:46.498 { 00:19:46.498 "name": null, 00:19:46.498 "uuid": "bc51166f-9cdd-4c32-a2b4-25d77bcebc94", 00:19:46.498 "is_configured": false, 00:19:46.498 "data_offset": 2048, 00:19:46.498 "data_size": 63488 00:19:46.498 }, 00:19:46.498 { 00:19:46.498 "name": "BaseBdev2", 00:19:46.498 "uuid": "45ead237-8c38-4c1c-a980-450dde8ac1a6", 00:19:46.498 "is_configured": true, 00:19:46.498 "data_offset": 2048, 00:19:46.498 "data_size": 63488 00:19:46.498 }, 00:19:46.498 { 00:19:46.498 "name": "BaseBdev3", 00:19:46.498 "uuid": "39322f98-e8e1-4126-8e63-0a6df1ea7d07", 00:19:46.498 "is_configured": true, 00:19:46.498 "data_offset": 2048, 00:19:46.498 "data_size": 63488 00:19:46.498 } 00:19:46.498 ] 00:19:46.498 }' 00:19:46.498 00:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:46.498 00:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.067 00:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:47.067 00:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.067 00:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:47.067 00:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.067 00:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:47.327 00:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u bc51166f-9cdd-4c32-a2b4-25d77bcebc94 00:19:47.586 [2024-07-25 00:46:10.185502] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:47.586 [2024-07-25 00:46:10.185678] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:19:47.586 [2024-07-25 00:46:10.185689] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 190464, blocklen 512 00:19:47.586 [2024-07-25 00:46:10.185782] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:47.586 [2024-07-25 00:46:10.186076] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:19:47.586 [2024-07-25 00:46:10.186087] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008a80 00:19:47.586 [2024-07-25 00:46:10.186216] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:47.586 NewBaseBdev 00:19:47.586 00:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:47.586 00:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:19:47.586 00:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:47.586 00:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:47.586 00:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:47.586 00:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:47.586 00:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:47.846 00:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:48.105 [ 00:19:48.105 { 00:19:48.105 "name": "NewBaseBdev", 00:19:48.105 "aliases": [ 00:19:48.105 "bc51166f-9cdd-4c32-a2b4-25d77bcebc94" 00:19:48.105 ], 00:19:48.105 "product_name": "Malloc disk", 00:19:48.105 "block_size": 512, 00:19:48.105 "num_blocks": 65536, 00:19:48.105 "uuid": "bc51166f-9cdd-4c32-a2b4-25d77bcebc94", 00:19:48.105 "assigned_rate_limits": { 00:19:48.105 "rw_ios_per_sec": 0, 00:19:48.105 "rw_mbytes_per_sec": 0, 00:19:48.105 "r_mbytes_per_sec": 0, 00:19:48.105 "w_mbytes_per_sec": 0 00:19:48.105 }, 00:19:48.105 "claimed": true, 00:19:48.105 "claim_type": "exclusive_write", 00:19:48.105 "zoned": false, 00:19:48.105 "supported_io_types": { 00:19:48.105 "read": true, 00:19:48.105 "write": true, 00:19:48.105 "unmap": true, 00:19:48.105 "flush": true, 00:19:48.105 "reset": true, 00:19:48.105 "nvme_admin": false, 00:19:48.105 "nvme_io": false, 00:19:48.105 "nvme_io_md": false, 00:19:48.105 "write_zeroes": true, 00:19:48.105 "zcopy": true, 00:19:48.105 "get_zone_info": false, 00:19:48.105 "zone_management": false, 00:19:48.105 "zone_append": false, 00:19:48.105 "compare": false, 00:19:48.105 "compare_and_write": false, 00:19:48.105 "abort": true, 00:19:48.105 "seek_hole": false, 00:19:48.105 "seek_data": false, 00:19:48.105 "copy": true, 00:19:48.105 "nvme_iov_md": false 00:19:48.105 }, 00:19:48.106 "memory_domains": [ 00:19:48.106 { 00:19:48.106 "dma_device_id": "system", 00:19:48.106 "dma_device_type": 1 00:19:48.106 }, 00:19:48.106 { 00:19:48.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:48.106 "dma_device_type": 2 00:19:48.106 } 00:19:48.106 ], 00:19:48.106 "driver_specific": {} 00:19:48.106 } 00:19:48.106 ] 00:19:48.106 00:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:48.106 00:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:19:48.106 00:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:48.106 00:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:48.106 00:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:48.106 00:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:48.106 00:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:48.106 00:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:48.106 00:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:48.106 00:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:48.106 00:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:48.106 00:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.106 00:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:48.365 00:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:48.365 "name": "Existed_Raid", 00:19:48.365 "uuid": "add98bcc-efea-46d1-bf1d-5ab59245d54f", 00:19:48.365 "strip_size_kb": 64, 00:19:48.365 "state": "online", 00:19:48.365 "raid_level": "raid0", 00:19:48.365 "superblock": true, 00:19:48.365 "num_base_bdevs": 3, 00:19:48.365 "num_base_bdevs_discovered": 3, 00:19:48.365 "num_base_bdevs_operational": 3, 00:19:48.365 "base_bdevs_list": [ 00:19:48.365 { 00:19:48.365 "name": "NewBaseBdev", 00:19:48.365 "uuid": "bc51166f-9cdd-4c32-a2b4-25d77bcebc94", 00:19:48.365 "is_configured": true, 00:19:48.365 "data_offset": 2048, 00:19:48.365 "data_size": 63488 00:19:48.365 }, 00:19:48.365 { 00:19:48.365 "name": "BaseBdev2", 00:19:48.365 "uuid": "45ead237-8c38-4c1c-a980-450dde8ac1a6", 00:19:48.365 "is_configured": true, 00:19:48.365 "data_offset": 2048, 00:19:48.365 "data_size": 63488 00:19:48.365 }, 00:19:48.365 { 00:19:48.365 "name": "BaseBdev3", 00:19:48.365 "uuid": "39322f98-e8e1-4126-8e63-0a6df1ea7d07", 00:19:48.365 "is_configured": true, 00:19:48.365 "data_offset": 2048, 00:19:48.365 "data_size": 63488 00:19:48.365 } 00:19:48.365 ] 00:19:48.365 }' 00:19:48.365 00:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:48.365 00:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.932 00:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:48.932 00:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:48.932 00:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:48.932 00:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:48.932 00:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:48.932 00:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:48.932 00:46:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:48.932 00:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:48.932 [2024-07-25 00:46:11.534016] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:48.932 00:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:48.932 "name": "Existed_Raid", 00:19:48.932 "aliases": [ 00:19:48.932 "add98bcc-efea-46d1-bf1d-5ab59245d54f" 00:19:48.932 ], 00:19:48.932 "product_name": "Raid Volume", 00:19:48.932 "block_size": 512, 00:19:48.932 "num_blocks": 190464, 00:19:48.932 "uuid": "add98bcc-efea-46d1-bf1d-5ab59245d54f", 00:19:48.932 "assigned_rate_limits": { 00:19:48.932 "rw_ios_per_sec": 0, 00:19:48.932 "rw_mbytes_per_sec": 0, 00:19:48.932 "r_mbytes_per_sec": 0, 00:19:48.932 "w_mbytes_per_sec": 0 00:19:48.932 }, 00:19:48.932 "claimed": false, 00:19:48.932 "zoned": false, 00:19:48.932 "supported_io_types": { 00:19:48.932 "read": true, 00:19:48.932 "write": true, 00:19:48.932 "unmap": true, 00:19:48.932 "flush": true, 00:19:48.932 "reset": true, 00:19:48.932 "nvme_admin": false, 00:19:48.932 "nvme_io": false, 00:19:48.932 "nvme_io_md": false, 00:19:48.932 "write_zeroes": true, 00:19:48.932 "zcopy": false, 00:19:48.932 "get_zone_info": false, 00:19:48.932 "zone_management": false, 00:19:48.932 "zone_append": false, 00:19:48.932 "compare": false, 00:19:48.932 "compare_and_write": false, 00:19:48.932 "abort": false, 00:19:48.932 "seek_hole": false, 00:19:48.932 "seek_data": false, 00:19:48.932 "copy": false, 00:19:48.932 "nvme_iov_md": false 00:19:48.932 }, 00:19:48.932 "memory_domains": [ 00:19:48.932 { 00:19:48.932 "dma_device_id": "system", 00:19:48.932 "dma_device_type": 1 00:19:48.932 }, 00:19:48.932 { 00:19:48.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:48.932 "dma_device_type": 2 00:19:48.932 }, 00:19:48.932 { 00:19:48.932 "dma_device_id": "system", 00:19:48.932 "dma_device_type": 1 00:19:48.932 }, 00:19:48.932 { 00:19:48.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:48.932 "dma_device_type": 2 00:19:48.932 }, 00:19:48.932 { 00:19:48.932 "dma_device_id": "system", 00:19:48.932 "dma_device_type": 1 00:19:48.932 }, 00:19:48.932 { 00:19:48.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:48.932 "dma_device_type": 2 00:19:48.932 } 00:19:48.932 ], 00:19:48.932 "driver_specific": { 00:19:48.932 "raid": { 00:19:48.932 "uuid": "add98bcc-efea-46d1-bf1d-5ab59245d54f", 00:19:48.932 "strip_size_kb": 64, 00:19:48.932 "state": "online", 00:19:48.932 "raid_level": "raid0", 00:19:48.932 "superblock": true, 00:19:48.932 "num_base_bdevs": 3, 00:19:48.932 "num_base_bdevs_discovered": 3, 00:19:48.932 "num_base_bdevs_operational": 3, 00:19:48.932 "base_bdevs_list": [ 00:19:48.932 { 00:19:48.932 "name": "NewBaseBdev", 00:19:48.932 "uuid": "bc51166f-9cdd-4c32-a2b4-25d77bcebc94", 00:19:48.932 "is_configured": true, 00:19:48.932 "data_offset": 2048, 00:19:48.932 "data_size": 63488 00:19:48.932 }, 00:19:48.932 { 00:19:48.932 "name": "BaseBdev2", 00:19:48.932 "uuid": "45ead237-8c38-4c1c-a980-450dde8ac1a6", 00:19:48.932 "is_configured": true, 00:19:48.932 "data_offset": 2048, 00:19:48.932 "data_size": 63488 00:19:48.932 }, 00:19:48.932 { 00:19:48.932 "name": "BaseBdev3", 00:19:48.932 "uuid": "39322f98-e8e1-4126-8e63-0a6df1ea7d07", 00:19:48.932 "is_configured": true, 00:19:48.932 "data_offset": 2048, 00:19:48.932 "data_size": 
63488 00:19:48.932 } 00:19:48.932 ] 00:19:48.932 } 00:19:48.932 } 00:19:48.932 }' 00:19:48.932 00:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:49.190 00:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:49.190 BaseBdev2 00:19:49.190 BaseBdev3' 00:19:49.190 00:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:49.190 00:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:49.190 00:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:49.190 00:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:49.190 "name": "NewBaseBdev", 00:19:49.190 "aliases": [ 00:19:49.190 "bc51166f-9cdd-4c32-a2b4-25d77bcebc94" 00:19:49.190 ], 00:19:49.190 "product_name": "Malloc disk", 00:19:49.190 "block_size": 512, 00:19:49.190 "num_blocks": 65536, 00:19:49.190 "uuid": "bc51166f-9cdd-4c32-a2b4-25d77bcebc94", 00:19:49.190 "assigned_rate_limits": { 00:19:49.190 "rw_ios_per_sec": 0, 00:19:49.190 "rw_mbytes_per_sec": 0, 00:19:49.190 "r_mbytes_per_sec": 0, 00:19:49.190 "w_mbytes_per_sec": 0 00:19:49.190 }, 00:19:49.190 "claimed": true, 00:19:49.190 "claim_type": "exclusive_write", 00:19:49.190 "zoned": false, 00:19:49.190 "supported_io_types": { 00:19:49.190 "read": true, 00:19:49.190 "write": true, 00:19:49.190 "unmap": true, 00:19:49.191 "flush": true, 00:19:49.191 "reset": true, 00:19:49.191 "nvme_admin": false, 00:19:49.191 "nvme_io": false, 00:19:49.191 "nvme_io_md": false, 00:19:49.191 "write_zeroes": true, 00:19:49.191 "zcopy": true, 00:19:49.191 "get_zone_info": false, 00:19:49.191 "zone_management": false, 00:19:49.191 "zone_append": false, 00:19:49.191 "compare": false, 00:19:49.191 "compare_and_write": false, 00:19:49.191 "abort": true, 00:19:49.191 "seek_hole": false, 00:19:49.191 "seek_data": false, 00:19:49.191 "copy": true, 00:19:49.191 "nvme_iov_md": false 00:19:49.191 }, 00:19:49.191 "memory_domains": [ 00:19:49.191 { 00:19:49.191 "dma_device_id": "system", 00:19:49.191 "dma_device_type": 1 00:19:49.191 }, 00:19:49.191 { 00:19:49.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.191 "dma_device_type": 2 00:19:49.191 } 00:19:49.191 ], 00:19:49.191 "driver_specific": {} 00:19:49.191 }' 00:19:49.191 00:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:49.191 00:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:49.464 00:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:49.464 00:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:49.464 00:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:49.464 00:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:49.464 00:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:49.464 00:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:49.464 00:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:49.464 00:46:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:49.464 00:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:49.737 00:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:49.737 00:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:49.737 00:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:49.737 00:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:49.737 00:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:49.737 "name": "BaseBdev2", 00:19:49.737 "aliases": [ 00:19:49.737 "45ead237-8c38-4c1c-a980-450dde8ac1a6" 00:19:49.737 ], 00:19:49.737 "product_name": "Malloc disk", 00:19:49.737 "block_size": 512, 00:19:49.737 "num_blocks": 65536, 00:19:49.737 "uuid": "45ead237-8c38-4c1c-a980-450dde8ac1a6", 00:19:49.737 "assigned_rate_limits": { 00:19:49.737 "rw_ios_per_sec": 0, 00:19:49.737 "rw_mbytes_per_sec": 0, 00:19:49.737 "r_mbytes_per_sec": 0, 00:19:49.737 "w_mbytes_per_sec": 0 00:19:49.737 }, 00:19:49.737 "claimed": true, 00:19:49.737 "claim_type": "exclusive_write", 00:19:49.737 "zoned": false, 00:19:49.737 "supported_io_types": { 00:19:49.737 "read": true, 00:19:49.737 "write": true, 00:19:49.737 "unmap": true, 00:19:49.737 "flush": true, 00:19:49.737 "reset": true, 00:19:49.737 "nvme_admin": false, 00:19:49.737 "nvme_io": false, 00:19:49.737 "nvme_io_md": false, 00:19:49.737 "write_zeroes": true, 00:19:49.737 "zcopy": true, 00:19:49.737 "get_zone_info": false, 00:19:49.737 "zone_management": false, 00:19:49.737 "zone_append": false, 00:19:49.737 "compare": false, 00:19:49.737 "compare_and_write": false, 00:19:49.737 "abort": true, 00:19:49.737 "seek_hole": false, 00:19:49.737 "seek_data": false, 00:19:49.737 "copy": true, 00:19:49.737 "nvme_iov_md": false 00:19:49.737 }, 00:19:49.737 "memory_domains": [ 00:19:49.737 { 00:19:49.737 "dma_device_id": "system", 00:19:49.737 "dma_device_type": 1 00:19:49.737 }, 00:19:49.737 { 00:19:49.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.737 "dma_device_type": 2 00:19:49.737 } 00:19:49.737 ], 00:19:49.737 "driver_specific": {} 00:19:49.737 }' 00:19:49.737 00:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:49.737 00:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:49.996 00:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:49.996 00:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:49.996 00:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:49.996 00:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:49.996 00:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:49.996 00:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:49.996 00:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:49.996 00:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:49.996 00:46:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:50.254 00:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:50.254 00:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:50.254 00:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:50.254 00:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:50.513 00:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:50.513 "name": "BaseBdev3", 00:19:50.513 "aliases": [ 00:19:50.513 "39322f98-e8e1-4126-8e63-0a6df1ea7d07" 00:19:50.513 ], 00:19:50.513 "product_name": "Malloc disk", 00:19:50.513 "block_size": 512, 00:19:50.513 "num_blocks": 65536, 00:19:50.513 "uuid": "39322f98-e8e1-4126-8e63-0a6df1ea7d07", 00:19:50.513 "assigned_rate_limits": { 00:19:50.513 "rw_ios_per_sec": 0, 00:19:50.513 "rw_mbytes_per_sec": 0, 00:19:50.513 "r_mbytes_per_sec": 0, 00:19:50.513 "w_mbytes_per_sec": 0 00:19:50.513 }, 00:19:50.513 "claimed": true, 00:19:50.513 "claim_type": "exclusive_write", 00:19:50.513 "zoned": false, 00:19:50.513 "supported_io_types": { 00:19:50.513 "read": true, 00:19:50.513 "write": true, 00:19:50.513 "unmap": true, 00:19:50.513 "flush": true, 00:19:50.513 "reset": true, 00:19:50.513 "nvme_admin": false, 00:19:50.513 "nvme_io": false, 00:19:50.513 "nvme_io_md": false, 00:19:50.513 "write_zeroes": true, 00:19:50.513 "zcopy": true, 00:19:50.513 "get_zone_info": false, 00:19:50.513 "zone_management": false, 00:19:50.513 "zone_append": false, 00:19:50.513 "compare": false, 00:19:50.513 "compare_and_write": false, 00:19:50.513 "abort": true, 00:19:50.513 "seek_hole": false, 00:19:50.513 "seek_data": false, 00:19:50.513 "copy": true, 00:19:50.513 "nvme_iov_md": false 00:19:50.513 }, 00:19:50.513 "memory_domains": [ 00:19:50.513 { 00:19:50.513 "dma_device_id": "system", 00:19:50.513 "dma_device_type": 1 00:19:50.513 }, 00:19:50.513 { 00:19:50.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.513 "dma_device_type": 2 00:19:50.513 } 00:19:50.513 ], 00:19:50.513 "driver_specific": {} 00:19:50.513 }' 00:19:50.513 00:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:50.513 00:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:50.513 00:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:50.513 00:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:50.513 00:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:50.513 00:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:50.513 00:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:50.513 00:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:50.771 00:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:50.771 00:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:50.771 00:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:50.771 00:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
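The checks traced above query each configured base bdev (NewBaseBdev, BaseBdev2, BaseBdev3) and verify its geometry: 512-byte blocks and no metadata size, interleave, or DIF settings, since the raid0 volume sits on plain malloc disks. A minimal sketch of that loop, assuming the socket path and bdev names shown in the log and mirroring the [[ 512 == 512 ]] / [[ null == null ]] assertions in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for name in NewBaseBdev BaseBdev2 BaseBdev3; do
        # Fetch the single bdev object for this base bdev.
        info=$("$rpc" -s "$sock" bdev_get_bdevs -b "$name" | jq '.[]')
        # Plain 512-byte-block malloc disks: no metadata, interleave, or DIF expected.
        [[ $(jq .block_size <<< "$info") == 512 ]]
        [[ $(jq .md_size <<< "$info") == null ]]
        [[ $(jq .md_interleave <<< "$info") == null ]]
        [[ $(jq .dif_type <<< "$info") == null ]]
    done
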
00:19:50.771 00:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:51.029 [2024-07-25 00:46:13.530940] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:51.029 [2024-07-25 00:46:13.530973] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:51.029 [2024-07-25 00:46:13.531031] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:51.029 [2024-07-25 00:46:13.531085] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:51.029 [2024-07-25 00:46:13.531094] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name Existed_Raid, state offline 00:19:51.029 00:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 127235 00:19:51.029 00:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 127235 ']' 00:19:51.029 00:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 127235 00:19:51.030 00:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:19:51.030 00:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:51.030 00:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 127235 00:19:51.030 00:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:51.030 00:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:51.030 00:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 127235' 00:19:51.030 killing process with pid 127235 00:19:51.030 00:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 127235 00:19:51.030 [2024-07-25 00:46:13.577498] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:51.030 00:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 127235 00:19:51.288 [2024-07-25 00:46:13.872457] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:52.666 00:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:19:52.666 00:19:52.666 real 0m27.455s 00:19:52.666 user 0m49.131s 00:19:52.666 sys 0m4.248s 00:19:52.666 00:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:52.666 00:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.666 ************************************ 00:19:52.666 END TEST raid_state_function_test_sb 00:19:52.666 ************************************ 00:19:52.666 00:46:15 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:19:52.666 00:46:15 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:52.666 00:46:15 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:52.666 00:46:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:52.666 ************************************ 00:19:52.666 START TEST raid_superblock_test 00:19:52.666 ************************************ 00:19:52.666 00:46:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@1123 -- # raid_superblock_test raid0 3 00:19:52.666 00:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:19:52.666 00:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:19:52.666 00:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:19:52.666 00:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:19:52.666 00:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:19:52.666 00:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:19:52.666 00:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:19:52.666 00:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:19:52.666 00:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:19:52.666 00:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:19:52.666 00:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:19:52.666 00:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:19:52.666 00:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:19:52.666 00:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:19:52.666 00:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:19:52.666 00:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:19:52.666 00:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=128189 00:19:52.666 00:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 128189 /var/tmp/spdk-raid.sock 00:19:52.666 00:46:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 128189 ']' 00:19:52.666 00:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:52.666 00:46:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:52.666 00:46:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:52.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:52.666 00:46:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:52.666 00:46:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:52.666 00:46:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.925 [2024-07-25 00:46:15.378255] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
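The raid_superblock_test run starting here builds a raid0 volume with an on-disk superblock out of three passthru bdevs layered on malloc disks. Consolidating the RPC calls that appear further down in this trace (socket path, sizes, strip size, and names are taken from the log; the loop is only an illustrative condensation):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for i in 1 2 3; do
        # 32 MiB malloc disk with 512-byte blocks (65536 blocks), wrapped in a
        # passthru bdev with a fixed UUID so the raid superblock can identify it.
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "malloc$i"
        "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
    # raid0 across the three passthru bdevs, 64 KiB strip size, superblock enabled (-s).
    "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
    # Confirm the volume is online with all three base bdevs discovered.
    "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
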
00:19:52.925 [2024-07-25 00:46:15.378521] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128189 ] 00:19:52.925 [2024-07-25 00:46:15.566967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.183 [2024-07-25 00:46:15.814108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.442 [2024-07-25 00:46:16.015600] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:53.702 00:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:53.702 00:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:19:53.702 00:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:19:53.702 00:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:53.702 00:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:19:53.702 00:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:19:53.702 00:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:53.702 00:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:53.702 00:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:53.702 00:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:53.702 00:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:53.961 malloc1 00:19:54.221 00:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:54.221 [2024-07-25 00:46:16.836847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:54.221 [2024-07-25 00:46:16.836951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.221 [2024-07-25 00:46:16.836993] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:19:54.221 [2024-07-25 00:46:16.837014] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.221 [2024-07-25 00:46:16.839355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.221 [2024-07-25 00:46:16.839404] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:54.221 pt1 00:19:54.221 00:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:54.221 00:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:54.221 00:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:19:54.221 00:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:19:54.221 00:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:54.221 00:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:19:54.221 00:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:54.221 00:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:54.221 00:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:54.790 malloc2 00:19:54.790 00:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:54.790 [2024-07-25 00:46:17.320713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:54.790 [2024-07-25 00:46:17.320813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.790 [2024-07-25 00:46:17.320860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:19:54.790 [2024-07-25 00:46:17.320882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.790 [2024-07-25 00:46:17.323096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.790 [2024-07-25 00:46:17.323160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:54.790 pt2 00:19:54.790 00:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:54.791 00:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:54.791 00:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:19:54.791 00:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:19:54.791 00:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:54.791 00:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:54.791 00:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:54.791 00:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:54.791 00:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:55.051 malloc3 00:19:55.051 00:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:55.310 [2024-07-25 00:46:17.709187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:55.310 [2024-07-25 00:46:17.709274] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.310 [2024-07-25 00:46:17.709319] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:55.310 [2024-07-25 00:46:17.709343] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.310 [2024-07-25 00:46:17.711545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.310 [2024-07-25 00:46:17.711596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:55.310 pt3 00:19:55.310 
00:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:55.310 00:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:55.310 00:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:19:55.311 [2024-07-25 00:46:17.901302] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:55.311 [2024-07-25 00:46:17.903219] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:55.311 [2024-07-25 00:46:17.903316] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:55.311 [2024-07-25 00:46:17.903499] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:19:55.311 [2024-07-25 00:46:17.903516] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:55.311 [2024-07-25 00:46:17.903639] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:19:55.311 [2024-07-25 00:46:17.903974] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:19:55.311 [2024-07-25 00:46:17.903992] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:19:55.311 [2024-07-25 00:46:17.904142] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:55.311 00:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:19:55.311 00:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:55.311 00:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:55.311 00:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:55.311 00:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:55.311 00:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:55.311 00:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:55.311 00:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:55.311 00:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:55.311 00:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:55.311 00:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.311 00:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.570 00:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:55.570 "name": "raid_bdev1", 00:19:55.570 "uuid": "dcb0c8f2-b5a0-49c8-bdad-e0d14021be73", 00:19:55.570 "strip_size_kb": 64, 00:19:55.570 "state": "online", 00:19:55.571 "raid_level": "raid0", 00:19:55.571 "superblock": true, 00:19:55.571 "num_base_bdevs": 3, 00:19:55.571 "num_base_bdevs_discovered": 3, 00:19:55.571 "num_base_bdevs_operational": 3, 00:19:55.571 "base_bdevs_list": [ 00:19:55.571 { 00:19:55.571 "name": "pt1", 00:19:55.571 "uuid": "00000000-0000-0000-0000-000000000001", 
00:19:55.571 "is_configured": true, 00:19:55.571 "data_offset": 2048, 00:19:55.571 "data_size": 63488 00:19:55.571 }, 00:19:55.571 { 00:19:55.571 "name": "pt2", 00:19:55.571 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:55.571 "is_configured": true, 00:19:55.571 "data_offset": 2048, 00:19:55.571 "data_size": 63488 00:19:55.571 }, 00:19:55.571 { 00:19:55.571 "name": "pt3", 00:19:55.571 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:55.571 "is_configured": true, 00:19:55.571 "data_offset": 2048, 00:19:55.571 "data_size": 63488 00:19:55.571 } 00:19:55.571 ] 00:19:55.571 }' 00:19:55.571 00:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:55.571 00:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.140 00:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:19:56.140 00:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:56.140 00:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:56.140 00:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:56.140 00:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:56.140 00:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:56.140 00:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:56.140 00:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:56.400 [2024-07-25 00:46:18.945667] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:56.400 00:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:56.400 "name": "raid_bdev1", 00:19:56.400 "aliases": [ 00:19:56.400 "dcb0c8f2-b5a0-49c8-bdad-e0d14021be73" 00:19:56.400 ], 00:19:56.400 "product_name": "Raid Volume", 00:19:56.400 "block_size": 512, 00:19:56.400 "num_blocks": 190464, 00:19:56.400 "uuid": "dcb0c8f2-b5a0-49c8-bdad-e0d14021be73", 00:19:56.400 "assigned_rate_limits": { 00:19:56.400 "rw_ios_per_sec": 0, 00:19:56.400 "rw_mbytes_per_sec": 0, 00:19:56.400 "r_mbytes_per_sec": 0, 00:19:56.400 "w_mbytes_per_sec": 0 00:19:56.400 }, 00:19:56.400 "claimed": false, 00:19:56.400 "zoned": false, 00:19:56.400 "supported_io_types": { 00:19:56.400 "read": true, 00:19:56.400 "write": true, 00:19:56.400 "unmap": true, 00:19:56.400 "flush": true, 00:19:56.400 "reset": true, 00:19:56.400 "nvme_admin": false, 00:19:56.400 "nvme_io": false, 00:19:56.400 "nvme_io_md": false, 00:19:56.400 "write_zeroes": true, 00:19:56.400 "zcopy": false, 00:19:56.400 "get_zone_info": false, 00:19:56.400 "zone_management": false, 00:19:56.400 "zone_append": false, 00:19:56.400 "compare": false, 00:19:56.400 "compare_and_write": false, 00:19:56.400 "abort": false, 00:19:56.400 "seek_hole": false, 00:19:56.400 "seek_data": false, 00:19:56.400 "copy": false, 00:19:56.400 "nvme_iov_md": false 00:19:56.400 }, 00:19:56.400 "memory_domains": [ 00:19:56.400 { 00:19:56.400 "dma_device_id": "system", 00:19:56.400 "dma_device_type": 1 00:19:56.400 }, 00:19:56.400 { 00:19:56.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:56.400 "dma_device_type": 2 00:19:56.400 }, 00:19:56.400 { 00:19:56.400 "dma_device_id": "system", 00:19:56.400 "dma_device_type": 1 00:19:56.400 }, 
00:19:56.400 { 00:19:56.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:56.400 "dma_device_type": 2 00:19:56.400 }, 00:19:56.400 { 00:19:56.400 "dma_device_id": "system", 00:19:56.400 "dma_device_type": 1 00:19:56.400 }, 00:19:56.400 { 00:19:56.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:56.400 "dma_device_type": 2 00:19:56.400 } 00:19:56.400 ], 00:19:56.400 "driver_specific": { 00:19:56.400 "raid": { 00:19:56.400 "uuid": "dcb0c8f2-b5a0-49c8-bdad-e0d14021be73", 00:19:56.400 "strip_size_kb": 64, 00:19:56.400 "state": "online", 00:19:56.400 "raid_level": "raid0", 00:19:56.400 "superblock": true, 00:19:56.400 "num_base_bdevs": 3, 00:19:56.400 "num_base_bdevs_discovered": 3, 00:19:56.400 "num_base_bdevs_operational": 3, 00:19:56.400 "base_bdevs_list": [ 00:19:56.400 { 00:19:56.400 "name": "pt1", 00:19:56.400 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:56.400 "is_configured": true, 00:19:56.400 "data_offset": 2048, 00:19:56.400 "data_size": 63488 00:19:56.400 }, 00:19:56.400 { 00:19:56.400 "name": "pt2", 00:19:56.400 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:56.400 "is_configured": true, 00:19:56.400 "data_offset": 2048, 00:19:56.400 "data_size": 63488 00:19:56.400 }, 00:19:56.400 { 00:19:56.400 "name": "pt3", 00:19:56.400 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:56.400 "is_configured": true, 00:19:56.400 "data_offset": 2048, 00:19:56.400 "data_size": 63488 00:19:56.400 } 00:19:56.400 ] 00:19:56.400 } 00:19:56.400 } 00:19:56.400 }' 00:19:56.400 00:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:56.400 00:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:56.400 pt2 00:19:56.400 pt3' 00:19:56.400 00:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:56.400 00:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:56.400 00:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:56.660 00:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:56.660 "name": "pt1", 00:19:56.660 "aliases": [ 00:19:56.660 "00000000-0000-0000-0000-000000000001" 00:19:56.660 ], 00:19:56.660 "product_name": "passthru", 00:19:56.660 "block_size": 512, 00:19:56.660 "num_blocks": 65536, 00:19:56.660 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:56.660 "assigned_rate_limits": { 00:19:56.660 "rw_ios_per_sec": 0, 00:19:56.660 "rw_mbytes_per_sec": 0, 00:19:56.660 "r_mbytes_per_sec": 0, 00:19:56.660 "w_mbytes_per_sec": 0 00:19:56.660 }, 00:19:56.660 "claimed": true, 00:19:56.660 "claim_type": "exclusive_write", 00:19:56.660 "zoned": false, 00:19:56.660 "supported_io_types": { 00:19:56.660 "read": true, 00:19:56.660 "write": true, 00:19:56.660 "unmap": true, 00:19:56.660 "flush": true, 00:19:56.660 "reset": true, 00:19:56.660 "nvme_admin": false, 00:19:56.660 "nvme_io": false, 00:19:56.660 "nvme_io_md": false, 00:19:56.660 "write_zeroes": true, 00:19:56.660 "zcopy": true, 00:19:56.660 "get_zone_info": false, 00:19:56.660 "zone_management": false, 00:19:56.660 "zone_append": false, 00:19:56.660 "compare": false, 00:19:56.660 "compare_and_write": false, 00:19:56.660 "abort": true, 00:19:56.660 "seek_hole": false, 00:19:56.660 "seek_data": false, 00:19:56.660 "copy": true, 00:19:56.660 "nvme_iov_md": false 
00:19:56.660 }, 00:19:56.660 "memory_domains": [ 00:19:56.660 { 00:19:56.660 "dma_device_id": "system", 00:19:56.660 "dma_device_type": 1 00:19:56.660 }, 00:19:56.660 { 00:19:56.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:56.660 "dma_device_type": 2 00:19:56.660 } 00:19:56.660 ], 00:19:56.660 "driver_specific": { 00:19:56.660 "passthru": { 00:19:56.660 "name": "pt1", 00:19:56.660 "base_bdev_name": "malloc1" 00:19:56.660 } 00:19:56.660 } 00:19:56.660 }' 00:19:56.660 00:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:56.920 00:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:56.920 00:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:56.920 00:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:56.920 00:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:56.920 00:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:56.920 00:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:56.920 00:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:56.920 00:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:56.920 00:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:57.179 00:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:57.179 00:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:57.179 00:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:57.179 00:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:57.179 00:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:57.439 00:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:57.439 "name": "pt2", 00:19:57.439 "aliases": [ 00:19:57.439 "00000000-0000-0000-0000-000000000002" 00:19:57.439 ], 00:19:57.439 "product_name": "passthru", 00:19:57.439 "block_size": 512, 00:19:57.439 "num_blocks": 65536, 00:19:57.439 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:57.439 "assigned_rate_limits": { 00:19:57.439 "rw_ios_per_sec": 0, 00:19:57.439 "rw_mbytes_per_sec": 0, 00:19:57.439 "r_mbytes_per_sec": 0, 00:19:57.439 "w_mbytes_per_sec": 0 00:19:57.439 }, 00:19:57.439 "claimed": true, 00:19:57.439 "claim_type": "exclusive_write", 00:19:57.439 "zoned": false, 00:19:57.439 "supported_io_types": { 00:19:57.439 "read": true, 00:19:57.439 "write": true, 00:19:57.439 "unmap": true, 00:19:57.439 "flush": true, 00:19:57.439 "reset": true, 00:19:57.439 "nvme_admin": false, 00:19:57.439 "nvme_io": false, 00:19:57.439 "nvme_io_md": false, 00:19:57.439 "write_zeroes": true, 00:19:57.439 "zcopy": true, 00:19:57.439 "get_zone_info": false, 00:19:57.439 "zone_management": false, 00:19:57.439 "zone_append": false, 00:19:57.439 "compare": false, 00:19:57.439 "compare_and_write": false, 00:19:57.439 "abort": true, 00:19:57.439 "seek_hole": false, 00:19:57.439 "seek_data": false, 00:19:57.439 "copy": true, 00:19:57.439 "nvme_iov_md": false 00:19:57.439 }, 00:19:57.439 "memory_domains": [ 00:19:57.439 { 00:19:57.439 "dma_device_id": "system", 00:19:57.439 "dma_device_type": 1 00:19:57.439 }, 
00:19:57.439 { 00:19:57.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.439 "dma_device_type": 2 00:19:57.439 } 00:19:57.439 ], 00:19:57.439 "driver_specific": { 00:19:57.439 "passthru": { 00:19:57.439 "name": "pt2", 00:19:57.439 "base_bdev_name": "malloc2" 00:19:57.439 } 00:19:57.439 } 00:19:57.439 }' 00:19:57.439 00:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:57.439 00:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:57.439 00:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:57.439 00:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:57.439 00:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:57.699 00:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:57.699 00:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:57.699 00:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:57.699 00:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:57.699 00:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:57.699 00:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:57.699 00:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:57.699 00:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:57.699 00:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:57.699 00:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:57.959 00:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:57.959 "name": "pt3", 00:19:57.959 "aliases": [ 00:19:57.959 "00000000-0000-0000-0000-000000000003" 00:19:57.959 ], 00:19:57.959 "product_name": "passthru", 00:19:57.959 "block_size": 512, 00:19:57.959 "num_blocks": 65536, 00:19:57.959 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:57.959 "assigned_rate_limits": { 00:19:57.959 "rw_ios_per_sec": 0, 00:19:57.959 "rw_mbytes_per_sec": 0, 00:19:57.959 "r_mbytes_per_sec": 0, 00:19:57.959 "w_mbytes_per_sec": 0 00:19:57.959 }, 00:19:57.959 "claimed": true, 00:19:57.959 "claim_type": "exclusive_write", 00:19:57.959 "zoned": false, 00:19:57.959 "supported_io_types": { 00:19:57.959 "read": true, 00:19:57.959 "write": true, 00:19:57.959 "unmap": true, 00:19:57.959 "flush": true, 00:19:57.959 "reset": true, 00:19:57.959 "nvme_admin": false, 00:19:57.959 "nvme_io": false, 00:19:57.959 "nvme_io_md": false, 00:19:57.959 "write_zeroes": true, 00:19:57.959 "zcopy": true, 00:19:57.959 "get_zone_info": false, 00:19:57.959 "zone_management": false, 00:19:57.959 "zone_append": false, 00:19:57.959 "compare": false, 00:19:57.959 "compare_and_write": false, 00:19:57.959 "abort": true, 00:19:57.959 "seek_hole": false, 00:19:57.959 "seek_data": false, 00:19:57.959 "copy": true, 00:19:57.959 "nvme_iov_md": false 00:19:57.959 }, 00:19:57.959 "memory_domains": [ 00:19:57.959 { 00:19:57.959 "dma_device_id": "system", 00:19:57.959 "dma_device_type": 1 00:19:57.959 }, 00:19:57.959 { 00:19:57.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.959 "dma_device_type": 2 00:19:57.959 } 00:19:57.959 ], 00:19:57.959 
"driver_specific": { 00:19:57.959 "passthru": { 00:19:57.959 "name": "pt3", 00:19:57.959 "base_bdev_name": "malloc3" 00:19:57.959 } 00:19:57.959 } 00:19:57.959 }' 00:19:57.959 00:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:57.959 00:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:58.219 00:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:58.219 00:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:58.219 00:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:58.219 00:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:58.219 00:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:58.219 00:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:58.219 00:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:58.219 00:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:58.219 00:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:58.219 00:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:58.219 00:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:58.219 00:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:19:58.478 [2024-07-25 00:46:21.006029] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:58.478 00:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=dcb0c8f2-b5a0-49c8-bdad-e0d14021be73 00:19:58.478 00:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z dcb0c8f2-b5a0-49c8-bdad-e0d14021be73 ']' 00:19:58.478 00:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:58.738 [2024-07-25 00:46:21.249865] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:58.738 [2024-07-25 00:46:21.250002] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:58.738 [2024-07-25 00:46:21.250239] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:58.738 [2024-07-25 00:46:21.250414] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:58.738 [2024-07-25 00:46:21.250498] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:19:58.738 00:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.738 00:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:19:58.998 00:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:19:58.998 00:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:19:58.998 00:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:58.998 00:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:59.258 00:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:59.258 00:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:59.517 00:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:59.517 00:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:59.517 00:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:59.517 00:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:59.777 00:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:19:59.777 00:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:59.777 00:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:19:59.777 00:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:59.777 00:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:59.777 00:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:59.777 00:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:59.777 00:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:59.777 00:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:59.778 00:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:59.778 00:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:59.778 00:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:59.778 00:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:00.037 [2024-07-25 00:46:22.440118] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:00.037 [2024-07-25 00:46:22.442300] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:00.037 [2024-07-25 00:46:22.442519] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:00.037 [2024-07-25 00:46:22.442602] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:00.037 [2024-07-25 
00:46:22.442798] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:00.037 [2024-07-25 00:46:22.442926] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:00.037 [2024-07-25 00:46:22.442982] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:00.037 [2024-07-25 00:46:22.443156] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:20:00.037 request: 00:20:00.037 { 00:20:00.037 "name": "raid_bdev1", 00:20:00.037 "raid_level": "raid0", 00:20:00.037 "base_bdevs": [ 00:20:00.037 "malloc1", 00:20:00.037 "malloc2", 00:20:00.037 "malloc3" 00:20:00.037 ], 00:20:00.037 "strip_size_kb": 64, 00:20:00.037 "superblock": false, 00:20:00.037 "method": "bdev_raid_create", 00:20:00.037 "req_id": 1 00:20:00.037 } 00:20:00.037 Got JSON-RPC error response 00:20:00.037 response: 00:20:00.037 { 00:20:00.037 "code": -17, 00:20:00.037 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:00.037 } 00:20:00.037 00:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:20:00.037 00:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:00.037 00:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:00.038 00:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:00.038 00:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.038 00:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:20:00.297 00:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:20:00.297 00:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:20:00.297 00:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:00.297 [2024-07-25 00:46:22.848118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:00.297 [2024-07-25 00:46:22.848381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:00.297 [2024-07-25 00:46:22.848450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:00.297 [2024-07-25 00:46:22.848551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:00.297 [2024-07-25 00:46:22.850915] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:00.297 [2024-07-25 00:46:22.851094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:00.297 [2024-07-25 00:46:22.851301] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:00.297 [2024-07-25 00:46:22.851470] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:00.297 pt1 00:20:00.297 00:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:20:00.297 00:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:00.297 00:46:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:00.297 00:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:00.297 00:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:00.297 00:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:00.297 00:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:00.297 00:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:00.297 00:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:00.297 00:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:00.297 00:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.297 00:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.557 00:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:00.557 "name": "raid_bdev1", 00:20:00.557 "uuid": "dcb0c8f2-b5a0-49c8-bdad-e0d14021be73", 00:20:00.557 "strip_size_kb": 64, 00:20:00.557 "state": "configuring", 00:20:00.557 "raid_level": "raid0", 00:20:00.557 "superblock": true, 00:20:00.557 "num_base_bdevs": 3, 00:20:00.557 "num_base_bdevs_discovered": 1, 00:20:00.557 "num_base_bdevs_operational": 3, 00:20:00.557 "base_bdevs_list": [ 00:20:00.557 { 00:20:00.557 "name": "pt1", 00:20:00.557 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:00.557 "is_configured": true, 00:20:00.557 "data_offset": 2048, 00:20:00.557 "data_size": 63488 00:20:00.557 }, 00:20:00.557 { 00:20:00.557 "name": null, 00:20:00.557 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:00.557 "is_configured": false, 00:20:00.557 "data_offset": 2048, 00:20:00.557 "data_size": 63488 00:20:00.557 }, 00:20:00.557 { 00:20:00.557 "name": null, 00:20:00.557 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:00.557 "is_configured": false, 00:20:00.557 "data_offset": 2048, 00:20:00.557 "data_size": 63488 00:20:00.557 } 00:20:00.557 ] 00:20:00.557 }' 00:20:00.557 00:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:00.557 00:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.125 00:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:20:01.125 00:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:01.383 [2024-07-25 00:46:23.872299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:01.383 [2024-07-25 00:46:23.872531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.383 [2024-07-25 00:46:23.872599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:01.383 [2024-07-25 00:46:23.872689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.383 [2024-07-25 00:46:23.873194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.383 [2024-07-25 00:46:23.873344] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:20:01.383 [2024-07-25 00:46:23.873532] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:01.383 [2024-07-25 00:46:23.873636] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:01.383 pt2 00:20:01.383 00:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:01.642 [2024-07-25 00:46:24.128370] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:01.642 00:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:20:01.642 00:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:01.642 00:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:01.642 00:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:01.642 00:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:01.642 00:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:01.642 00:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:01.642 00:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:01.642 00:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:01.642 00:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:01.642 00:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.642 00:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.900 00:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:01.900 "name": "raid_bdev1", 00:20:01.900 "uuid": "dcb0c8f2-b5a0-49c8-bdad-e0d14021be73", 00:20:01.900 "strip_size_kb": 64, 00:20:01.900 "state": "configuring", 00:20:01.900 "raid_level": "raid0", 00:20:01.900 "superblock": true, 00:20:01.900 "num_base_bdevs": 3, 00:20:01.900 "num_base_bdevs_discovered": 1, 00:20:01.900 "num_base_bdevs_operational": 3, 00:20:01.900 "base_bdevs_list": [ 00:20:01.900 { 00:20:01.900 "name": "pt1", 00:20:01.901 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:01.901 "is_configured": true, 00:20:01.901 "data_offset": 2048, 00:20:01.901 "data_size": 63488 00:20:01.901 }, 00:20:01.901 { 00:20:01.901 "name": null, 00:20:01.901 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:01.901 "is_configured": false, 00:20:01.901 "data_offset": 2048, 00:20:01.901 "data_size": 63488 00:20:01.901 }, 00:20:01.901 { 00:20:01.901 "name": null, 00:20:01.901 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:01.901 "is_configured": false, 00:20:01.901 "data_offset": 2048, 00:20:01.901 "data_size": 63488 00:20:01.901 } 00:20:01.901 ] 00:20:01.901 }' 00:20:01.901 00:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:01.901 00:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.468 00:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:20:02.468 00:46:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:02.468 00:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:02.728 [2024-07-25 00:46:25.160522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:02.728 [2024-07-25 00:46:25.160791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.728 [2024-07-25 00:46:25.160857] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:02.728 [2024-07-25 00:46:25.160948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.728 [2024-07-25 00:46:25.161431] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.728 [2024-07-25 00:46:25.161574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:02.728 [2024-07-25 00:46:25.161763] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:02.728 [2024-07-25 00:46:25.161860] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:02.728 pt2 00:20:02.728 00:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:20:02.728 00:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:02.728 00:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:02.989 [2024-07-25 00:46:25.424569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:02.989 [2024-07-25 00:46:25.424783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.989 [2024-07-25 00:46:25.424859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:20:02.989 [2024-07-25 00:46:25.424955] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.989 [2024-07-25 00:46:25.425447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.989 [2024-07-25 00:46:25.425590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:02.989 [2024-07-25 00:46:25.425776] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:02.989 [2024-07-25 00:46:25.425872] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:02.989 [2024-07-25 00:46:25.426016] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:20:02.989 [2024-07-25 00:46:25.426126] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:02.989 [2024-07-25 00:46:25.426274] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:02.989 [2024-07-25 00:46:25.426725] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:20:02.989 [2024-07-25 00:46:25.426833] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:20:02.989 [2024-07-25 00:46:25.427038] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:02.989 pt3 00:20:02.989 00:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( 
i++ )) 00:20:02.989 00:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:02.989 00:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:02.989 00:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:02.989 00:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:02.989 00:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:02.989 00:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:02.989 00:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:02.989 00:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:02.989 00:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:02.989 00:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:02.989 00:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:02.989 00:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.989 00:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.989 00:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:02.989 "name": "raid_bdev1", 00:20:02.989 "uuid": "dcb0c8f2-b5a0-49c8-bdad-e0d14021be73", 00:20:02.989 "strip_size_kb": 64, 00:20:02.989 "state": "online", 00:20:02.989 "raid_level": "raid0", 00:20:02.989 "superblock": true, 00:20:02.989 "num_base_bdevs": 3, 00:20:02.989 "num_base_bdevs_discovered": 3, 00:20:02.989 "num_base_bdevs_operational": 3, 00:20:02.989 "base_bdevs_list": [ 00:20:02.989 { 00:20:02.989 "name": "pt1", 00:20:02.989 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:02.989 "is_configured": true, 00:20:02.989 "data_offset": 2048, 00:20:02.989 "data_size": 63488 00:20:02.989 }, 00:20:02.989 { 00:20:02.989 "name": "pt2", 00:20:02.989 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:02.989 "is_configured": true, 00:20:02.989 "data_offset": 2048, 00:20:02.989 "data_size": 63488 00:20:02.989 }, 00:20:02.989 { 00:20:02.989 "name": "pt3", 00:20:02.989 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:02.989 "is_configured": true, 00:20:02.989 "data_offset": 2048, 00:20:02.989 "data_size": 63488 00:20:02.989 } 00:20:02.989 ] 00:20:02.989 }' 00:20:02.989 00:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:02.989 00:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.580 00:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:20:03.581 00:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:03.581 00:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:03.581 00:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:03.581 00:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:03.581 00:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 
00:20:03.581 00:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:03.581 00:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:03.839 [2024-07-25 00:46:26.308987] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:03.839 00:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:03.839 "name": "raid_bdev1", 00:20:03.839 "aliases": [ 00:20:03.839 "dcb0c8f2-b5a0-49c8-bdad-e0d14021be73" 00:20:03.839 ], 00:20:03.839 "product_name": "Raid Volume", 00:20:03.839 "block_size": 512, 00:20:03.839 "num_blocks": 190464, 00:20:03.839 "uuid": "dcb0c8f2-b5a0-49c8-bdad-e0d14021be73", 00:20:03.839 "assigned_rate_limits": { 00:20:03.839 "rw_ios_per_sec": 0, 00:20:03.839 "rw_mbytes_per_sec": 0, 00:20:03.839 "r_mbytes_per_sec": 0, 00:20:03.839 "w_mbytes_per_sec": 0 00:20:03.839 }, 00:20:03.839 "claimed": false, 00:20:03.839 "zoned": false, 00:20:03.839 "supported_io_types": { 00:20:03.839 "read": true, 00:20:03.839 "write": true, 00:20:03.839 "unmap": true, 00:20:03.839 "flush": true, 00:20:03.839 "reset": true, 00:20:03.839 "nvme_admin": false, 00:20:03.839 "nvme_io": false, 00:20:03.839 "nvme_io_md": false, 00:20:03.839 "write_zeroes": true, 00:20:03.839 "zcopy": false, 00:20:03.839 "get_zone_info": false, 00:20:03.839 "zone_management": false, 00:20:03.839 "zone_append": false, 00:20:03.839 "compare": false, 00:20:03.839 "compare_and_write": false, 00:20:03.839 "abort": false, 00:20:03.839 "seek_hole": false, 00:20:03.839 "seek_data": false, 00:20:03.839 "copy": false, 00:20:03.839 "nvme_iov_md": false 00:20:03.839 }, 00:20:03.839 "memory_domains": [ 00:20:03.839 { 00:20:03.839 "dma_device_id": "system", 00:20:03.839 "dma_device_type": 1 00:20:03.839 }, 00:20:03.839 { 00:20:03.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.839 "dma_device_type": 2 00:20:03.839 }, 00:20:03.839 { 00:20:03.839 "dma_device_id": "system", 00:20:03.839 "dma_device_type": 1 00:20:03.839 }, 00:20:03.839 { 00:20:03.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.839 "dma_device_type": 2 00:20:03.839 }, 00:20:03.839 { 00:20:03.839 "dma_device_id": "system", 00:20:03.839 "dma_device_type": 1 00:20:03.839 }, 00:20:03.839 { 00:20:03.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.839 "dma_device_type": 2 00:20:03.839 } 00:20:03.839 ], 00:20:03.839 "driver_specific": { 00:20:03.839 "raid": { 00:20:03.839 "uuid": "dcb0c8f2-b5a0-49c8-bdad-e0d14021be73", 00:20:03.839 "strip_size_kb": 64, 00:20:03.839 "state": "online", 00:20:03.839 "raid_level": "raid0", 00:20:03.839 "superblock": true, 00:20:03.839 "num_base_bdevs": 3, 00:20:03.839 "num_base_bdevs_discovered": 3, 00:20:03.839 "num_base_bdevs_operational": 3, 00:20:03.839 "base_bdevs_list": [ 00:20:03.839 { 00:20:03.839 "name": "pt1", 00:20:03.839 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:03.839 "is_configured": true, 00:20:03.839 "data_offset": 2048, 00:20:03.839 "data_size": 63488 00:20:03.839 }, 00:20:03.839 { 00:20:03.839 "name": "pt2", 00:20:03.839 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:03.839 "is_configured": true, 00:20:03.839 "data_offset": 2048, 00:20:03.839 "data_size": 63488 00:20:03.839 }, 00:20:03.839 { 00:20:03.839 "name": "pt3", 00:20:03.839 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:03.839 "is_configured": true, 00:20:03.839 "data_offset": 2048, 00:20:03.839 "data_size": 63488 00:20:03.839 } 
00:20:03.839 ] 00:20:03.839 } 00:20:03.839 } 00:20:03.839 }' 00:20:03.839 00:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:03.839 00:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:03.839 pt2 00:20:03.839 pt3' 00:20:03.839 00:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:03.839 00:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:03.839 00:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:04.099 00:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:04.099 "name": "pt1", 00:20:04.099 "aliases": [ 00:20:04.099 "00000000-0000-0000-0000-000000000001" 00:20:04.099 ], 00:20:04.099 "product_name": "passthru", 00:20:04.099 "block_size": 512, 00:20:04.099 "num_blocks": 65536, 00:20:04.099 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:04.099 "assigned_rate_limits": { 00:20:04.099 "rw_ios_per_sec": 0, 00:20:04.099 "rw_mbytes_per_sec": 0, 00:20:04.099 "r_mbytes_per_sec": 0, 00:20:04.099 "w_mbytes_per_sec": 0 00:20:04.099 }, 00:20:04.099 "claimed": true, 00:20:04.099 "claim_type": "exclusive_write", 00:20:04.099 "zoned": false, 00:20:04.099 "supported_io_types": { 00:20:04.099 "read": true, 00:20:04.099 "write": true, 00:20:04.099 "unmap": true, 00:20:04.099 "flush": true, 00:20:04.099 "reset": true, 00:20:04.099 "nvme_admin": false, 00:20:04.099 "nvme_io": false, 00:20:04.099 "nvme_io_md": false, 00:20:04.099 "write_zeroes": true, 00:20:04.099 "zcopy": true, 00:20:04.099 "get_zone_info": false, 00:20:04.099 "zone_management": false, 00:20:04.099 "zone_append": false, 00:20:04.099 "compare": false, 00:20:04.099 "compare_and_write": false, 00:20:04.099 "abort": true, 00:20:04.099 "seek_hole": false, 00:20:04.099 "seek_data": false, 00:20:04.099 "copy": true, 00:20:04.099 "nvme_iov_md": false 00:20:04.099 }, 00:20:04.099 "memory_domains": [ 00:20:04.099 { 00:20:04.099 "dma_device_id": "system", 00:20:04.100 "dma_device_type": 1 00:20:04.100 }, 00:20:04.100 { 00:20:04.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:04.100 "dma_device_type": 2 00:20:04.100 } 00:20:04.100 ], 00:20:04.100 "driver_specific": { 00:20:04.100 "passthru": { 00:20:04.100 "name": "pt1", 00:20:04.100 "base_bdev_name": "malloc1" 00:20:04.100 } 00:20:04.100 } 00:20:04.100 }' 00:20:04.100 00:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:04.100 00:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:04.100 00:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:04.100 00:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:04.359 00:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:04.359 00:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:04.359 00:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:04.359 00:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:04.359 00:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:04.359 00:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 
-- # jq .dif_type 00:20:04.359 00:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:04.359 00:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:04.359 00:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:04.618 00:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:04.618 00:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:04.877 00:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:04.877 "name": "pt2", 00:20:04.877 "aliases": [ 00:20:04.877 "00000000-0000-0000-0000-000000000002" 00:20:04.877 ], 00:20:04.877 "product_name": "passthru", 00:20:04.877 "block_size": 512, 00:20:04.877 "num_blocks": 65536, 00:20:04.877 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:04.877 "assigned_rate_limits": { 00:20:04.877 "rw_ios_per_sec": 0, 00:20:04.877 "rw_mbytes_per_sec": 0, 00:20:04.877 "r_mbytes_per_sec": 0, 00:20:04.877 "w_mbytes_per_sec": 0 00:20:04.877 }, 00:20:04.877 "claimed": true, 00:20:04.877 "claim_type": "exclusive_write", 00:20:04.877 "zoned": false, 00:20:04.877 "supported_io_types": { 00:20:04.877 "read": true, 00:20:04.877 "write": true, 00:20:04.877 "unmap": true, 00:20:04.877 "flush": true, 00:20:04.877 "reset": true, 00:20:04.877 "nvme_admin": false, 00:20:04.877 "nvme_io": false, 00:20:04.877 "nvme_io_md": false, 00:20:04.877 "write_zeroes": true, 00:20:04.877 "zcopy": true, 00:20:04.877 "get_zone_info": false, 00:20:04.877 "zone_management": false, 00:20:04.877 "zone_append": false, 00:20:04.877 "compare": false, 00:20:04.877 "compare_and_write": false, 00:20:04.877 "abort": true, 00:20:04.877 "seek_hole": false, 00:20:04.877 "seek_data": false, 00:20:04.877 "copy": true, 00:20:04.877 "nvme_iov_md": false 00:20:04.877 }, 00:20:04.877 "memory_domains": [ 00:20:04.877 { 00:20:04.877 "dma_device_id": "system", 00:20:04.877 "dma_device_type": 1 00:20:04.877 }, 00:20:04.877 { 00:20:04.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:04.877 "dma_device_type": 2 00:20:04.877 } 00:20:04.877 ], 00:20:04.877 "driver_specific": { 00:20:04.877 "passthru": { 00:20:04.877 "name": "pt2", 00:20:04.877 "base_bdev_name": "malloc2" 00:20:04.877 } 00:20:04.877 } 00:20:04.877 }' 00:20:04.877 00:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:04.877 00:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:04.877 00:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:04.877 00:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:04.877 00:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:04.877 00:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:04.877 00:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:04.877 00:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:05.137 00:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:05.137 00:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:05.137 00:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:05.137 00:46:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:05.137 00:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:05.137 00:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:05.137 00:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:05.396 00:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:05.396 "name": "pt3", 00:20:05.396 "aliases": [ 00:20:05.396 "00000000-0000-0000-0000-000000000003" 00:20:05.396 ], 00:20:05.396 "product_name": "passthru", 00:20:05.396 "block_size": 512, 00:20:05.396 "num_blocks": 65536, 00:20:05.396 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:05.396 "assigned_rate_limits": { 00:20:05.396 "rw_ios_per_sec": 0, 00:20:05.396 "rw_mbytes_per_sec": 0, 00:20:05.396 "r_mbytes_per_sec": 0, 00:20:05.396 "w_mbytes_per_sec": 0 00:20:05.396 }, 00:20:05.396 "claimed": true, 00:20:05.396 "claim_type": "exclusive_write", 00:20:05.396 "zoned": false, 00:20:05.396 "supported_io_types": { 00:20:05.396 "read": true, 00:20:05.396 "write": true, 00:20:05.396 "unmap": true, 00:20:05.396 "flush": true, 00:20:05.396 "reset": true, 00:20:05.396 "nvme_admin": false, 00:20:05.396 "nvme_io": false, 00:20:05.396 "nvme_io_md": false, 00:20:05.396 "write_zeroes": true, 00:20:05.396 "zcopy": true, 00:20:05.396 "get_zone_info": false, 00:20:05.396 "zone_management": false, 00:20:05.396 "zone_append": false, 00:20:05.396 "compare": false, 00:20:05.396 "compare_and_write": false, 00:20:05.396 "abort": true, 00:20:05.396 "seek_hole": false, 00:20:05.396 "seek_data": false, 00:20:05.396 "copy": true, 00:20:05.396 "nvme_iov_md": false 00:20:05.396 }, 00:20:05.396 "memory_domains": [ 00:20:05.396 { 00:20:05.396 "dma_device_id": "system", 00:20:05.396 "dma_device_type": 1 00:20:05.396 }, 00:20:05.396 { 00:20:05.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:05.396 "dma_device_type": 2 00:20:05.396 } 00:20:05.396 ], 00:20:05.396 "driver_specific": { 00:20:05.396 "passthru": { 00:20:05.396 "name": "pt3", 00:20:05.396 "base_bdev_name": "malloc3" 00:20:05.396 } 00:20:05.396 } 00:20:05.396 }' 00:20:05.396 00:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:05.396 00:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:05.396 00:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:05.396 00:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:05.396 00:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:05.396 00:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:05.396 00:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:05.655 00:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:05.655 00:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:05.655 00:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:05.655 00:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:05.655 00:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:05.655 00:46:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:05.655 00:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:20:05.914 [2024-07-25 00:46:28.489379] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:05.914 00:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' dcb0c8f2-b5a0-49c8-bdad-e0d14021be73 '!=' dcb0c8f2-b5a0-49c8-bdad-e0d14021be73 ']' 00:20:05.914 00:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:20:05.914 00:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:05.914 00:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:05.914 00:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 128189 00:20:05.914 00:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 128189 ']' 00:20:05.914 00:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 128189 00:20:05.914 00:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:20:05.914 00:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:05.914 00:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 128189 00:20:05.914 killing process with pid 128189 00:20:05.914 00:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:05.914 00:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:05.914 00:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 128189' 00:20:05.914 00:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 128189 00:20:05.914 00:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 128189 00:20:05.914 [2024-07-25 00:46:28.534785] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:05.915 [2024-07-25 00:46:28.534854] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:05.915 [2024-07-25 00:46:28.534906] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:05.915 [2024-07-25 00:46:28.534914] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:20:06.483 [2024-07-25 00:46:28.838260] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:07.861 00:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:20:07.861 00:20:07.861 real 0m14.842s 00:20:07.861 user 0m25.782s 00:20:07.861 sys 0m2.222s 00:20:07.861 00:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:07.861 00:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.861 ************************************ 00:20:07.861 END TEST raid_superblock_test 00:20:07.861 ************************************ 00:20:07.861 00:46:30 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:20:07.861 00:46:30 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:07.861 00:46:30 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:07.861 00:46:30 
bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:07.861 ************************************ 00:20:07.861 START TEST raid_read_error_test 00:20:07.861 ************************************ 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 read 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.6wLDxZtpWO 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=128669 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 128669 /var/tmp/spdk-raid.sock 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@829 -- # '[' -z 128669 ']' 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:07.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:07.861 00:46:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:07.862 00:46:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:07.862 00:46:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:07.862 00:46:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.862 [2024-07-25 00:46:30.309554] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:20:07.862 [2024-07-25 00:46:30.310687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128669 ] 00:20:07.862 [2024-07-25 00:46:30.490517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.121 [2024-07-25 00:46:30.674431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.380 [2024-07-25 00:46:30.866941] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:08.637 00:46:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:08.637 00:46:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:20:08.637 00:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:08.637 00:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:08.895 BaseBdev1_malloc 00:20:08.896 00:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:09.155 true 00:20:09.155 00:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:09.414 [2024-07-25 00:46:31.909120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:09.414 [2024-07-25 00:46:31.909402] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:09.414 [2024-07-25 00:46:31.909484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:20:09.414 [2024-07-25 00:46:31.909702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:09.414 [2024-07-25 00:46:31.912120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:09.414 [2024-07-25 00:46:31.912298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:09.414 BaseBdev1 00:20:09.414 00:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:09.414 00:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:09.672 BaseBdev2_malloc 00:20:09.672 00:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:09.931 true 00:20:09.931 00:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:10.193 [2024-07-25 00:46:32.636668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:10.194 [2024-07-25 00:46:32.636946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:10.194 [2024-07-25 00:46:32.637022] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:10.194 [2024-07-25 00:46:32.637248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:10.194 [2024-07-25 00:46:32.639510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:10.194 [2024-07-25 00:46:32.639676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:10.194 BaseBdev2 00:20:10.194 00:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:10.194 00:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:10.194 BaseBdev3_malloc 00:20:10.457 00:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:20:10.457 true 00:20:10.457 00:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:10.715 [2024-07-25 00:46:33.280640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:10.715 [2024-07-25 00:46:33.281074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:10.715 [2024-07-25 00:46:33.281307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:20:10.715 [2024-07-25 00:46:33.281462] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:10.715 [2024-07-25 00:46:33.285659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:10.715 [2024-07-25 00:46:33.285912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:10.715 BaseBdev3 00:20:10.715 00:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:20:10.973 [2024-07-25 00:46:33.470358] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:10.973 [2024-07-25 00:46:33.472677] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:10.973 [2024-07-25 00:46:33.472879] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:10.973 [2024-07-25 00:46:33.473144] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:20:10.973 [2024-07-25 
00:46:33.473185] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:10.973 [2024-07-25 00:46:33.473388] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:20:10.973 [2024-07-25 00:46:33.473830] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:20:10.973 [2024-07-25 00:46:33.473943] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:20:10.973 [2024-07-25 00:46:33.474205] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:10.973 00:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:10.973 00:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:10.973 00:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:10.973 00:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:10.973 00:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:10.973 00:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:10.973 00:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:10.973 00:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:10.973 00:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:10.973 00:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:10.973 00:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.973 00:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.232 00:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:11.232 "name": "raid_bdev1", 00:20:11.232 "uuid": "8a1f6027-b5e7-4c32-b062-9c9a6aee25ec", 00:20:11.232 "strip_size_kb": 64, 00:20:11.232 "state": "online", 00:20:11.232 "raid_level": "raid0", 00:20:11.232 "superblock": true, 00:20:11.232 "num_base_bdevs": 3, 00:20:11.232 "num_base_bdevs_discovered": 3, 00:20:11.232 "num_base_bdevs_operational": 3, 00:20:11.232 "base_bdevs_list": [ 00:20:11.232 { 00:20:11.232 "name": "BaseBdev1", 00:20:11.232 "uuid": "c13035e3-ee10-5526-9a5e-8a03ece07bf7", 00:20:11.232 "is_configured": true, 00:20:11.232 "data_offset": 2048, 00:20:11.232 "data_size": 63488 00:20:11.232 }, 00:20:11.232 { 00:20:11.232 "name": "BaseBdev2", 00:20:11.232 "uuid": "11ad7f94-7c6a-5a54-8a7f-effa9c69dd7c", 00:20:11.232 "is_configured": true, 00:20:11.232 "data_offset": 2048, 00:20:11.232 "data_size": 63488 00:20:11.232 }, 00:20:11.232 { 00:20:11.232 "name": "BaseBdev3", 00:20:11.232 "uuid": "2365a28d-47b6-5258-bcc0-5c794a9d2c1b", 00:20:11.232 "is_configured": true, 00:20:11.232 "data_offset": 2048, 00:20:11.232 "data_size": 63488 00:20:11.232 } 00:20:11.232 ] 00:20:11.232 }' 00:20:11.232 00:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:11.232 00:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.799 00:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:20:11.799 00:46:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:11.799 [2024-07-25 00:46:34.423743] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:12.737 00:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:20:12.996 00:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:20:12.996 00:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:20:12.996 00:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:20:12.996 00:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:12.996 00:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:12.996 00:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:12.996 00:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:12.996 00:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:12.996 00:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:12.996 00:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:12.996 00:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:12.996 00:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:12.996 00:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:12.996 00:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.996 00:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.254 00:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:13.255 "name": "raid_bdev1", 00:20:13.255 "uuid": "8a1f6027-b5e7-4c32-b062-9c9a6aee25ec", 00:20:13.255 "strip_size_kb": 64, 00:20:13.255 "state": "online", 00:20:13.255 "raid_level": "raid0", 00:20:13.255 "superblock": true, 00:20:13.255 "num_base_bdevs": 3, 00:20:13.255 "num_base_bdevs_discovered": 3, 00:20:13.255 "num_base_bdevs_operational": 3, 00:20:13.255 "base_bdevs_list": [ 00:20:13.255 { 00:20:13.255 "name": "BaseBdev1", 00:20:13.255 "uuid": "c13035e3-ee10-5526-9a5e-8a03ece07bf7", 00:20:13.255 "is_configured": true, 00:20:13.255 "data_offset": 2048, 00:20:13.255 "data_size": 63488 00:20:13.255 }, 00:20:13.255 { 00:20:13.255 "name": "BaseBdev2", 00:20:13.255 "uuid": "11ad7f94-7c6a-5a54-8a7f-effa9c69dd7c", 00:20:13.255 "is_configured": true, 00:20:13.255 "data_offset": 2048, 00:20:13.255 "data_size": 63488 00:20:13.255 }, 00:20:13.255 { 00:20:13.255 "name": "BaseBdev3", 00:20:13.255 "uuid": "2365a28d-47b6-5258-bcc0-5c794a9d2c1b", 00:20:13.255 "is_configured": true, 00:20:13.255 "data_offset": 2048, 00:20:13.255 "data_size": 63488 00:20:13.255 } 00:20:13.255 ] 00:20:13.255 }' 00:20:13.255 00:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:20:13.255 00:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.822 00:46:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:14.080 [2024-07-25 00:46:36.665250] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:14.080 [2024-07-25 00:46:36.665553] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:14.080 [2024-07-25 00:46:36.668150] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:14.080 [2024-07-25 00:46:36.668329] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:14.080 [2024-07-25 00:46:36.668401] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:14.080 [2024-07-25 00:46:36.668475] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:20:14.080 0 00:20:14.080 00:46:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 128669 00:20:14.080 00:46:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 128669 ']' 00:20:14.080 00:46:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 128669 00:20:14.080 00:46:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:20:14.080 00:46:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:14.080 00:46:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 128669 00:20:14.080 00:46:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:14.080 00:46:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:14.080 00:46:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 128669' 00:20:14.080 killing process with pid 128669 00:20:14.080 00:46:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 128669 00:20:14.080 [2024-07-25 00:46:36.710511] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:14.080 00:46:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 128669 00:20:14.338 [2024-07-25 00:46:36.939081] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:15.714 00:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.6wLDxZtpWO 00:20:15.714 00:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:20:15.714 00:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:20:15.714 00:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:20:15.714 00:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:20:15.714 00:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:15.714 00:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:15.714 00:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:20:15.714 00:20:15.714 real 0m8.155s 00:20:15.714 user 0m11.948s 00:20:15.973 sys 0m1.097s 00:20:15.973 00:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:20:15.973 00:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.973 ************************************ 00:20:15.973 END TEST raid_read_error_test 00:20:15.973 ************************************ 00:20:15.973 00:46:38 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:20:15.973 00:46:38 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:15.973 00:46:38 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:15.973 00:46:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:15.973 ************************************ 00:20:15.973 START TEST raid_write_error_test 00:20:15.973 ************************************ 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 write 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:20:15.973 00:46:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.6qsMHsgrvc 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=128876 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 128876 /var/tmp/spdk-raid.sock 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 128876 ']' 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:15.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:15.973 00:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.973 [2024-07-25 00:46:38.559611] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:20:15.973 [2024-07-25 00:46:38.560125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128876 ] 00:20:16.232 [2024-07-25 00:46:38.744757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.490 [2024-07-25 00:46:38.978257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.748 [2024-07-25 00:46:39.208894] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:17.007 00:46:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:17.007 00:46:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:20:17.007 00:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:17.007 00:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:17.265 BaseBdev1_malloc 00:20:17.265 00:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:17.522 true 00:20:17.523 00:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:17.781 [2024-07-25 00:46:40.201788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:17.781 [2024-07-25 00:46:40.202121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:17.781 [2024-07-25 00:46:40.202203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:20:17.781 [2024-07-25 
00:46:40.202331] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:17.781 [2024-07-25 00:46:40.204996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:17.781 [2024-07-25 00:46:40.205153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:17.781 BaseBdev1 00:20:17.781 00:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:17.781 00:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:18.038 BaseBdev2_malloc 00:20:18.038 00:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:18.039 true 00:20:18.039 00:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:18.296 [2024-07-25 00:46:40.782126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:18.297 [2024-07-25 00:46:40.782396] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:18.297 [2024-07-25 00:46:40.782469] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:18.297 [2024-07-25 00:46:40.782580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:18.297 [2024-07-25 00:46:40.785099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:18.297 [2024-07-25 00:46:40.785292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:18.297 BaseBdev2 00:20:18.297 00:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:18.297 00:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:18.555 BaseBdev3_malloc 00:20:18.555 00:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:20:18.555 true 00:20:18.555 00:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:18.813 [2024-07-25 00:46:41.339065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:18.813 [2024-07-25 00:46:41.339265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:18.813 [2024-07-25 00:46:41.339331] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:20:18.813 [2024-07-25 00:46:41.339427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:18.813 [2024-07-25 00:46:41.342044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:18.813 [2024-07-25 00:46:41.342207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:18.813 BaseBdev3 00:20:18.813 00:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:20:19.071 [2024-07-25 00:46:41.511171] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:19.071 [2024-07-25 00:46:41.513522] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:19.071 [2024-07-25 00:46:41.513694] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:19.071 [2024-07-25 00:46:41.513940] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:20:19.071 [2024-07-25 00:46:41.513981] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:19.071 [2024-07-25 00:46:41.514154] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:20:19.071 [2024-07-25 00:46:41.514613] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:20:19.071 [2024-07-25 00:46:41.514713] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:20:19.071 [2024-07-25 00:46:41.514944] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:19.071 00:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:19.071 00:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:19.071 00:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:19.071 00:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:19.071 00:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:19.071 00:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:19.071 00:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:19.071 00:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:19.071 00:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:19.071 00:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:19.071 00:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.071 00:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.071 00:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:19.071 "name": "raid_bdev1", 00:20:19.071 "uuid": "7b36cfb7-755c-48e1-bcb0-d1a6c458d107", 00:20:19.071 "strip_size_kb": 64, 00:20:19.071 "state": "online", 00:20:19.071 "raid_level": "raid0", 00:20:19.071 "superblock": true, 00:20:19.071 "num_base_bdevs": 3, 00:20:19.071 "num_base_bdevs_discovered": 3, 00:20:19.071 "num_base_bdevs_operational": 3, 00:20:19.071 "base_bdevs_list": [ 00:20:19.071 { 00:20:19.071 "name": "BaseBdev1", 00:20:19.071 "uuid": "0f77cb96-47e2-5789-ac70-18406e3b7f66", 00:20:19.071 "is_configured": true, 00:20:19.071 "data_offset": 2048, 00:20:19.071 "data_size": 63488 00:20:19.071 }, 00:20:19.071 { 00:20:19.071 "name": "BaseBdev2", 00:20:19.071 "uuid": "5a7701be-e8d4-5bef-8e57-ab661a5fd4e1", 00:20:19.071 "is_configured": true, 
00:20:19.071 "data_offset": 2048, 00:20:19.071 "data_size": 63488 00:20:19.071 }, 00:20:19.071 { 00:20:19.071 "name": "BaseBdev3", 00:20:19.071 "uuid": "dce2afca-7e95-5d6a-8421-009bbdc22284", 00:20:19.071 "is_configured": true, 00:20:19.071 "data_offset": 2048, 00:20:19.071 "data_size": 63488 00:20:19.071 } 00:20:19.071 ] 00:20:19.071 }' 00:20:19.071 00:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:19.071 00:46:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.636 00:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:20:19.636 00:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:19.636 [2024-07-25 00:46:42.256596] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:20.571 00:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:20:20.829 00:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:20:20.829 00:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:20:20.829 00:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:20:20.829 00:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:20.829 00:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:20.829 00:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:20.829 00:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:20.829 00:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:20.829 00:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:20.829 00:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:20.829 00:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:20.829 00:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:20.829 00:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:20.829 00:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.829 00:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.087 00:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:21.087 "name": "raid_bdev1", 00:20:21.087 "uuid": "7b36cfb7-755c-48e1-bcb0-d1a6c458d107", 00:20:21.087 "strip_size_kb": 64, 00:20:21.087 "state": "online", 00:20:21.087 "raid_level": "raid0", 00:20:21.087 "superblock": true, 00:20:21.087 "num_base_bdevs": 3, 00:20:21.087 "num_base_bdevs_discovered": 3, 00:20:21.087 "num_base_bdevs_operational": 3, 00:20:21.087 "base_bdevs_list": [ 00:20:21.087 { 00:20:21.087 "name": "BaseBdev1", 00:20:21.087 "uuid": "0f77cb96-47e2-5789-ac70-18406e3b7f66", 00:20:21.087 "is_configured": true, 
00:20:21.087 "data_offset": 2048, 00:20:21.087 "data_size": 63488 00:20:21.087 }, 00:20:21.087 { 00:20:21.087 "name": "BaseBdev2", 00:20:21.087 "uuid": "5a7701be-e8d4-5bef-8e57-ab661a5fd4e1", 00:20:21.087 "is_configured": true, 00:20:21.087 "data_offset": 2048, 00:20:21.087 "data_size": 63488 00:20:21.087 }, 00:20:21.087 { 00:20:21.087 "name": "BaseBdev3", 00:20:21.087 "uuid": "dce2afca-7e95-5d6a-8421-009bbdc22284", 00:20:21.087 "is_configured": true, 00:20:21.087 "data_offset": 2048, 00:20:21.087 "data_size": 63488 00:20:21.087 } 00:20:21.087 ] 00:20:21.087 }' 00:20:21.088 00:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:21.088 00:46:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.655 00:46:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:21.914 [2024-07-25 00:46:44.429079] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:21.914 [2024-07-25 00:46:44.430360] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:21.914 [2024-07-25 00:46:44.432865] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:21.914 [2024-07-25 00:46:44.433019] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:21.914 [2024-07-25 00:46:44.433083] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:21.914 [2024-07-25 00:46:44.433155] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:20:21.914 0 00:20:21.914 00:46:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 128876 00:20:21.914 00:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 128876 ']' 00:20:21.914 00:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 128876 00:20:21.914 00:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:20:21.914 00:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:21.914 00:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 128876 00:20:21.914 00:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:21.914 00:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:21.914 00:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 128876' 00:20:21.914 killing process with pid 128876 00:20:21.914 00:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 128876 00:20:21.914 [2024-07-25 00:46:44.486605] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:21.914 00:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 128876 00:20:22.173 [2024-07-25 00:46:44.690015] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:23.550 00:46:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.6qsMHsgrvc 00:20:23.550 00:46:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:20:23.550 00:46:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 
00:20:23.550 ************************************ 00:20:23.550 END TEST raid_write_error_test 00:20:23.550 ************************************ 00:20:23.550 00:46:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.46 00:20:23.550 00:46:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:20:23.550 00:46:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:23.551 00:46:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:23.551 00:46:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.46 != \0\.\0\0 ]] 00:20:23.551 00:20:23.551 real 0m7.493s 00:20:23.551 user 0m10.786s 00:20:23.551 sys 0m1.089s 00:20:23.551 00:46:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:23.551 00:46:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.551 00:46:45 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:20:23.551 00:46:45 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:20:23.551 00:46:45 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:23.551 00:46:45 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:23.551 00:46:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:23.551 ************************************ 00:20:23.551 START TEST raid_state_function_test 00:20:23.551 ************************************ 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 false 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:23.551 00:46:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=129076 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 129076' 00:20:23.551 Process raid pid: 129076 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 129076 /var/tmp/spdk-raid.sock 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 129076 ']' 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:23.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:23.551 00:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.551 [2024-07-25 00:46:46.081082] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
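With the bdev_svc target started on /var/tmp/spdk-raid.sock (pid 129076 above) and waitforlisten about to confirm the socket, the first thing raid_state_function_test does is create the concat volume before any of its base bdevs exist and verify that it sits in the "configuring" state; the bdev_raid_create and verify_raid_bdev_state entries that follow show exactly that. A condensed sketch of the check, using the RPC socket, arguments and jq filter from this log ($rpc is shorthand introduced here for illustration, not a variable in the test):

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  $rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  # expect "configuring": the base bdevs do not exist yet, so the raid cannot go online
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'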
00:20:23.551 [2024-07-25 00:46:46.081436] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.810 [2024-07-25 00:46:46.239147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.810 [2024-07-25 00:46:46.427756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.069 [2024-07-25 00:46:46.629787] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:24.637 00:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:24.637 00:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:20:24.637 00:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:24.637 [2024-07-25 00:46:47.244115] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:24.637 [2024-07-25 00:46:47.244308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:24.637 [2024-07-25 00:46:47.244405] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:24.637 [2024-07-25 00:46:47.244461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:24.637 [2024-07-25 00:46:47.244536] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:24.637 [2024-07-25 00:46:47.244580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:24.637 00:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:24.637 00:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:24.637 00:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:24.637 00:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:24.637 00:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:24.637 00:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:24.637 00:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:24.637 00:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:24.637 00:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:24.637 00:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:24.637 00:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.637 00:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:24.897 00:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:24.897 "name": "Existed_Raid", 00:20:24.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.897 
"strip_size_kb": 64, 00:20:24.897 "state": "configuring", 00:20:24.897 "raid_level": "concat", 00:20:24.897 "superblock": false, 00:20:24.897 "num_base_bdevs": 3, 00:20:24.897 "num_base_bdevs_discovered": 0, 00:20:24.897 "num_base_bdevs_operational": 3, 00:20:24.897 "base_bdevs_list": [ 00:20:24.897 { 00:20:24.897 "name": "BaseBdev1", 00:20:24.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.897 "is_configured": false, 00:20:24.897 "data_offset": 0, 00:20:24.897 "data_size": 0 00:20:24.897 }, 00:20:24.897 { 00:20:24.897 "name": "BaseBdev2", 00:20:24.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.897 "is_configured": false, 00:20:24.897 "data_offset": 0, 00:20:24.897 "data_size": 0 00:20:24.897 }, 00:20:24.897 { 00:20:24.897 "name": "BaseBdev3", 00:20:24.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.897 "is_configured": false, 00:20:24.897 "data_offset": 0, 00:20:24.897 "data_size": 0 00:20:24.897 } 00:20:24.897 ] 00:20:24.897 }' 00:20:24.897 00:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:24.897 00:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.464 00:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:25.723 [2024-07-25 00:46:48.160187] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:25.723 [2024-07-25 00:46:48.160357] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:20:25.723 00:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:25.723 [2024-07-25 00:46:48.324218] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:25.723 [2024-07-25 00:46:48.324372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:25.723 [2024-07-25 00:46:48.324461] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:25.723 [2024-07-25 00:46:48.324507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:25.723 [2024-07-25 00:46:48.324533] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:25.723 [2024-07-25 00:46:48.324578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:25.723 00:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:25.983 [2024-07-25 00:46:48.579474] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:25.983 BaseBdev1 00:20:25.983 00:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:25.983 00:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:25.983 00:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:25.983 00:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:25.983 00:46:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:25.983 00:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:25.983 00:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:26.241 00:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:26.499 [ 00:20:26.499 { 00:20:26.499 "name": "BaseBdev1", 00:20:26.499 "aliases": [ 00:20:26.499 "1ffab77a-8bf4-41c2-9286-1900db9e5aac" 00:20:26.499 ], 00:20:26.499 "product_name": "Malloc disk", 00:20:26.499 "block_size": 512, 00:20:26.499 "num_blocks": 65536, 00:20:26.499 "uuid": "1ffab77a-8bf4-41c2-9286-1900db9e5aac", 00:20:26.499 "assigned_rate_limits": { 00:20:26.499 "rw_ios_per_sec": 0, 00:20:26.499 "rw_mbytes_per_sec": 0, 00:20:26.499 "r_mbytes_per_sec": 0, 00:20:26.499 "w_mbytes_per_sec": 0 00:20:26.499 }, 00:20:26.499 "claimed": true, 00:20:26.499 "claim_type": "exclusive_write", 00:20:26.499 "zoned": false, 00:20:26.499 "supported_io_types": { 00:20:26.499 "read": true, 00:20:26.499 "write": true, 00:20:26.499 "unmap": true, 00:20:26.499 "flush": true, 00:20:26.499 "reset": true, 00:20:26.499 "nvme_admin": false, 00:20:26.499 "nvme_io": false, 00:20:26.499 "nvme_io_md": false, 00:20:26.499 "write_zeroes": true, 00:20:26.499 "zcopy": true, 00:20:26.499 "get_zone_info": false, 00:20:26.499 "zone_management": false, 00:20:26.499 "zone_append": false, 00:20:26.499 "compare": false, 00:20:26.499 "compare_and_write": false, 00:20:26.499 "abort": true, 00:20:26.499 "seek_hole": false, 00:20:26.499 "seek_data": false, 00:20:26.499 "copy": true, 00:20:26.499 "nvme_iov_md": false 00:20:26.499 }, 00:20:26.499 "memory_domains": [ 00:20:26.499 { 00:20:26.499 "dma_device_id": "system", 00:20:26.499 "dma_device_type": 1 00:20:26.499 }, 00:20:26.499 { 00:20:26.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.499 "dma_device_type": 2 00:20:26.499 } 00:20:26.499 ], 00:20:26.499 "driver_specific": {} 00:20:26.499 } 00:20:26.499 ] 00:20:26.499 00:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:26.499 00:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:26.499 00:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:26.499 00:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:26.499 00:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:26.500 00:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:26.500 00:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:26.500 00:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:26.500 00:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:26.500 00:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:26.500 00:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:26.500 00:46:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.500 00:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:26.500 00:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:26.500 "name": "Existed_Raid", 00:20:26.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.500 "strip_size_kb": 64, 00:20:26.500 "state": "configuring", 00:20:26.500 "raid_level": "concat", 00:20:26.500 "superblock": false, 00:20:26.500 "num_base_bdevs": 3, 00:20:26.500 "num_base_bdevs_discovered": 1, 00:20:26.500 "num_base_bdevs_operational": 3, 00:20:26.500 "base_bdevs_list": [ 00:20:26.500 { 00:20:26.500 "name": "BaseBdev1", 00:20:26.500 "uuid": "1ffab77a-8bf4-41c2-9286-1900db9e5aac", 00:20:26.500 "is_configured": true, 00:20:26.500 "data_offset": 0, 00:20:26.500 "data_size": 65536 00:20:26.500 }, 00:20:26.500 { 00:20:26.500 "name": "BaseBdev2", 00:20:26.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.500 "is_configured": false, 00:20:26.500 "data_offset": 0, 00:20:26.500 "data_size": 0 00:20:26.500 }, 00:20:26.500 { 00:20:26.500 "name": "BaseBdev3", 00:20:26.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.500 "is_configured": false, 00:20:26.500 "data_offset": 0, 00:20:26.500 "data_size": 0 00:20:26.500 } 00:20:26.500 ] 00:20:26.500 }' 00:20:26.500 00:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:26.500 00:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.082 00:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:27.341 [2024-07-25 00:46:49.943737] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:27.341 [2024-07-25 00:46:49.943893] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:20:27.341 00:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:27.600 [2024-07-25 00:46:50.223802] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:27.600 [2024-07-25 00:46:50.225765] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:27.601 [2024-07-25 00:46:50.225951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:27.601 [2024-07-25 00:46:50.226087] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:27.601 [2024-07-25 00:46:50.226160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:27.601 00:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:27.601 00:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:27.601 00:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:27.601 00:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:27.601 00:46:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:27.601 00:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:27.601 00:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:27.601 00:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:27.601 00:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:27.601 00:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:27.601 00:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:27.601 00:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:27.601 00:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:27.601 00:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.859 00:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:27.859 "name": "Existed_Raid", 00:20:27.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.859 "strip_size_kb": 64, 00:20:27.859 "state": "configuring", 00:20:27.859 "raid_level": "concat", 00:20:27.859 "superblock": false, 00:20:27.859 "num_base_bdevs": 3, 00:20:27.859 "num_base_bdevs_discovered": 1, 00:20:27.859 "num_base_bdevs_operational": 3, 00:20:27.859 "base_bdevs_list": [ 00:20:27.859 { 00:20:27.859 "name": "BaseBdev1", 00:20:27.859 "uuid": "1ffab77a-8bf4-41c2-9286-1900db9e5aac", 00:20:27.859 "is_configured": true, 00:20:27.859 "data_offset": 0, 00:20:27.859 "data_size": 65536 00:20:27.859 }, 00:20:27.859 { 00:20:27.860 "name": "BaseBdev2", 00:20:27.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.860 "is_configured": false, 00:20:27.860 "data_offset": 0, 00:20:27.860 "data_size": 0 00:20:27.860 }, 00:20:27.860 { 00:20:27.860 "name": "BaseBdev3", 00:20:27.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.860 "is_configured": false, 00:20:27.860 "data_offset": 0, 00:20:27.860 "data_size": 0 00:20:27.860 } 00:20:27.860 ] 00:20:27.860 }' 00:20:27.860 00:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:27.860 00:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.428 00:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:28.686 [2024-07-25 00:46:51.335195] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:28.686 BaseBdev2 00:20:28.945 00:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:28.945 00:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:28.945 00:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:28.945 00:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:28.945 00:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:28.945 00:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:20:28.945 00:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:28.945 00:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:29.203 [ 00:20:29.203 { 00:20:29.203 "name": "BaseBdev2", 00:20:29.203 "aliases": [ 00:20:29.203 "4505f87c-cb26-4686-8f2f-fa96df130eab" 00:20:29.203 ], 00:20:29.203 "product_name": "Malloc disk", 00:20:29.203 "block_size": 512, 00:20:29.203 "num_blocks": 65536, 00:20:29.203 "uuid": "4505f87c-cb26-4686-8f2f-fa96df130eab", 00:20:29.203 "assigned_rate_limits": { 00:20:29.203 "rw_ios_per_sec": 0, 00:20:29.203 "rw_mbytes_per_sec": 0, 00:20:29.203 "r_mbytes_per_sec": 0, 00:20:29.203 "w_mbytes_per_sec": 0 00:20:29.203 }, 00:20:29.203 "claimed": true, 00:20:29.203 "claim_type": "exclusive_write", 00:20:29.203 "zoned": false, 00:20:29.203 "supported_io_types": { 00:20:29.203 "read": true, 00:20:29.203 "write": true, 00:20:29.203 "unmap": true, 00:20:29.203 "flush": true, 00:20:29.203 "reset": true, 00:20:29.203 "nvme_admin": false, 00:20:29.203 "nvme_io": false, 00:20:29.203 "nvme_io_md": false, 00:20:29.203 "write_zeroes": true, 00:20:29.203 "zcopy": true, 00:20:29.203 "get_zone_info": false, 00:20:29.203 "zone_management": false, 00:20:29.203 "zone_append": false, 00:20:29.203 "compare": false, 00:20:29.203 "compare_and_write": false, 00:20:29.203 "abort": true, 00:20:29.203 "seek_hole": false, 00:20:29.204 "seek_data": false, 00:20:29.204 "copy": true, 00:20:29.204 "nvme_iov_md": false 00:20:29.204 }, 00:20:29.204 "memory_domains": [ 00:20:29.204 { 00:20:29.204 "dma_device_id": "system", 00:20:29.204 "dma_device_type": 1 00:20:29.204 }, 00:20:29.204 { 00:20:29.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:29.204 "dma_device_type": 2 00:20:29.204 } 00:20:29.204 ], 00:20:29.204 "driver_specific": {} 00:20:29.204 } 00:20:29.204 ] 00:20:29.204 00:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:29.204 00:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:29.204 00:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:29.204 00:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:29.204 00:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:29.204 00:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:29.204 00:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:29.204 00:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:29.204 00:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:29.204 00:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:29.204 00:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:29.204 00:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:29.204 00:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:29.204 
00:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.204 00:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:29.462 00:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:29.462 "name": "Existed_Raid", 00:20:29.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.462 "strip_size_kb": 64, 00:20:29.462 "state": "configuring", 00:20:29.462 "raid_level": "concat", 00:20:29.462 "superblock": false, 00:20:29.462 "num_base_bdevs": 3, 00:20:29.462 "num_base_bdevs_discovered": 2, 00:20:29.463 "num_base_bdevs_operational": 3, 00:20:29.463 "base_bdevs_list": [ 00:20:29.463 { 00:20:29.463 "name": "BaseBdev1", 00:20:29.463 "uuid": "1ffab77a-8bf4-41c2-9286-1900db9e5aac", 00:20:29.463 "is_configured": true, 00:20:29.463 "data_offset": 0, 00:20:29.463 "data_size": 65536 00:20:29.463 }, 00:20:29.463 { 00:20:29.463 "name": "BaseBdev2", 00:20:29.463 "uuid": "4505f87c-cb26-4686-8f2f-fa96df130eab", 00:20:29.463 "is_configured": true, 00:20:29.463 "data_offset": 0, 00:20:29.463 "data_size": 65536 00:20:29.463 }, 00:20:29.463 { 00:20:29.463 "name": "BaseBdev3", 00:20:29.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.463 "is_configured": false, 00:20:29.463 "data_offset": 0, 00:20:29.463 "data_size": 0 00:20:29.463 } 00:20:29.463 ] 00:20:29.463 }' 00:20:29.463 00:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:29.463 00:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.030 00:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:30.030 [2024-07-25 00:46:52.668718] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:30.030 [2024-07-25 00:46:52.668925] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:20:30.030 [2024-07-25 00:46:52.668965] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:30.030 [2024-07-25 00:46:52.669147] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:20:30.030 [2024-07-25 00:46:52.669563] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:20:30.030 [2024-07-25 00:46:52.669692] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:20:30.030 [2024-07-25 00:46:52.669985] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:30.030 BaseBdev3 00:20:30.288 00:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:30.288 00:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:30.288 00:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:30.288 00:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:30.288 00:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:30.288 00:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:30.288 00:46:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:30.288 00:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:30.547 [ 00:20:30.547 { 00:20:30.547 "name": "BaseBdev3", 00:20:30.547 "aliases": [ 00:20:30.547 "ed608a33-1cbf-4a14-aa5b-39e181daf25e" 00:20:30.547 ], 00:20:30.547 "product_name": "Malloc disk", 00:20:30.547 "block_size": 512, 00:20:30.547 "num_blocks": 65536, 00:20:30.547 "uuid": "ed608a33-1cbf-4a14-aa5b-39e181daf25e", 00:20:30.547 "assigned_rate_limits": { 00:20:30.547 "rw_ios_per_sec": 0, 00:20:30.547 "rw_mbytes_per_sec": 0, 00:20:30.547 "r_mbytes_per_sec": 0, 00:20:30.547 "w_mbytes_per_sec": 0 00:20:30.547 }, 00:20:30.547 "claimed": true, 00:20:30.547 "claim_type": "exclusive_write", 00:20:30.547 "zoned": false, 00:20:30.547 "supported_io_types": { 00:20:30.547 "read": true, 00:20:30.547 "write": true, 00:20:30.547 "unmap": true, 00:20:30.547 "flush": true, 00:20:30.547 "reset": true, 00:20:30.547 "nvme_admin": false, 00:20:30.547 "nvme_io": false, 00:20:30.547 "nvme_io_md": false, 00:20:30.547 "write_zeroes": true, 00:20:30.547 "zcopy": true, 00:20:30.547 "get_zone_info": false, 00:20:30.547 "zone_management": false, 00:20:30.547 "zone_append": false, 00:20:30.547 "compare": false, 00:20:30.547 "compare_and_write": false, 00:20:30.547 "abort": true, 00:20:30.547 "seek_hole": false, 00:20:30.547 "seek_data": false, 00:20:30.547 "copy": true, 00:20:30.547 "nvme_iov_md": false 00:20:30.547 }, 00:20:30.547 "memory_domains": [ 00:20:30.547 { 00:20:30.547 "dma_device_id": "system", 00:20:30.547 "dma_device_type": 1 00:20:30.547 }, 00:20:30.547 { 00:20:30.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.548 "dma_device_type": 2 00:20:30.548 } 00:20:30.548 ], 00:20:30.548 "driver_specific": {} 00:20:30.548 } 00:20:30.548 ] 00:20:30.548 00:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:30.548 00:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:30.548 00:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:30.548 00:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:20:30.548 00:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:30.548 00:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:30.548 00:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:30.548 00:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:30.548 00:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:30.548 00:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:30.548 00:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:30.548 00:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:30.548 00:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:30.548 00:46:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.548 00:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:30.807 00:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:30.807 "name": "Existed_Raid", 00:20:30.807 "uuid": "0656b712-a491-49d8-a583-68f74c828a0d", 00:20:30.807 "strip_size_kb": 64, 00:20:30.807 "state": "online", 00:20:30.807 "raid_level": "concat", 00:20:30.807 "superblock": false, 00:20:30.807 "num_base_bdevs": 3, 00:20:30.807 "num_base_bdevs_discovered": 3, 00:20:30.807 "num_base_bdevs_operational": 3, 00:20:30.807 "base_bdevs_list": [ 00:20:30.807 { 00:20:30.807 "name": "BaseBdev1", 00:20:30.807 "uuid": "1ffab77a-8bf4-41c2-9286-1900db9e5aac", 00:20:30.807 "is_configured": true, 00:20:30.807 "data_offset": 0, 00:20:30.807 "data_size": 65536 00:20:30.807 }, 00:20:30.807 { 00:20:30.807 "name": "BaseBdev2", 00:20:30.807 "uuid": "4505f87c-cb26-4686-8f2f-fa96df130eab", 00:20:30.807 "is_configured": true, 00:20:30.807 "data_offset": 0, 00:20:30.807 "data_size": 65536 00:20:30.807 }, 00:20:30.807 { 00:20:30.807 "name": "BaseBdev3", 00:20:30.807 "uuid": "ed608a33-1cbf-4a14-aa5b-39e181daf25e", 00:20:30.807 "is_configured": true, 00:20:30.807 "data_offset": 0, 00:20:30.807 "data_size": 65536 00:20:30.807 } 00:20:30.807 ] 00:20:30.807 }' 00:20:30.807 00:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:30.807 00:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.374 00:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:31.374 00:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:31.374 00:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:31.374 00:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:31.374 00:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:31.374 00:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:31.374 00:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:31.374 00:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:31.633 [2024-07-25 00:46:54.173210] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:31.633 00:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:31.633 "name": "Existed_Raid", 00:20:31.633 "aliases": [ 00:20:31.633 "0656b712-a491-49d8-a583-68f74c828a0d" 00:20:31.633 ], 00:20:31.633 "product_name": "Raid Volume", 00:20:31.633 "block_size": 512, 00:20:31.633 "num_blocks": 196608, 00:20:31.633 "uuid": "0656b712-a491-49d8-a583-68f74c828a0d", 00:20:31.633 "assigned_rate_limits": { 00:20:31.633 "rw_ios_per_sec": 0, 00:20:31.633 "rw_mbytes_per_sec": 0, 00:20:31.633 "r_mbytes_per_sec": 0, 00:20:31.633 "w_mbytes_per_sec": 0 00:20:31.633 }, 00:20:31.633 "claimed": false, 00:20:31.633 "zoned": false, 00:20:31.633 "supported_io_types": { 00:20:31.633 "read": true, 00:20:31.633 "write": true, 00:20:31.633 "unmap": true, 00:20:31.633 "flush": true, 
00:20:31.633 "reset": true, 00:20:31.633 "nvme_admin": false, 00:20:31.633 "nvme_io": false, 00:20:31.633 "nvme_io_md": false, 00:20:31.633 "write_zeroes": true, 00:20:31.633 "zcopy": false, 00:20:31.633 "get_zone_info": false, 00:20:31.633 "zone_management": false, 00:20:31.633 "zone_append": false, 00:20:31.633 "compare": false, 00:20:31.633 "compare_and_write": false, 00:20:31.633 "abort": false, 00:20:31.633 "seek_hole": false, 00:20:31.633 "seek_data": false, 00:20:31.633 "copy": false, 00:20:31.634 "nvme_iov_md": false 00:20:31.634 }, 00:20:31.634 "memory_domains": [ 00:20:31.634 { 00:20:31.634 "dma_device_id": "system", 00:20:31.634 "dma_device_type": 1 00:20:31.634 }, 00:20:31.634 { 00:20:31.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.634 "dma_device_type": 2 00:20:31.634 }, 00:20:31.634 { 00:20:31.634 "dma_device_id": "system", 00:20:31.634 "dma_device_type": 1 00:20:31.634 }, 00:20:31.634 { 00:20:31.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.634 "dma_device_type": 2 00:20:31.634 }, 00:20:31.634 { 00:20:31.634 "dma_device_id": "system", 00:20:31.634 "dma_device_type": 1 00:20:31.634 }, 00:20:31.634 { 00:20:31.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.634 "dma_device_type": 2 00:20:31.634 } 00:20:31.634 ], 00:20:31.634 "driver_specific": { 00:20:31.634 "raid": { 00:20:31.634 "uuid": "0656b712-a491-49d8-a583-68f74c828a0d", 00:20:31.634 "strip_size_kb": 64, 00:20:31.634 "state": "online", 00:20:31.634 "raid_level": "concat", 00:20:31.634 "superblock": false, 00:20:31.634 "num_base_bdevs": 3, 00:20:31.634 "num_base_bdevs_discovered": 3, 00:20:31.634 "num_base_bdevs_operational": 3, 00:20:31.634 "base_bdevs_list": [ 00:20:31.634 { 00:20:31.634 "name": "BaseBdev1", 00:20:31.634 "uuid": "1ffab77a-8bf4-41c2-9286-1900db9e5aac", 00:20:31.634 "is_configured": true, 00:20:31.634 "data_offset": 0, 00:20:31.634 "data_size": 65536 00:20:31.634 }, 00:20:31.634 { 00:20:31.634 "name": "BaseBdev2", 00:20:31.634 "uuid": "4505f87c-cb26-4686-8f2f-fa96df130eab", 00:20:31.634 "is_configured": true, 00:20:31.634 "data_offset": 0, 00:20:31.634 "data_size": 65536 00:20:31.634 }, 00:20:31.634 { 00:20:31.634 "name": "BaseBdev3", 00:20:31.634 "uuid": "ed608a33-1cbf-4a14-aa5b-39e181daf25e", 00:20:31.634 "is_configured": true, 00:20:31.634 "data_offset": 0, 00:20:31.634 "data_size": 65536 00:20:31.634 } 00:20:31.634 ] 00:20:31.634 } 00:20:31.634 } 00:20:31.634 }' 00:20:31.634 00:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:31.634 00:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:31.634 BaseBdev2 00:20:31.634 BaseBdev3' 00:20:31.634 00:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:31.634 00:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:31.634 00:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:31.892 00:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:31.892 "name": "BaseBdev1", 00:20:31.892 "aliases": [ 00:20:31.892 "1ffab77a-8bf4-41c2-9286-1900db9e5aac" 00:20:31.892 ], 00:20:31.892 "product_name": "Malloc disk", 00:20:31.892 "block_size": 512, 00:20:31.892 "num_blocks": 65536, 00:20:31.892 "uuid": "1ffab77a-8bf4-41c2-9286-1900db9e5aac", 
00:20:31.892 "assigned_rate_limits": { 00:20:31.892 "rw_ios_per_sec": 0, 00:20:31.892 "rw_mbytes_per_sec": 0, 00:20:31.892 "r_mbytes_per_sec": 0, 00:20:31.892 "w_mbytes_per_sec": 0 00:20:31.892 }, 00:20:31.892 "claimed": true, 00:20:31.892 "claim_type": "exclusive_write", 00:20:31.892 "zoned": false, 00:20:31.892 "supported_io_types": { 00:20:31.892 "read": true, 00:20:31.892 "write": true, 00:20:31.892 "unmap": true, 00:20:31.892 "flush": true, 00:20:31.892 "reset": true, 00:20:31.892 "nvme_admin": false, 00:20:31.892 "nvme_io": false, 00:20:31.892 "nvme_io_md": false, 00:20:31.892 "write_zeroes": true, 00:20:31.892 "zcopy": true, 00:20:31.892 "get_zone_info": false, 00:20:31.892 "zone_management": false, 00:20:31.892 "zone_append": false, 00:20:31.892 "compare": false, 00:20:31.892 "compare_and_write": false, 00:20:31.892 "abort": true, 00:20:31.892 "seek_hole": false, 00:20:31.892 "seek_data": false, 00:20:31.892 "copy": true, 00:20:31.892 "nvme_iov_md": false 00:20:31.892 }, 00:20:31.892 "memory_domains": [ 00:20:31.892 { 00:20:31.892 "dma_device_id": "system", 00:20:31.892 "dma_device_type": 1 00:20:31.892 }, 00:20:31.892 { 00:20:31.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.892 "dma_device_type": 2 00:20:31.892 } 00:20:31.892 ], 00:20:31.892 "driver_specific": {} 00:20:31.892 }' 00:20:31.892 00:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:31.892 00:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:32.150 00:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:32.150 00:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:32.150 00:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:32.150 00:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:32.150 00:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:32.150 00:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:32.151 00:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:32.151 00:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:32.409 00:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:32.409 00:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:32.409 00:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:32.409 00:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:32.409 00:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:32.668 00:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:32.668 "name": "BaseBdev2", 00:20:32.668 "aliases": [ 00:20:32.668 "4505f87c-cb26-4686-8f2f-fa96df130eab" 00:20:32.668 ], 00:20:32.668 "product_name": "Malloc disk", 00:20:32.668 "block_size": 512, 00:20:32.668 "num_blocks": 65536, 00:20:32.668 "uuid": "4505f87c-cb26-4686-8f2f-fa96df130eab", 00:20:32.668 "assigned_rate_limits": { 00:20:32.668 "rw_ios_per_sec": 0, 00:20:32.668 "rw_mbytes_per_sec": 0, 00:20:32.668 "r_mbytes_per_sec": 0, 00:20:32.668 "w_mbytes_per_sec": 0 00:20:32.668 }, 
00:20:32.668 "claimed": true, 00:20:32.668 "claim_type": "exclusive_write", 00:20:32.668 "zoned": false, 00:20:32.668 "supported_io_types": { 00:20:32.668 "read": true, 00:20:32.668 "write": true, 00:20:32.668 "unmap": true, 00:20:32.668 "flush": true, 00:20:32.668 "reset": true, 00:20:32.668 "nvme_admin": false, 00:20:32.668 "nvme_io": false, 00:20:32.668 "nvme_io_md": false, 00:20:32.668 "write_zeroes": true, 00:20:32.668 "zcopy": true, 00:20:32.668 "get_zone_info": false, 00:20:32.668 "zone_management": false, 00:20:32.668 "zone_append": false, 00:20:32.668 "compare": false, 00:20:32.668 "compare_and_write": false, 00:20:32.668 "abort": true, 00:20:32.668 "seek_hole": false, 00:20:32.668 "seek_data": false, 00:20:32.668 "copy": true, 00:20:32.668 "nvme_iov_md": false 00:20:32.668 }, 00:20:32.668 "memory_domains": [ 00:20:32.668 { 00:20:32.668 "dma_device_id": "system", 00:20:32.668 "dma_device_type": 1 00:20:32.668 }, 00:20:32.668 { 00:20:32.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:32.668 "dma_device_type": 2 00:20:32.668 } 00:20:32.668 ], 00:20:32.668 "driver_specific": {} 00:20:32.668 }' 00:20:32.668 00:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:32.668 00:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:32.668 00:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:32.668 00:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:32.668 00:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:32.668 00:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:32.668 00:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:32.927 00:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:32.927 00:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:32.927 00:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:32.927 00:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:32.927 00:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:32.927 00:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:32.927 00:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:32.927 00:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:33.186 00:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:33.186 "name": "BaseBdev3", 00:20:33.186 "aliases": [ 00:20:33.186 "ed608a33-1cbf-4a14-aa5b-39e181daf25e" 00:20:33.186 ], 00:20:33.186 "product_name": "Malloc disk", 00:20:33.186 "block_size": 512, 00:20:33.186 "num_blocks": 65536, 00:20:33.186 "uuid": "ed608a33-1cbf-4a14-aa5b-39e181daf25e", 00:20:33.186 "assigned_rate_limits": { 00:20:33.186 "rw_ios_per_sec": 0, 00:20:33.186 "rw_mbytes_per_sec": 0, 00:20:33.186 "r_mbytes_per_sec": 0, 00:20:33.186 "w_mbytes_per_sec": 0 00:20:33.186 }, 00:20:33.186 "claimed": true, 00:20:33.186 "claim_type": "exclusive_write", 00:20:33.186 "zoned": false, 00:20:33.186 "supported_io_types": { 00:20:33.186 "read": true, 00:20:33.186 "write": true, 
00:20:33.186 "unmap": true, 00:20:33.186 "flush": true, 00:20:33.186 "reset": true, 00:20:33.186 "nvme_admin": false, 00:20:33.186 "nvme_io": false, 00:20:33.186 "nvme_io_md": false, 00:20:33.186 "write_zeroes": true, 00:20:33.186 "zcopy": true, 00:20:33.186 "get_zone_info": false, 00:20:33.186 "zone_management": false, 00:20:33.186 "zone_append": false, 00:20:33.186 "compare": false, 00:20:33.186 "compare_and_write": false, 00:20:33.186 "abort": true, 00:20:33.186 "seek_hole": false, 00:20:33.186 "seek_data": false, 00:20:33.186 "copy": true, 00:20:33.186 "nvme_iov_md": false 00:20:33.186 }, 00:20:33.186 "memory_domains": [ 00:20:33.186 { 00:20:33.186 "dma_device_id": "system", 00:20:33.186 "dma_device_type": 1 00:20:33.186 }, 00:20:33.186 { 00:20:33.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:33.186 "dma_device_type": 2 00:20:33.186 } 00:20:33.186 ], 00:20:33.186 "driver_specific": {} 00:20:33.186 }' 00:20:33.186 00:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:33.186 00:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:33.186 00:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:33.186 00:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:33.186 00:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:33.186 00:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:33.186 00:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:33.445 00:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:33.445 00:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:33.445 00:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:33.445 00:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:33.445 00:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:33.445 00:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:33.704 [2024-07-25 00:46:56.125460] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:33.704 [2024-07-25 00:46:56.125606] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:33.704 [2024-07-25 00:46:56.125723] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:33.704 00:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:33.704 00:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:20:33.704 00:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:33.704 00:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:33.704 00:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:20:33.704 00:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:20:33.704 00:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:33.704 00:46:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:20:33.704 00:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:33.704 00:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:33.704 00:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:33.704 00:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:33.704 00:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:33.704 00:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:33.704 00:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:33.704 00:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.704 00:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:33.963 00:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:33.963 "name": "Existed_Raid", 00:20:33.963 "uuid": "0656b712-a491-49d8-a583-68f74c828a0d", 00:20:33.963 "strip_size_kb": 64, 00:20:33.963 "state": "offline", 00:20:33.963 "raid_level": "concat", 00:20:33.963 "superblock": false, 00:20:33.963 "num_base_bdevs": 3, 00:20:33.963 "num_base_bdevs_discovered": 2, 00:20:33.963 "num_base_bdevs_operational": 2, 00:20:33.963 "base_bdevs_list": [ 00:20:33.963 { 00:20:33.963 "name": null, 00:20:33.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.963 "is_configured": false, 00:20:33.963 "data_offset": 0, 00:20:33.963 "data_size": 65536 00:20:33.963 }, 00:20:33.963 { 00:20:33.963 "name": "BaseBdev2", 00:20:33.963 "uuid": "4505f87c-cb26-4686-8f2f-fa96df130eab", 00:20:33.963 "is_configured": true, 00:20:33.963 "data_offset": 0, 00:20:33.963 "data_size": 65536 00:20:33.963 }, 00:20:33.963 { 00:20:33.963 "name": "BaseBdev3", 00:20:33.963 "uuid": "ed608a33-1cbf-4a14-aa5b-39e181daf25e", 00:20:33.963 "is_configured": true, 00:20:33.963 "data_offset": 0, 00:20:33.963 "data_size": 65536 00:20:33.963 } 00:20:33.963 ] 00:20:33.963 }' 00:20:33.963 00:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:33.963 00:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.530 00:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:34.530 00:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:34.530 00:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.530 00:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:34.788 00:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:34.788 00:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:34.788 00:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:35.048 [2024-07-25 00:46:57.526259] 
bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:35.048 00:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:35.048 00:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:35.048 00:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.048 00:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:35.306 00:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:35.306 00:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:35.306 00:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:35.564 [2024-07-25 00:46:58.114841] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:35.564 [2024-07-25 00:46:58.115000] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:20:35.822 00:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:35.822 00:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:35.822 00:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.822 00:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:36.080 00:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:36.080 00:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:36.080 00:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:20:36.080 00:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:36.080 00:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:36.080 00:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:36.080 BaseBdev2 00:20:36.080 00:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:36.080 00:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:36.080 00:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:36.080 00:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:36.080 00:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:36.080 00:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:36.080 00:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:36.338 00:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 
00:20:36.596 [ 00:20:36.596 { 00:20:36.596 "name": "BaseBdev2", 00:20:36.596 "aliases": [ 00:20:36.596 "9022d05d-5ec5-4055-ba73-4c4df7efaba8" 00:20:36.596 ], 00:20:36.596 "product_name": "Malloc disk", 00:20:36.596 "block_size": 512, 00:20:36.596 "num_blocks": 65536, 00:20:36.596 "uuid": "9022d05d-5ec5-4055-ba73-4c4df7efaba8", 00:20:36.596 "assigned_rate_limits": { 00:20:36.596 "rw_ios_per_sec": 0, 00:20:36.596 "rw_mbytes_per_sec": 0, 00:20:36.596 "r_mbytes_per_sec": 0, 00:20:36.596 "w_mbytes_per_sec": 0 00:20:36.596 }, 00:20:36.596 "claimed": false, 00:20:36.596 "zoned": false, 00:20:36.596 "supported_io_types": { 00:20:36.596 "read": true, 00:20:36.596 "write": true, 00:20:36.596 "unmap": true, 00:20:36.596 "flush": true, 00:20:36.596 "reset": true, 00:20:36.596 "nvme_admin": false, 00:20:36.596 "nvme_io": false, 00:20:36.596 "nvme_io_md": false, 00:20:36.596 "write_zeroes": true, 00:20:36.596 "zcopy": true, 00:20:36.596 "get_zone_info": false, 00:20:36.596 "zone_management": false, 00:20:36.596 "zone_append": false, 00:20:36.596 "compare": false, 00:20:36.596 "compare_and_write": false, 00:20:36.596 "abort": true, 00:20:36.596 "seek_hole": false, 00:20:36.596 "seek_data": false, 00:20:36.596 "copy": true, 00:20:36.596 "nvme_iov_md": false 00:20:36.596 }, 00:20:36.596 "memory_domains": [ 00:20:36.596 { 00:20:36.596 "dma_device_id": "system", 00:20:36.596 "dma_device_type": 1 00:20:36.596 }, 00:20:36.596 { 00:20:36.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.596 "dma_device_type": 2 00:20:36.596 } 00:20:36.596 ], 00:20:36.596 "driver_specific": {} 00:20:36.596 } 00:20:36.596 ] 00:20:36.596 00:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:36.596 00:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:36.596 00:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:36.596 00:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:36.596 BaseBdev3 00:20:36.854 00:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:36.854 00:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:36.854 00:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:36.854 00:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:36.854 00:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:36.854 00:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:36.854 00:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:36.854 00:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:37.112 [ 00:20:37.112 { 00:20:37.112 "name": "BaseBdev3", 00:20:37.112 "aliases": [ 00:20:37.112 "b4af2455-2645-4ada-89a6-2f1ec4eab463" 00:20:37.112 ], 00:20:37.112 "product_name": "Malloc disk", 00:20:37.112 "block_size": 512, 00:20:37.112 "num_blocks": 65536, 00:20:37.112 "uuid": "b4af2455-2645-4ada-89a6-2f1ec4eab463", 00:20:37.112 
"assigned_rate_limits": { 00:20:37.112 "rw_ios_per_sec": 0, 00:20:37.112 "rw_mbytes_per_sec": 0, 00:20:37.112 "r_mbytes_per_sec": 0, 00:20:37.112 "w_mbytes_per_sec": 0 00:20:37.112 }, 00:20:37.112 "claimed": false, 00:20:37.112 "zoned": false, 00:20:37.112 "supported_io_types": { 00:20:37.112 "read": true, 00:20:37.112 "write": true, 00:20:37.112 "unmap": true, 00:20:37.112 "flush": true, 00:20:37.112 "reset": true, 00:20:37.112 "nvme_admin": false, 00:20:37.112 "nvme_io": false, 00:20:37.112 "nvme_io_md": false, 00:20:37.112 "write_zeroes": true, 00:20:37.112 "zcopy": true, 00:20:37.112 "get_zone_info": false, 00:20:37.112 "zone_management": false, 00:20:37.112 "zone_append": false, 00:20:37.112 "compare": false, 00:20:37.112 "compare_and_write": false, 00:20:37.112 "abort": true, 00:20:37.113 "seek_hole": false, 00:20:37.113 "seek_data": false, 00:20:37.113 "copy": true, 00:20:37.113 "nvme_iov_md": false 00:20:37.113 }, 00:20:37.113 "memory_domains": [ 00:20:37.113 { 00:20:37.113 "dma_device_id": "system", 00:20:37.113 "dma_device_type": 1 00:20:37.113 }, 00:20:37.113 { 00:20:37.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:37.113 "dma_device_type": 2 00:20:37.113 } 00:20:37.113 ], 00:20:37.113 "driver_specific": {} 00:20:37.113 } 00:20:37.113 ] 00:20:37.113 00:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:37.113 00:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:37.113 00:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:37.113 00:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:37.113 [2024-07-25 00:46:59.738615] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:37.113 [2024-07-25 00:46:59.740535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:37.113 [2024-07-25 00:46:59.740942] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:37.113 [2024-07-25 00:46:59.746708] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:37.113 00:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:37.113 00:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:37.113 00:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:37.113 00:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:37.113 00:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:37.113 00:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:37.113 00:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:37.113 00:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:37.371 00:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:37.371 00:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:37.371 00:46:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.371 00:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:37.371 00:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:37.371 "name": "Existed_Raid", 00:20:37.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.371 "strip_size_kb": 64, 00:20:37.371 "state": "configuring", 00:20:37.371 "raid_level": "concat", 00:20:37.371 "superblock": false, 00:20:37.371 "num_base_bdevs": 3, 00:20:37.371 "num_base_bdevs_discovered": 2, 00:20:37.371 "num_base_bdevs_operational": 3, 00:20:37.371 "base_bdevs_list": [ 00:20:37.371 { 00:20:37.371 "name": "BaseBdev1", 00:20:37.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.371 "is_configured": false, 00:20:37.371 "data_offset": 0, 00:20:37.371 "data_size": 0 00:20:37.371 }, 00:20:37.371 { 00:20:37.371 "name": "BaseBdev2", 00:20:37.371 "uuid": "9022d05d-5ec5-4055-ba73-4c4df7efaba8", 00:20:37.371 "is_configured": true, 00:20:37.371 "data_offset": 0, 00:20:37.371 "data_size": 65536 00:20:37.371 }, 00:20:37.371 { 00:20:37.371 "name": "BaseBdev3", 00:20:37.371 "uuid": "b4af2455-2645-4ada-89a6-2f1ec4eab463", 00:20:37.371 "is_configured": true, 00:20:37.371 "data_offset": 0, 00:20:37.371 "data_size": 65536 00:20:37.371 } 00:20:37.371 ] 00:20:37.371 }' 00:20:37.371 00:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:37.371 00:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.938 00:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:38.197 [2024-07-25 00:47:00.775428] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:38.197 00:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:38.197 00:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:38.197 00:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:38.197 00:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:38.197 00:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:38.197 00:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:38.197 00:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:38.197 00:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:38.197 00:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:38.197 00:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:38.197 00:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.197 00:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:38.455 00:47:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:38.455 "name": "Existed_Raid", 00:20:38.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.455 "strip_size_kb": 64, 00:20:38.455 "state": "configuring", 00:20:38.455 "raid_level": "concat", 00:20:38.455 "superblock": false, 00:20:38.455 "num_base_bdevs": 3, 00:20:38.455 "num_base_bdevs_discovered": 1, 00:20:38.455 "num_base_bdevs_operational": 3, 00:20:38.455 "base_bdevs_list": [ 00:20:38.455 { 00:20:38.455 "name": "BaseBdev1", 00:20:38.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.455 "is_configured": false, 00:20:38.455 "data_offset": 0, 00:20:38.455 "data_size": 0 00:20:38.455 }, 00:20:38.455 { 00:20:38.455 "name": null, 00:20:38.455 "uuid": "9022d05d-5ec5-4055-ba73-4c4df7efaba8", 00:20:38.455 "is_configured": false, 00:20:38.455 "data_offset": 0, 00:20:38.455 "data_size": 65536 00:20:38.455 }, 00:20:38.455 { 00:20:38.455 "name": "BaseBdev3", 00:20:38.455 "uuid": "b4af2455-2645-4ada-89a6-2f1ec4eab463", 00:20:38.455 "is_configured": true, 00:20:38.455 "data_offset": 0, 00:20:38.455 "data_size": 65536 00:20:38.455 } 00:20:38.455 ] 00:20:38.455 }' 00:20:38.455 00:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:38.456 00:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.023 00:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:39.023 00:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.281 00:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:39.281 00:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:39.540 [2024-07-25 00:47:02.015618] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:39.540 BaseBdev1 00:20:39.540 00:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:39.540 00:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:39.540 00:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:39.540 00:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:39.540 00:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:39.540 00:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:39.540 00:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:39.798 00:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:40.057 [ 00:20:40.057 { 00:20:40.057 "name": "BaseBdev1", 00:20:40.057 "aliases": [ 00:20:40.057 "4d967910-d673-40b2-aa4f-7986d55f7535" 00:20:40.057 ], 00:20:40.057 "product_name": "Malloc disk", 00:20:40.057 "block_size": 512, 00:20:40.057 "num_blocks": 65536, 00:20:40.057 "uuid": "4d967910-d673-40b2-aa4f-7986d55f7535", 00:20:40.057 "assigned_rate_limits": { 00:20:40.057 
"rw_ios_per_sec": 0, 00:20:40.057 "rw_mbytes_per_sec": 0, 00:20:40.057 "r_mbytes_per_sec": 0, 00:20:40.057 "w_mbytes_per_sec": 0 00:20:40.057 }, 00:20:40.057 "claimed": true, 00:20:40.057 "claim_type": "exclusive_write", 00:20:40.057 "zoned": false, 00:20:40.057 "supported_io_types": { 00:20:40.057 "read": true, 00:20:40.057 "write": true, 00:20:40.057 "unmap": true, 00:20:40.057 "flush": true, 00:20:40.057 "reset": true, 00:20:40.057 "nvme_admin": false, 00:20:40.057 "nvme_io": false, 00:20:40.057 "nvme_io_md": false, 00:20:40.057 "write_zeroes": true, 00:20:40.057 "zcopy": true, 00:20:40.057 "get_zone_info": false, 00:20:40.057 "zone_management": false, 00:20:40.057 "zone_append": false, 00:20:40.057 "compare": false, 00:20:40.057 "compare_and_write": false, 00:20:40.057 "abort": true, 00:20:40.057 "seek_hole": false, 00:20:40.057 "seek_data": false, 00:20:40.057 "copy": true, 00:20:40.057 "nvme_iov_md": false 00:20:40.057 }, 00:20:40.057 "memory_domains": [ 00:20:40.057 { 00:20:40.057 "dma_device_id": "system", 00:20:40.057 "dma_device_type": 1 00:20:40.057 }, 00:20:40.057 { 00:20:40.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.057 "dma_device_type": 2 00:20:40.057 } 00:20:40.057 ], 00:20:40.057 "driver_specific": {} 00:20:40.057 } 00:20:40.057 ] 00:20:40.057 00:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:40.057 00:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:40.057 00:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:40.057 00:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:40.057 00:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:40.057 00:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:40.057 00:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:40.057 00:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:40.057 00:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:40.057 00:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:40.057 00:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:40.057 00:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:40.057 00:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.057 00:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:40.057 "name": "Existed_Raid", 00:20:40.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.057 "strip_size_kb": 64, 00:20:40.057 "state": "configuring", 00:20:40.057 "raid_level": "concat", 00:20:40.057 "superblock": false, 00:20:40.057 "num_base_bdevs": 3, 00:20:40.057 "num_base_bdevs_discovered": 2, 00:20:40.057 "num_base_bdevs_operational": 3, 00:20:40.057 "base_bdevs_list": [ 00:20:40.057 { 00:20:40.057 "name": "BaseBdev1", 00:20:40.057 "uuid": "4d967910-d673-40b2-aa4f-7986d55f7535", 00:20:40.057 "is_configured": true, 00:20:40.057 "data_offset": 0, 00:20:40.057 
"data_size": 65536 00:20:40.057 }, 00:20:40.057 { 00:20:40.057 "name": null, 00:20:40.057 "uuid": "9022d05d-5ec5-4055-ba73-4c4df7efaba8", 00:20:40.057 "is_configured": false, 00:20:40.057 "data_offset": 0, 00:20:40.057 "data_size": 65536 00:20:40.057 }, 00:20:40.057 { 00:20:40.057 "name": "BaseBdev3", 00:20:40.057 "uuid": "b4af2455-2645-4ada-89a6-2f1ec4eab463", 00:20:40.057 "is_configured": true, 00:20:40.057 "data_offset": 0, 00:20:40.057 "data_size": 65536 00:20:40.057 } 00:20:40.057 ] 00:20:40.057 }' 00:20:40.057 00:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:40.057 00:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.993 00:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.993 00:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:40.993 00:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:40.993 00:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:41.252 [2024-07-25 00:47:03.723953] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:41.252 00:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:41.252 00:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:41.252 00:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:41.252 00:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:41.252 00:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:41.252 00:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:41.252 00:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:41.252 00:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:41.252 00:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:41.252 00:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:41.252 00:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.252 00:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:41.510 00:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:41.510 "name": "Existed_Raid", 00:20:41.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.510 "strip_size_kb": 64, 00:20:41.510 "state": "configuring", 00:20:41.510 "raid_level": "concat", 00:20:41.510 "superblock": false, 00:20:41.510 "num_base_bdevs": 3, 00:20:41.510 "num_base_bdevs_discovered": 1, 00:20:41.510 "num_base_bdevs_operational": 3, 00:20:41.510 "base_bdevs_list": [ 00:20:41.510 { 00:20:41.510 "name": "BaseBdev1", 00:20:41.510 "uuid": "4d967910-d673-40b2-aa4f-7986d55f7535", 00:20:41.510 "is_configured": 
true, 00:20:41.510 "data_offset": 0, 00:20:41.510 "data_size": 65536 00:20:41.510 }, 00:20:41.510 { 00:20:41.510 "name": null, 00:20:41.510 "uuid": "9022d05d-5ec5-4055-ba73-4c4df7efaba8", 00:20:41.510 "is_configured": false, 00:20:41.510 "data_offset": 0, 00:20:41.510 "data_size": 65536 00:20:41.510 }, 00:20:41.510 { 00:20:41.510 "name": null, 00:20:41.510 "uuid": "b4af2455-2645-4ada-89a6-2f1ec4eab463", 00:20:41.510 "is_configured": false, 00:20:41.510 "data_offset": 0, 00:20:41.510 "data_size": 65536 00:20:41.510 } 00:20:41.510 ] 00:20:41.510 }' 00:20:41.510 00:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:41.510 00:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.078 00:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.078 00:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:42.336 00:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:42.336 00:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:42.595 [2024-07-25 00:47:05.004196] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:42.595 00:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:42.595 00:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:42.595 00:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:42.595 00:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:42.595 00:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:42.595 00:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:42.595 00:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:42.595 00:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:42.595 00:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:42.595 00:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:42.595 00:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.595 00:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:42.854 00:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:42.854 "name": "Existed_Raid", 00:20:42.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.854 "strip_size_kb": 64, 00:20:42.854 "state": "configuring", 00:20:42.854 "raid_level": "concat", 00:20:42.854 "superblock": false, 00:20:42.854 "num_base_bdevs": 3, 00:20:42.854 "num_base_bdevs_discovered": 2, 00:20:42.854 "num_base_bdevs_operational": 3, 00:20:42.854 "base_bdevs_list": [ 00:20:42.854 { 00:20:42.854 "name": "BaseBdev1", 00:20:42.854 
"uuid": "4d967910-d673-40b2-aa4f-7986d55f7535", 00:20:42.854 "is_configured": true, 00:20:42.854 "data_offset": 0, 00:20:42.854 "data_size": 65536 00:20:42.854 }, 00:20:42.854 { 00:20:42.854 "name": null, 00:20:42.854 "uuid": "9022d05d-5ec5-4055-ba73-4c4df7efaba8", 00:20:42.854 "is_configured": false, 00:20:42.854 "data_offset": 0, 00:20:42.854 "data_size": 65536 00:20:42.854 }, 00:20:42.854 { 00:20:42.854 "name": "BaseBdev3", 00:20:42.854 "uuid": "b4af2455-2645-4ada-89a6-2f1ec4eab463", 00:20:42.854 "is_configured": true, 00:20:42.854 "data_offset": 0, 00:20:42.854 "data_size": 65536 00:20:42.854 } 00:20:42.854 ] 00:20:42.854 }' 00:20:42.854 00:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:42.854 00:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.422 00:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.422 00:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:43.422 00:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:43.422 00:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:43.681 [2024-07-25 00:47:06.102740] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:43.681 00:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:43.681 00:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:43.681 00:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:43.681 00:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:43.681 00:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:43.681 00:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:43.681 00:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:43.681 00:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:43.681 00:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:43.681 00:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:43.681 00:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.681 00:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:43.940 00:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:43.940 "name": "Existed_Raid", 00:20:43.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.940 "strip_size_kb": 64, 00:20:43.940 "state": "configuring", 00:20:43.940 "raid_level": "concat", 00:20:43.940 "superblock": false, 00:20:43.940 "num_base_bdevs": 3, 00:20:43.940 "num_base_bdevs_discovered": 1, 00:20:43.940 "num_base_bdevs_operational": 3, 00:20:43.940 "base_bdevs_list": [ 00:20:43.940 { 
00:20:43.940 "name": null, 00:20:43.940 "uuid": "4d967910-d673-40b2-aa4f-7986d55f7535", 00:20:43.940 "is_configured": false, 00:20:43.940 "data_offset": 0, 00:20:43.940 "data_size": 65536 00:20:43.940 }, 00:20:43.940 { 00:20:43.940 "name": null, 00:20:43.940 "uuid": "9022d05d-5ec5-4055-ba73-4c4df7efaba8", 00:20:43.940 "is_configured": false, 00:20:43.940 "data_offset": 0, 00:20:43.940 "data_size": 65536 00:20:43.940 }, 00:20:43.940 { 00:20:43.940 "name": "BaseBdev3", 00:20:43.940 "uuid": "b4af2455-2645-4ada-89a6-2f1ec4eab463", 00:20:43.940 "is_configured": true, 00:20:43.940 "data_offset": 0, 00:20:43.940 "data_size": 65536 00:20:43.940 } 00:20:43.940 ] 00:20:43.940 }' 00:20:43.940 00:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:43.940 00:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.507 00:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.507 00:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:44.507 00:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:44.507 00:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:44.766 [2024-07-25 00:47:07.275931] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:44.766 00:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:44.766 00:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:44.766 00:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:44.766 00:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:44.766 00:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:44.766 00:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:44.766 00:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:44.766 00:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:44.766 00:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:44.766 00:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:44.767 00:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:44.767 00:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.040 00:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:45.040 "name": "Existed_Raid", 00:20:45.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.040 "strip_size_kb": 64, 00:20:45.040 "state": "configuring", 00:20:45.040 "raid_level": "concat", 00:20:45.040 "superblock": false, 00:20:45.040 "num_base_bdevs": 3, 00:20:45.040 "num_base_bdevs_discovered": 2, 00:20:45.040 
"num_base_bdevs_operational": 3, 00:20:45.040 "base_bdevs_list": [ 00:20:45.040 { 00:20:45.040 "name": null, 00:20:45.040 "uuid": "4d967910-d673-40b2-aa4f-7986d55f7535", 00:20:45.040 "is_configured": false, 00:20:45.040 "data_offset": 0, 00:20:45.040 "data_size": 65536 00:20:45.040 }, 00:20:45.040 { 00:20:45.040 "name": "BaseBdev2", 00:20:45.040 "uuid": "9022d05d-5ec5-4055-ba73-4c4df7efaba8", 00:20:45.040 "is_configured": true, 00:20:45.040 "data_offset": 0, 00:20:45.040 "data_size": 65536 00:20:45.040 }, 00:20:45.040 { 00:20:45.040 "name": "BaseBdev3", 00:20:45.040 "uuid": "b4af2455-2645-4ada-89a6-2f1ec4eab463", 00:20:45.040 "is_configured": true, 00:20:45.040 "data_offset": 0, 00:20:45.040 "data_size": 65536 00:20:45.040 } 00:20:45.040 ] 00:20:45.040 }' 00:20:45.040 00:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:45.040 00:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.667 00:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.667 00:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:45.953 00:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:45.953 00:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.953 00:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:45.953 00:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 4d967910-d673-40b2-aa4f-7986d55f7535 00:20:46.212 [2024-07-25 00:47:08.786094] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:46.212 [2024-07-25 00:47:08.786334] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:20:46.212 [2024-07-25 00:47:08.786374] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:46.212 [2024-07-25 00:47:08.786962] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:46.212 [2024-07-25 00:47:08.788277] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:20:46.212 [2024-07-25 00:47:08.788582] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008a80 00:20:46.212 [2024-07-25 00:47:08.789499] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:46.212 NewBaseBdev 00:20:46.212 00:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:46.212 00:47:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:20:46.212 00:47:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:46.212 00:47:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:46.212 00:47:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:46.212 00:47:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:46.212 
00:47:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:46.471 00:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:46.730 [ 00:20:46.730 { 00:20:46.730 "name": "NewBaseBdev", 00:20:46.730 "aliases": [ 00:20:46.730 "4d967910-d673-40b2-aa4f-7986d55f7535" 00:20:46.730 ], 00:20:46.730 "product_name": "Malloc disk", 00:20:46.730 "block_size": 512, 00:20:46.730 "num_blocks": 65536, 00:20:46.730 "uuid": "4d967910-d673-40b2-aa4f-7986d55f7535", 00:20:46.730 "assigned_rate_limits": { 00:20:46.730 "rw_ios_per_sec": 0, 00:20:46.730 "rw_mbytes_per_sec": 0, 00:20:46.730 "r_mbytes_per_sec": 0, 00:20:46.730 "w_mbytes_per_sec": 0 00:20:46.730 }, 00:20:46.730 "claimed": true, 00:20:46.730 "claim_type": "exclusive_write", 00:20:46.730 "zoned": false, 00:20:46.730 "supported_io_types": { 00:20:46.730 "read": true, 00:20:46.730 "write": true, 00:20:46.730 "unmap": true, 00:20:46.730 "flush": true, 00:20:46.730 "reset": true, 00:20:46.730 "nvme_admin": false, 00:20:46.730 "nvme_io": false, 00:20:46.730 "nvme_io_md": false, 00:20:46.730 "write_zeroes": true, 00:20:46.730 "zcopy": true, 00:20:46.730 "get_zone_info": false, 00:20:46.730 "zone_management": false, 00:20:46.730 "zone_append": false, 00:20:46.730 "compare": false, 00:20:46.730 "compare_and_write": false, 00:20:46.730 "abort": true, 00:20:46.730 "seek_hole": false, 00:20:46.730 "seek_data": false, 00:20:46.730 "copy": true, 00:20:46.730 "nvme_iov_md": false 00:20:46.730 }, 00:20:46.730 "memory_domains": [ 00:20:46.730 { 00:20:46.730 "dma_device_id": "system", 00:20:46.730 "dma_device_type": 1 00:20:46.730 }, 00:20:46.730 { 00:20:46.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.730 "dma_device_type": 2 00:20:46.730 } 00:20:46.730 ], 00:20:46.730 "driver_specific": {} 00:20:46.730 } 00:20:46.730 ] 00:20:46.730 00:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:46.730 00:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:20:46.730 00:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:46.730 00:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:46.730 00:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:46.730 00:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:46.730 00:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:46.730 00:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:46.730 00:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:46.730 00:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:46.730 00:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:46.730 00:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.730 00:47:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:46.989 00:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:46.989 "name": "Existed_Raid", 00:20:46.989 "uuid": "4c55666c-0f58-45ef-8915-338a0b9b482c", 00:20:46.989 "strip_size_kb": 64, 00:20:46.989 "state": "online", 00:20:46.989 "raid_level": "concat", 00:20:46.989 "superblock": false, 00:20:46.989 "num_base_bdevs": 3, 00:20:46.989 "num_base_bdevs_discovered": 3, 00:20:46.989 "num_base_bdevs_operational": 3, 00:20:46.989 "base_bdevs_list": [ 00:20:46.989 { 00:20:46.989 "name": "NewBaseBdev", 00:20:46.989 "uuid": "4d967910-d673-40b2-aa4f-7986d55f7535", 00:20:46.989 "is_configured": true, 00:20:46.989 "data_offset": 0, 00:20:46.989 "data_size": 65536 00:20:46.989 }, 00:20:46.989 { 00:20:46.989 "name": "BaseBdev2", 00:20:46.989 "uuid": "9022d05d-5ec5-4055-ba73-4c4df7efaba8", 00:20:46.989 "is_configured": true, 00:20:46.989 "data_offset": 0, 00:20:46.989 "data_size": 65536 00:20:46.989 }, 00:20:46.989 { 00:20:46.989 "name": "BaseBdev3", 00:20:46.989 "uuid": "b4af2455-2645-4ada-89a6-2f1ec4eab463", 00:20:46.989 "is_configured": true, 00:20:46.989 "data_offset": 0, 00:20:46.989 "data_size": 65536 00:20:46.989 } 00:20:46.989 ] 00:20:46.990 }' 00:20:46.990 00:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:46.990 00:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.604 00:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:47.604 00:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:47.604 00:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:47.604 00:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:47.604 00:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:47.604 00:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:47.604 00:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:47.604 00:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:47.604 [2024-07-25 00:47:10.250784] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:47.863 00:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:47.863 "name": "Existed_Raid", 00:20:47.863 "aliases": [ 00:20:47.863 "4c55666c-0f58-45ef-8915-338a0b9b482c" 00:20:47.863 ], 00:20:47.863 "product_name": "Raid Volume", 00:20:47.863 "block_size": 512, 00:20:47.863 "num_blocks": 196608, 00:20:47.863 "uuid": "4c55666c-0f58-45ef-8915-338a0b9b482c", 00:20:47.863 "assigned_rate_limits": { 00:20:47.863 "rw_ios_per_sec": 0, 00:20:47.863 "rw_mbytes_per_sec": 0, 00:20:47.863 "r_mbytes_per_sec": 0, 00:20:47.863 "w_mbytes_per_sec": 0 00:20:47.863 }, 00:20:47.863 "claimed": false, 00:20:47.863 "zoned": false, 00:20:47.863 "supported_io_types": { 00:20:47.863 "read": true, 00:20:47.863 "write": true, 00:20:47.863 "unmap": true, 00:20:47.863 "flush": true, 00:20:47.863 "reset": true, 00:20:47.863 "nvme_admin": false, 00:20:47.863 "nvme_io": false, 00:20:47.863 "nvme_io_md": false, 00:20:47.863 "write_zeroes": true, 00:20:47.863 
"zcopy": false, 00:20:47.863 "get_zone_info": false, 00:20:47.863 "zone_management": false, 00:20:47.863 "zone_append": false, 00:20:47.863 "compare": false, 00:20:47.863 "compare_and_write": false, 00:20:47.863 "abort": false, 00:20:47.863 "seek_hole": false, 00:20:47.863 "seek_data": false, 00:20:47.863 "copy": false, 00:20:47.863 "nvme_iov_md": false 00:20:47.863 }, 00:20:47.863 "memory_domains": [ 00:20:47.863 { 00:20:47.863 "dma_device_id": "system", 00:20:47.863 "dma_device_type": 1 00:20:47.863 }, 00:20:47.863 { 00:20:47.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.863 "dma_device_type": 2 00:20:47.863 }, 00:20:47.863 { 00:20:47.863 "dma_device_id": "system", 00:20:47.863 "dma_device_type": 1 00:20:47.863 }, 00:20:47.863 { 00:20:47.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.863 "dma_device_type": 2 00:20:47.863 }, 00:20:47.863 { 00:20:47.863 "dma_device_id": "system", 00:20:47.863 "dma_device_type": 1 00:20:47.863 }, 00:20:47.863 { 00:20:47.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.863 "dma_device_type": 2 00:20:47.863 } 00:20:47.863 ], 00:20:47.863 "driver_specific": { 00:20:47.863 "raid": { 00:20:47.863 "uuid": "4c55666c-0f58-45ef-8915-338a0b9b482c", 00:20:47.863 "strip_size_kb": 64, 00:20:47.863 "state": "online", 00:20:47.864 "raid_level": "concat", 00:20:47.864 "superblock": false, 00:20:47.864 "num_base_bdevs": 3, 00:20:47.864 "num_base_bdevs_discovered": 3, 00:20:47.864 "num_base_bdevs_operational": 3, 00:20:47.864 "base_bdevs_list": [ 00:20:47.864 { 00:20:47.864 "name": "NewBaseBdev", 00:20:47.864 "uuid": "4d967910-d673-40b2-aa4f-7986d55f7535", 00:20:47.864 "is_configured": true, 00:20:47.864 "data_offset": 0, 00:20:47.864 "data_size": 65536 00:20:47.864 }, 00:20:47.864 { 00:20:47.864 "name": "BaseBdev2", 00:20:47.864 "uuid": "9022d05d-5ec5-4055-ba73-4c4df7efaba8", 00:20:47.864 "is_configured": true, 00:20:47.864 "data_offset": 0, 00:20:47.864 "data_size": 65536 00:20:47.864 }, 00:20:47.864 { 00:20:47.864 "name": "BaseBdev3", 00:20:47.864 "uuid": "b4af2455-2645-4ada-89a6-2f1ec4eab463", 00:20:47.864 "is_configured": true, 00:20:47.864 "data_offset": 0, 00:20:47.864 "data_size": 65536 00:20:47.864 } 00:20:47.864 ] 00:20:47.864 } 00:20:47.864 } 00:20:47.864 }' 00:20:47.864 00:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:47.864 00:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:47.864 BaseBdev2 00:20:47.864 BaseBdev3' 00:20:47.864 00:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:47.864 00:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:47.864 00:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:48.122 00:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:48.122 "name": "NewBaseBdev", 00:20:48.122 "aliases": [ 00:20:48.122 "4d967910-d673-40b2-aa4f-7986d55f7535" 00:20:48.122 ], 00:20:48.122 "product_name": "Malloc disk", 00:20:48.122 "block_size": 512, 00:20:48.122 "num_blocks": 65536, 00:20:48.122 "uuid": "4d967910-d673-40b2-aa4f-7986d55f7535", 00:20:48.122 "assigned_rate_limits": { 00:20:48.122 "rw_ios_per_sec": 0, 00:20:48.122 "rw_mbytes_per_sec": 0, 00:20:48.122 "r_mbytes_per_sec": 0, 00:20:48.122 
"w_mbytes_per_sec": 0 00:20:48.122 }, 00:20:48.122 "claimed": true, 00:20:48.122 "claim_type": "exclusive_write", 00:20:48.122 "zoned": false, 00:20:48.122 "supported_io_types": { 00:20:48.122 "read": true, 00:20:48.122 "write": true, 00:20:48.122 "unmap": true, 00:20:48.122 "flush": true, 00:20:48.122 "reset": true, 00:20:48.122 "nvme_admin": false, 00:20:48.122 "nvme_io": false, 00:20:48.122 "nvme_io_md": false, 00:20:48.122 "write_zeroes": true, 00:20:48.122 "zcopy": true, 00:20:48.122 "get_zone_info": false, 00:20:48.122 "zone_management": false, 00:20:48.122 "zone_append": false, 00:20:48.122 "compare": false, 00:20:48.122 "compare_and_write": false, 00:20:48.122 "abort": true, 00:20:48.122 "seek_hole": false, 00:20:48.122 "seek_data": false, 00:20:48.122 "copy": true, 00:20:48.122 "nvme_iov_md": false 00:20:48.122 }, 00:20:48.122 "memory_domains": [ 00:20:48.122 { 00:20:48.122 "dma_device_id": "system", 00:20:48.122 "dma_device_type": 1 00:20:48.122 }, 00:20:48.122 { 00:20:48.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:48.122 "dma_device_type": 2 00:20:48.122 } 00:20:48.122 ], 00:20:48.122 "driver_specific": {} 00:20:48.122 }' 00:20:48.122 00:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:48.122 00:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:48.122 00:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:48.122 00:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:48.122 00:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:48.122 00:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:48.122 00:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:48.383 00:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:48.383 00:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:48.383 00:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:48.383 00:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:48.383 00:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:48.383 00:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:48.383 00:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:48.383 00:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:48.645 00:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:48.645 "name": "BaseBdev2", 00:20:48.645 "aliases": [ 00:20:48.645 "9022d05d-5ec5-4055-ba73-4c4df7efaba8" 00:20:48.645 ], 00:20:48.645 "product_name": "Malloc disk", 00:20:48.645 "block_size": 512, 00:20:48.645 "num_blocks": 65536, 00:20:48.645 "uuid": "9022d05d-5ec5-4055-ba73-4c4df7efaba8", 00:20:48.645 "assigned_rate_limits": { 00:20:48.645 "rw_ios_per_sec": 0, 00:20:48.645 "rw_mbytes_per_sec": 0, 00:20:48.645 "r_mbytes_per_sec": 0, 00:20:48.645 "w_mbytes_per_sec": 0 00:20:48.645 }, 00:20:48.645 "claimed": true, 00:20:48.645 "claim_type": "exclusive_write", 00:20:48.645 "zoned": false, 00:20:48.645 "supported_io_types": { 00:20:48.645 "read": 
true, 00:20:48.645 "write": true, 00:20:48.645 "unmap": true, 00:20:48.645 "flush": true, 00:20:48.645 "reset": true, 00:20:48.645 "nvme_admin": false, 00:20:48.645 "nvme_io": false, 00:20:48.645 "nvme_io_md": false, 00:20:48.645 "write_zeroes": true, 00:20:48.645 "zcopy": true, 00:20:48.645 "get_zone_info": false, 00:20:48.645 "zone_management": false, 00:20:48.645 "zone_append": false, 00:20:48.645 "compare": false, 00:20:48.645 "compare_and_write": false, 00:20:48.645 "abort": true, 00:20:48.645 "seek_hole": false, 00:20:48.645 "seek_data": false, 00:20:48.645 "copy": true, 00:20:48.645 "nvme_iov_md": false 00:20:48.645 }, 00:20:48.645 "memory_domains": [ 00:20:48.645 { 00:20:48.645 "dma_device_id": "system", 00:20:48.645 "dma_device_type": 1 00:20:48.645 }, 00:20:48.645 { 00:20:48.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:48.645 "dma_device_type": 2 00:20:48.645 } 00:20:48.645 ], 00:20:48.645 "driver_specific": {} 00:20:48.645 }' 00:20:48.645 00:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:48.645 00:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:48.645 00:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:48.645 00:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:48.904 00:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:48.904 00:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:48.904 00:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:48.904 00:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:48.904 00:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:48.904 00:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:48.904 00:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:48.904 00:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:48.904 00:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:48.904 00:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:48.904 00:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:49.163 00:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:49.163 "name": "BaseBdev3", 00:20:49.163 "aliases": [ 00:20:49.163 "b4af2455-2645-4ada-89a6-2f1ec4eab463" 00:20:49.163 ], 00:20:49.163 "product_name": "Malloc disk", 00:20:49.163 "block_size": 512, 00:20:49.163 "num_blocks": 65536, 00:20:49.163 "uuid": "b4af2455-2645-4ada-89a6-2f1ec4eab463", 00:20:49.163 "assigned_rate_limits": { 00:20:49.163 "rw_ios_per_sec": 0, 00:20:49.163 "rw_mbytes_per_sec": 0, 00:20:49.163 "r_mbytes_per_sec": 0, 00:20:49.163 "w_mbytes_per_sec": 0 00:20:49.163 }, 00:20:49.163 "claimed": true, 00:20:49.163 "claim_type": "exclusive_write", 00:20:49.163 "zoned": false, 00:20:49.163 "supported_io_types": { 00:20:49.163 "read": true, 00:20:49.163 "write": true, 00:20:49.163 "unmap": true, 00:20:49.163 "flush": true, 00:20:49.163 "reset": true, 00:20:49.163 "nvme_admin": false, 00:20:49.163 "nvme_io": false, 00:20:49.163 
"nvme_io_md": false, 00:20:49.163 "write_zeroes": true, 00:20:49.163 "zcopy": true, 00:20:49.163 "get_zone_info": false, 00:20:49.163 "zone_management": false, 00:20:49.163 "zone_append": false, 00:20:49.163 "compare": false, 00:20:49.163 "compare_and_write": false, 00:20:49.163 "abort": true, 00:20:49.163 "seek_hole": false, 00:20:49.163 "seek_data": false, 00:20:49.163 "copy": true, 00:20:49.163 "nvme_iov_md": false 00:20:49.163 }, 00:20:49.163 "memory_domains": [ 00:20:49.163 { 00:20:49.163 "dma_device_id": "system", 00:20:49.163 "dma_device_type": 1 00:20:49.163 }, 00:20:49.163 { 00:20:49.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:49.163 "dma_device_type": 2 00:20:49.163 } 00:20:49.163 ], 00:20:49.163 "driver_specific": {} 00:20:49.163 }' 00:20:49.163 00:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:49.163 00:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:49.163 00:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:49.164 00:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:49.164 00:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:49.423 00:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:49.423 00:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:49.423 00:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:49.423 00:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:49.423 00:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:49.423 00:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:49.423 00:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:49.423 00:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:49.683 [2024-07-25 00:47:12.174786] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:49.683 [2024-07-25 00:47:12.174933] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:49.683 [2024-07-25 00:47:12.175124] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:49.683 [2024-07-25 00:47:12.175214] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:49.683 [2024-07-25 00:47:12.175391] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name Existed_Raid, state offline 00:20:49.683 00:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 129076 00:20:49.683 00:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 129076 ']' 00:20:49.683 00:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 129076 00:20:49.683 00:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:20:49.683 00:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:49.683 00:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 129076 00:20:49.683 
00:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:49.683 00:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:49.683 00:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 129076' 00:20:49.683 killing process with pid 129076 00:20:49.683 00:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 129076 00:20:49.683 [2024-07-25 00:47:12.218911] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:49.683 00:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 129076 00:20:49.942 [2024-07-25 00:47:12.518096] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:20:51.320 00:20:51.320 real 0m27.825s 00:20:51.320 user 0m50.072s 00:20:51.320 sys 0m4.081s 00:20:51.320 ************************************ 00:20:51.320 END TEST raid_state_function_test 00:20:51.320 ************************************ 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.320 00:47:13 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:20:51.320 00:47:13 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:51.320 00:47:13 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:51.320 00:47:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:51.320 ************************************ 00:20:51.320 START TEST raid_state_function_test_sb 00:20:51.320 ************************************ 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 true 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # 
(( i++ )) 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=130036 00:20:51.320 00:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 130036' 00:20:51.320 Process raid pid: 130036 00:20:51.321 00:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 130036 /var/tmp/spdk-raid.sock 00:20:51.321 00:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:51.321 00:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 130036 ']' 00:20:51.321 00:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:51.321 00:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:51.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:51.321 00:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:51.321 00:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:51.321 00:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.579 [2024-07-25 00:47:13.989529] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
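The lines above amount to starting a bare bdev_svc app with raid debug logging on a private RPC socket and waiting for it to answer before any raid RPCs are issued. A minimal standalone sketch of that startup step (the rpc_get_methods polling loop is an illustrative stand-in for the autotest waitforlisten helper, not the helper itself):

    # Start the SPDK bdev_svc application with bdev_raid debug logging on a private RPC socket
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # Poll the socket until the app responds to RPCs (stand-in for waitforlisten)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
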
00:20:51.579 [2024-07-25 00:47:13.989891] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.579 [2024-07-25 00:47:14.149042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.838 [2024-07-25 00:47:14.344069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.097 [2024-07-25 00:47:14.549692] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:52.355 00:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:52.355 00:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:20:52.355 00:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:52.613 [2024-07-25 00:47:15.205114] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:52.613 [2024-07-25 00:47:15.205388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:52.613 [2024-07-25 00:47:15.205468] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:52.613 [2024-07-25 00:47:15.205524] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:52.613 [2024-07-25 00:47:15.205551] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:52.613 [2024-07-25 00:47:15.205632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:52.613 00:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:52.613 00:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:52.613 00:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:52.613 00:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:52.613 00:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:52.613 00:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:52.613 00:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:52.613 00:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:52.613 00:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:52.613 00:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:52.613 00:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.613 00:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:52.871 00:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:52.871 "name": "Existed_Raid", 00:20:52.871 "uuid": 
"83454109-1b82-4f92-99ad-8a492e418e80", 00:20:52.871 "strip_size_kb": 64, 00:20:52.871 "state": "configuring", 00:20:52.871 "raid_level": "concat", 00:20:52.871 "superblock": true, 00:20:52.871 "num_base_bdevs": 3, 00:20:52.871 "num_base_bdevs_discovered": 0, 00:20:52.871 "num_base_bdevs_operational": 3, 00:20:52.871 "base_bdevs_list": [ 00:20:52.871 { 00:20:52.871 "name": "BaseBdev1", 00:20:52.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.871 "is_configured": false, 00:20:52.871 "data_offset": 0, 00:20:52.871 "data_size": 0 00:20:52.871 }, 00:20:52.871 { 00:20:52.871 "name": "BaseBdev2", 00:20:52.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.871 "is_configured": false, 00:20:52.871 "data_offset": 0, 00:20:52.871 "data_size": 0 00:20:52.871 }, 00:20:52.871 { 00:20:52.871 "name": "BaseBdev3", 00:20:52.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.871 "is_configured": false, 00:20:52.871 "data_offset": 0, 00:20:52.871 "data_size": 0 00:20:52.871 } 00:20:52.871 ] 00:20:52.871 }' 00:20:52.871 00:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:52.871 00:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.437 00:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:53.696 [2024-07-25 00:47:16.197240] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:53.696 [2024-07-25 00:47:16.197463] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:20:53.696 00:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:53.954 [2024-07-25 00:47:16.469268] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:53.954 [2024-07-25 00:47:16.469440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:53.954 [2024-07-25 00:47:16.469545] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:53.954 [2024-07-25 00:47:16.469595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:53.954 [2024-07-25 00:47:16.469779] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:53.954 [2024-07-25 00:47:16.469828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:53.954 00:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:54.213 [2024-07-25 00:47:16.674520] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:54.213 BaseBdev1 00:20:54.213 00:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:54.213 00:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:54.213 00:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:54.213 00:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 
00:20:54.213 00:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:54.213 00:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:54.213 00:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:54.471 00:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:54.471 [ 00:20:54.471 { 00:20:54.471 "name": "BaseBdev1", 00:20:54.471 "aliases": [ 00:20:54.471 "9940c392-bd03-45fb-b180-37f689aecf5e" 00:20:54.471 ], 00:20:54.471 "product_name": "Malloc disk", 00:20:54.471 "block_size": 512, 00:20:54.471 "num_blocks": 65536, 00:20:54.471 "uuid": "9940c392-bd03-45fb-b180-37f689aecf5e", 00:20:54.471 "assigned_rate_limits": { 00:20:54.471 "rw_ios_per_sec": 0, 00:20:54.472 "rw_mbytes_per_sec": 0, 00:20:54.472 "r_mbytes_per_sec": 0, 00:20:54.472 "w_mbytes_per_sec": 0 00:20:54.472 }, 00:20:54.472 "claimed": true, 00:20:54.472 "claim_type": "exclusive_write", 00:20:54.472 "zoned": false, 00:20:54.472 "supported_io_types": { 00:20:54.472 "read": true, 00:20:54.472 "write": true, 00:20:54.472 "unmap": true, 00:20:54.472 "flush": true, 00:20:54.472 "reset": true, 00:20:54.472 "nvme_admin": false, 00:20:54.472 "nvme_io": false, 00:20:54.472 "nvme_io_md": false, 00:20:54.472 "write_zeroes": true, 00:20:54.472 "zcopy": true, 00:20:54.472 "get_zone_info": false, 00:20:54.472 "zone_management": false, 00:20:54.472 "zone_append": false, 00:20:54.472 "compare": false, 00:20:54.472 "compare_and_write": false, 00:20:54.472 "abort": true, 00:20:54.472 "seek_hole": false, 00:20:54.472 "seek_data": false, 00:20:54.472 "copy": true, 00:20:54.472 "nvme_iov_md": false 00:20:54.472 }, 00:20:54.472 "memory_domains": [ 00:20:54.472 { 00:20:54.472 "dma_device_id": "system", 00:20:54.472 "dma_device_type": 1 00:20:54.472 }, 00:20:54.472 { 00:20:54.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.472 "dma_device_type": 2 00:20:54.472 } 00:20:54.472 ], 00:20:54.472 "driver_specific": {} 00:20:54.472 } 00:20:54.472 ] 00:20:54.472 00:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:54.472 00:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:54.472 00:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:54.472 00:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:54.472 00:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:54.472 00:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:54.472 00:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:54.472 00:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:54.472 00:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:54.472 00:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:54.472 00:47:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:20:54.472 00:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.472 00:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:54.731 00:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:54.731 "name": "Existed_Raid", 00:20:54.731 "uuid": "9560d9fd-17a3-47ee-82a4-17a84db7b73d", 00:20:54.731 "strip_size_kb": 64, 00:20:54.731 "state": "configuring", 00:20:54.731 "raid_level": "concat", 00:20:54.731 "superblock": true, 00:20:54.731 "num_base_bdevs": 3, 00:20:54.731 "num_base_bdevs_discovered": 1, 00:20:54.731 "num_base_bdevs_operational": 3, 00:20:54.731 "base_bdevs_list": [ 00:20:54.731 { 00:20:54.731 "name": "BaseBdev1", 00:20:54.731 "uuid": "9940c392-bd03-45fb-b180-37f689aecf5e", 00:20:54.731 "is_configured": true, 00:20:54.731 "data_offset": 2048, 00:20:54.731 "data_size": 63488 00:20:54.731 }, 00:20:54.731 { 00:20:54.731 "name": "BaseBdev2", 00:20:54.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.731 "is_configured": false, 00:20:54.731 "data_offset": 0, 00:20:54.731 "data_size": 0 00:20:54.731 }, 00:20:54.731 { 00:20:54.731 "name": "BaseBdev3", 00:20:54.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.731 "is_configured": false, 00:20:54.731 "data_offset": 0, 00:20:54.731 "data_size": 0 00:20:54.731 } 00:20:54.731 ] 00:20:54.731 }' 00:20:54.731 00:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:54.731 00:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.297 00:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:55.556 [2024-07-25 00:47:18.015028] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:55.556 [2024-07-25 00:47:18.015223] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:20:55.556 00:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:55.815 [2024-07-25 00:47:18.287113] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:55.815 [2024-07-25 00:47:18.289078] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:55.815 [2024-07-25 00:47:18.289245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:55.815 [2024-07-25 00:47:18.289333] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:55.815 [2024-07-25 00:47:18.289403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:55.815 00:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:55.815 00:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:55.815 00:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:55.815 00:47:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:55.815 00:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:55.815 00:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:55.815 00:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:55.815 00:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:55.815 00:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:55.815 00:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:55.815 00:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:55.815 00:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:55.815 00:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.815 00:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:56.074 00:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:56.074 "name": "Existed_Raid", 00:20:56.074 "uuid": "47490283-d563-407a-a37d-69885ddca766", 00:20:56.074 "strip_size_kb": 64, 00:20:56.074 "state": "configuring", 00:20:56.074 "raid_level": "concat", 00:20:56.074 "superblock": true, 00:20:56.074 "num_base_bdevs": 3, 00:20:56.074 "num_base_bdevs_discovered": 1, 00:20:56.074 "num_base_bdevs_operational": 3, 00:20:56.074 "base_bdevs_list": [ 00:20:56.074 { 00:20:56.074 "name": "BaseBdev1", 00:20:56.074 "uuid": "9940c392-bd03-45fb-b180-37f689aecf5e", 00:20:56.074 "is_configured": true, 00:20:56.074 "data_offset": 2048, 00:20:56.074 "data_size": 63488 00:20:56.074 }, 00:20:56.074 { 00:20:56.074 "name": "BaseBdev2", 00:20:56.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.074 "is_configured": false, 00:20:56.074 "data_offset": 0, 00:20:56.074 "data_size": 0 00:20:56.074 }, 00:20:56.074 { 00:20:56.074 "name": "BaseBdev3", 00:20:56.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.075 "is_configured": false, 00:20:56.075 "data_offset": 0, 00:20:56.075 "data_size": 0 00:20:56.075 } 00:20:56.075 ] 00:20:56.075 }' 00:20:56.075 00:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:56.075 00:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.643 00:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:56.902 [2024-07-25 00:47:19.392921] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:56.902 BaseBdev2 00:20:56.902 00:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:56.902 00:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:56.902 00:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:56.902 00:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- 
# local i 00:20:56.902 00:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:56.902 00:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:56.902 00:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:57.161 00:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:57.420 [ 00:20:57.420 { 00:20:57.420 "name": "BaseBdev2", 00:20:57.420 "aliases": [ 00:20:57.420 "e64b6000-5c75-4f35-8833-482f38556b9e" 00:20:57.420 ], 00:20:57.420 "product_name": "Malloc disk", 00:20:57.420 "block_size": 512, 00:20:57.420 "num_blocks": 65536, 00:20:57.420 "uuid": "e64b6000-5c75-4f35-8833-482f38556b9e", 00:20:57.420 "assigned_rate_limits": { 00:20:57.420 "rw_ios_per_sec": 0, 00:20:57.420 "rw_mbytes_per_sec": 0, 00:20:57.420 "r_mbytes_per_sec": 0, 00:20:57.420 "w_mbytes_per_sec": 0 00:20:57.421 }, 00:20:57.421 "claimed": true, 00:20:57.421 "claim_type": "exclusive_write", 00:20:57.421 "zoned": false, 00:20:57.421 "supported_io_types": { 00:20:57.421 "read": true, 00:20:57.421 "write": true, 00:20:57.421 "unmap": true, 00:20:57.421 "flush": true, 00:20:57.421 "reset": true, 00:20:57.421 "nvme_admin": false, 00:20:57.421 "nvme_io": false, 00:20:57.421 "nvme_io_md": false, 00:20:57.421 "write_zeroes": true, 00:20:57.421 "zcopy": true, 00:20:57.421 "get_zone_info": false, 00:20:57.421 "zone_management": false, 00:20:57.421 "zone_append": false, 00:20:57.421 "compare": false, 00:20:57.421 "compare_and_write": false, 00:20:57.421 "abort": true, 00:20:57.421 "seek_hole": false, 00:20:57.421 "seek_data": false, 00:20:57.421 "copy": true, 00:20:57.421 "nvme_iov_md": false 00:20:57.421 }, 00:20:57.421 "memory_domains": [ 00:20:57.421 { 00:20:57.421 "dma_device_id": "system", 00:20:57.421 "dma_device_type": 1 00:20:57.421 }, 00:20:57.421 { 00:20:57.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.421 "dma_device_type": 2 00:20:57.421 } 00:20:57.421 ], 00:20:57.421 "driver_specific": {} 00:20:57.421 } 00:20:57.421 ] 00:20:57.421 00:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:57.421 00:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:57.421 00:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:57.421 00:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:57.421 00:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:57.421 00:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:57.421 00:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:57.421 00:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:57.421 00:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:57.421 00:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:57.421 00:47:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:57.421 00:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:57.421 00:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:57.421 00:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.421 00:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:57.680 00:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:57.680 "name": "Existed_Raid", 00:20:57.680 "uuid": "47490283-d563-407a-a37d-69885ddca766", 00:20:57.680 "strip_size_kb": 64, 00:20:57.680 "state": "configuring", 00:20:57.680 "raid_level": "concat", 00:20:57.680 "superblock": true, 00:20:57.680 "num_base_bdevs": 3, 00:20:57.680 "num_base_bdevs_discovered": 2, 00:20:57.680 "num_base_bdevs_operational": 3, 00:20:57.680 "base_bdevs_list": [ 00:20:57.680 { 00:20:57.680 "name": "BaseBdev1", 00:20:57.680 "uuid": "9940c392-bd03-45fb-b180-37f689aecf5e", 00:20:57.680 "is_configured": true, 00:20:57.680 "data_offset": 2048, 00:20:57.680 "data_size": 63488 00:20:57.680 }, 00:20:57.680 { 00:20:57.680 "name": "BaseBdev2", 00:20:57.680 "uuid": "e64b6000-5c75-4f35-8833-482f38556b9e", 00:20:57.680 "is_configured": true, 00:20:57.680 "data_offset": 2048, 00:20:57.680 "data_size": 63488 00:20:57.680 }, 00:20:57.680 { 00:20:57.680 "name": "BaseBdev3", 00:20:57.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.680 "is_configured": false, 00:20:57.680 "data_offset": 0, 00:20:57.680 "data_size": 0 00:20:57.680 } 00:20:57.680 ] 00:20:57.680 }' 00:20:57.680 00:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:57.680 00:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.249 00:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:58.508 [2024-07-25 00:47:21.081447] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:58.508 [2024-07-25 00:47:21.081873] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:20:58.508 [2024-07-25 00:47:21.082031] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:58.508 [2024-07-25 00:47:21.082178] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:20:58.508 [2024-07-25 00:47:21.082606] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:20:58.508 [2024-07-25 00:47:21.082649] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:20:58.508 [2024-07-25 00:47:21.082904] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:58.508 BaseBdev3 00:20:58.508 00:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:58.508 00:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:58.508 00:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:58.508 00:47:21 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@899 -- # local i 00:20:58.508 00:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:58.508 00:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:58.508 00:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:58.767 00:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:59.026 [ 00:20:59.026 { 00:20:59.026 "name": "BaseBdev3", 00:20:59.026 "aliases": [ 00:20:59.026 "59805cde-a5ec-47cc-b303-1a0072e187ba" 00:20:59.026 ], 00:20:59.026 "product_name": "Malloc disk", 00:20:59.026 "block_size": 512, 00:20:59.026 "num_blocks": 65536, 00:20:59.026 "uuid": "59805cde-a5ec-47cc-b303-1a0072e187ba", 00:20:59.026 "assigned_rate_limits": { 00:20:59.026 "rw_ios_per_sec": 0, 00:20:59.026 "rw_mbytes_per_sec": 0, 00:20:59.026 "r_mbytes_per_sec": 0, 00:20:59.026 "w_mbytes_per_sec": 0 00:20:59.026 }, 00:20:59.026 "claimed": true, 00:20:59.026 "claim_type": "exclusive_write", 00:20:59.026 "zoned": false, 00:20:59.026 "supported_io_types": { 00:20:59.026 "read": true, 00:20:59.026 "write": true, 00:20:59.026 "unmap": true, 00:20:59.026 "flush": true, 00:20:59.026 "reset": true, 00:20:59.026 "nvme_admin": false, 00:20:59.026 "nvme_io": false, 00:20:59.026 "nvme_io_md": false, 00:20:59.026 "write_zeroes": true, 00:20:59.026 "zcopy": true, 00:20:59.026 "get_zone_info": false, 00:20:59.026 "zone_management": false, 00:20:59.026 "zone_append": false, 00:20:59.026 "compare": false, 00:20:59.026 "compare_and_write": false, 00:20:59.026 "abort": true, 00:20:59.026 "seek_hole": false, 00:20:59.026 "seek_data": false, 00:20:59.026 "copy": true, 00:20:59.026 "nvme_iov_md": false 00:20:59.026 }, 00:20:59.026 "memory_domains": [ 00:20:59.026 { 00:20:59.026 "dma_device_id": "system", 00:20:59.026 "dma_device_type": 1 00:20:59.026 }, 00:20:59.026 { 00:20:59.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:59.026 "dma_device_type": 2 00:20:59.026 } 00:20:59.026 ], 00:20:59.026 "driver_specific": {} 00:20:59.026 } 00:20:59.026 ] 00:20:59.026 00:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:59.026 00:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:59.026 00:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:59.026 00:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:20:59.026 00:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:59.026 00:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:59.026 00:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:59.026 00:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:59.026 00:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:59.026 00:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:59.026 00:47:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:59.026 00:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:59.026 00:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:59.026 00:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:59.026 00:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.285 00:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:59.285 "name": "Existed_Raid", 00:20:59.285 "uuid": "47490283-d563-407a-a37d-69885ddca766", 00:20:59.285 "strip_size_kb": 64, 00:20:59.285 "state": "online", 00:20:59.285 "raid_level": "concat", 00:20:59.285 "superblock": true, 00:20:59.285 "num_base_bdevs": 3, 00:20:59.285 "num_base_bdevs_discovered": 3, 00:20:59.285 "num_base_bdevs_operational": 3, 00:20:59.285 "base_bdevs_list": [ 00:20:59.285 { 00:20:59.285 "name": "BaseBdev1", 00:20:59.285 "uuid": "9940c392-bd03-45fb-b180-37f689aecf5e", 00:20:59.285 "is_configured": true, 00:20:59.285 "data_offset": 2048, 00:20:59.285 "data_size": 63488 00:20:59.285 }, 00:20:59.285 { 00:20:59.285 "name": "BaseBdev2", 00:20:59.285 "uuid": "e64b6000-5c75-4f35-8833-482f38556b9e", 00:20:59.285 "is_configured": true, 00:20:59.285 "data_offset": 2048, 00:20:59.285 "data_size": 63488 00:20:59.285 }, 00:20:59.285 { 00:20:59.285 "name": "BaseBdev3", 00:20:59.285 "uuid": "59805cde-a5ec-47cc-b303-1a0072e187ba", 00:20:59.285 "is_configured": true, 00:20:59.285 "data_offset": 2048, 00:20:59.285 "data_size": 63488 00:20:59.285 } 00:20:59.285 ] 00:20:59.285 }' 00:20:59.285 00:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:59.285 00:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.853 00:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:59.853 00:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:59.853 00:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:59.853 00:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:59.853 00:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:59.853 00:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:59.853 00:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:59.853 00:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:00.112 [2024-07-25 00:47:22.630082] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:00.112 00:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:00.112 "name": "Existed_Raid", 00:21:00.112 "aliases": [ 00:21:00.112 "47490283-d563-407a-a37d-69885ddca766" 00:21:00.112 ], 00:21:00.112 "product_name": "Raid Volume", 00:21:00.112 "block_size": 512, 00:21:00.112 "num_blocks": 190464, 00:21:00.112 "uuid": 
"47490283-d563-407a-a37d-69885ddca766", 00:21:00.112 "assigned_rate_limits": { 00:21:00.112 "rw_ios_per_sec": 0, 00:21:00.112 "rw_mbytes_per_sec": 0, 00:21:00.112 "r_mbytes_per_sec": 0, 00:21:00.112 "w_mbytes_per_sec": 0 00:21:00.112 }, 00:21:00.112 "claimed": false, 00:21:00.112 "zoned": false, 00:21:00.112 "supported_io_types": { 00:21:00.112 "read": true, 00:21:00.112 "write": true, 00:21:00.112 "unmap": true, 00:21:00.112 "flush": true, 00:21:00.112 "reset": true, 00:21:00.112 "nvme_admin": false, 00:21:00.112 "nvme_io": false, 00:21:00.112 "nvme_io_md": false, 00:21:00.112 "write_zeroes": true, 00:21:00.112 "zcopy": false, 00:21:00.112 "get_zone_info": false, 00:21:00.112 "zone_management": false, 00:21:00.112 "zone_append": false, 00:21:00.112 "compare": false, 00:21:00.112 "compare_and_write": false, 00:21:00.112 "abort": false, 00:21:00.112 "seek_hole": false, 00:21:00.112 "seek_data": false, 00:21:00.112 "copy": false, 00:21:00.112 "nvme_iov_md": false 00:21:00.112 }, 00:21:00.112 "memory_domains": [ 00:21:00.112 { 00:21:00.112 "dma_device_id": "system", 00:21:00.112 "dma_device_type": 1 00:21:00.112 }, 00:21:00.112 { 00:21:00.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:00.112 "dma_device_type": 2 00:21:00.112 }, 00:21:00.112 { 00:21:00.112 "dma_device_id": "system", 00:21:00.112 "dma_device_type": 1 00:21:00.112 }, 00:21:00.112 { 00:21:00.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:00.112 "dma_device_type": 2 00:21:00.112 }, 00:21:00.112 { 00:21:00.112 "dma_device_id": "system", 00:21:00.112 "dma_device_type": 1 00:21:00.112 }, 00:21:00.112 { 00:21:00.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:00.112 "dma_device_type": 2 00:21:00.112 } 00:21:00.112 ], 00:21:00.112 "driver_specific": { 00:21:00.112 "raid": { 00:21:00.112 "uuid": "47490283-d563-407a-a37d-69885ddca766", 00:21:00.112 "strip_size_kb": 64, 00:21:00.112 "state": "online", 00:21:00.112 "raid_level": "concat", 00:21:00.112 "superblock": true, 00:21:00.112 "num_base_bdevs": 3, 00:21:00.112 "num_base_bdevs_discovered": 3, 00:21:00.112 "num_base_bdevs_operational": 3, 00:21:00.112 "base_bdevs_list": [ 00:21:00.112 { 00:21:00.112 "name": "BaseBdev1", 00:21:00.112 "uuid": "9940c392-bd03-45fb-b180-37f689aecf5e", 00:21:00.112 "is_configured": true, 00:21:00.112 "data_offset": 2048, 00:21:00.112 "data_size": 63488 00:21:00.112 }, 00:21:00.112 { 00:21:00.112 "name": "BaseBdev2", 00:21:00.112 "uuid": "e64b6000-5c75-4f35-8833-482f38556b9e", 00:21:00.112 "is_configured": true, 00:21:00.112 "data_offset": 2048, 00:21:00.112 "data_size": 63488 00:21:00.112 }, 00:21:00.112 { 00:21:00.112 "name": "BaseBdev3", 00:21:00.112 "uuid": "59805cde-a5ec-47cc-b303-1a0072e187ba", 00:21:00.112 "is_configured": true, 00:21:00.112 "data_offset": 2048, 00:21:00.112 "data_size": 63488 00:21:00.112 } 00:21:00.112 ] 00:21:00.112 } 00:21:00.112 } 00:21:00.112 }' 00:21:00.112 00:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:00.112 00:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:00.112 BaseBdev2 00:21:00.112 BaseBdev3' 00:21:00.112 00:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:00.112 00:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:00.112 00:47:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:00.371 00:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:00.371 "name": "BaseBdev1", 00:21:00.371 "aliases": [ 00:21:00.371 "9940c392-bd03-45fb-b180-37f689aecf5e" 00:21:00.371 ], 00:21:00.371 "product_name": "Malloc disk", 00:21:00.371 "block_size": 512, 00:21:00.371 "num_blocks": 65536, 00:21:00.371 "uuid": "9940c392-bd03-45fb-b180-37f689aecf5e", 00:21:00.371 "assigned_rate_limits": { 00:21:00.371 "rw_ios_per_sec": 0, 00:21:00.371 "rw_mbytes_per_sec": 0, 00:21:00.371 "r_mbytes_per_sec": 0, 00:21:00.371 "w_mbytes_per_sec": 0 00:21:00.371 }, 00:21:00.371 "claimed": true, 00:21:00.371 "claim_type": "exclusive_write", 00:21:00.371 "zoned": false, 00:21:00.371 "supported_io_types": { 00:21:00.371 "read": true, 00:21:00.371 "write": true, 00:21:00.371 "unmap": true, 00:21:00.371 "flush": true, 00:21:00.371 "reset": true, 00:21:00.371 "nvme_admin": false, 00:21:00.371 "nvme_io": false, 00:21:00.371 "nvme_io_md": false, 00:21:00.371 "write_zeroes": true, 00:21:00.371 "zcopy": true, 00:21:00.371 "get_zone_info": false, 00:21:00.371 "zone_management": false, 00:21:00.371 "zone_append": false, 00:21:00.371 "compare": false, 00:21:00.371 "compare_and_write": false, 00:21:00.371 "abort": true, 00:21:00.371 "seek_hole": false, 00:21:00.371 "seek_data": false, 00:21:00.371 "copy": true, 00:21:00.371 "nvme_iov_md": false 00:21:00.371 }, 00:21:00.371 "memory_domains": [ 00:21:00.371 { 00:21:00.371 "dma_device_id": "system", 00:21:00.371 "dma_device_type": 1 00:21:00.371 }, 00:21:00.371 { 00:21:00.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:00.371 "dma_device_type": 2 00:21:00.371 } 00:21:00.371 ], 00:21:00.371 "driver_specific": {} 00:21:00.371 }' 00:21:00.371 00:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:00.371 00:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:00.371 00:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:00.371 00:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:00.371 00:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:00.630 00:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:00.630 00:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:00.630 00:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:00.630 00:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:00.630 00:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:00.630 00:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:00.630 00:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:00.630 00:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:00.630 00:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:00.630 00:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:00.888 00:47:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:00.888 "name": "BaseBdev2", 00:21:00.888 "aliases": [ 00:21:00.888 "e64b6000-5c75-4f35-8833-482f38556b9e" 00:21:00.888 ], 00:21:00.888 "product_name": "Malloc disk", 00:21:00.888 "block_size": 512, 00:21:00.888 "num_blocks": 65536, 00:21:00.888 "uuid": "e64b6000-5c75-4f35-8833-482f38556b9e", 00:21:00.888 "assigned_rate_limits": { 00:21:00.888 "rw_ios_per_sec": 0, 00:21:00.888 "rw_mbytes_per_sec": 0, 00:21:00.888 "r_mbytes_per_sec": 0, 00:21:00.888 "w_mbytes_per_sec": 0 00:21:00.888 }, 00:21:00.888 "claimed": true, 00:21:00.888 "claim_type": "exclusive_write", 00:21:00.888 "zoned": false, 00:21:00.888 "supported_io_types": { 00:21:00.888 "read": true, 00:21:00.888 "write": true, 00:21:00.888 "unmap": true, 00:21:00.888 "flush": true, 00:21:00.888 "reset": true, 00:21:00.888 "nvme_admin": false, 00:21:00.888 "nvme_io": false, 00:21:00.888 "nvme_io_md": false, 00:21:00.888 "write_zeroes": true, 00:21:00.888 "zcopy": true, 00:21:00.888 "get_zone_info": false, 00:21:00.888 "zone_management": false, 00:21:00.888 "zone_append": false, 00:21:00.888 "compare": false, 00:21:00.888 "compare_and_write": false, 00:21:00.888 "abort": true, 00:21:00.888 "seek_hole": false, 00:21:00.888 "seek_data": false, 00:21:00.888 "copy": true, 00:21:00.888 "nvme_iov_md": false 00:21:00.888 }, 00:21:00.888 "memory_domains": [ 00:21:00.888 { 00:21:00.888 "dma_device_id": "system", 00:21:00.888 "dma_device_type": 1 00:21:00.888 }, 00:21:00.888 { 00:21:00.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:00.888 "dma_device_type": 2 00:21:00.888 } 00:21:00.888 ], 00:21:00.888 "driver_specific": {} 00:21:00.888 }' 00:21:00.888 00:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:01.147 00:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:01.147 00:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:01.147 00:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:01.147 00:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:01.147 00:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:01.147 00:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:01.147 00:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:01.147 00:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:01.147 00:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:01.406 00:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:01.406 00:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:01.406 00:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:01.406 00:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:01.406 00:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:01.666 00:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:01.666 "name": "BaseBdev3", 00:21:01.666 "aliases": [ 00:21:01.666 
"59805cde-a5ec-47cc-b303-1a0072e187ba" 00:21:01.666 ], 00:21:01.666 "product_name": "Malloc disk", 00:21:01.666 "block_size": 512, 00:21:01.666 "num_blocks": 65536, 00:21:01.666 "uuid": "59805cde-a5ec-47cc-b303-1a0072e187ba", 00:21:01.666 "assigned_rate_limits": { 00:21:01.666 "rw_ios_per_sec": 0, 00:21:01.666 "rw_mbytes_per_sec": 0, 00:21:01.666 "r_mbytes_per_sec": 0, 00:21:01.666 "w_mbytes_per_sec": 0 00:21:01.666 }, 00:21:01.666 "claimed": true, 00:21:01.666 "claim_type": "exclusive_write", 00:21:01.666 "zoned": false, 00:21:01.666 "supported_io_types": { 00:21:01.666 "read": true, 00:21:01.666 "write": true, 00:21:01.666 "unmap": true, 00:21:01.666 "flush": true, 00:21:01.666 "reset": true, 00:21:01.666 "nvme_admin": false, 00:21:01.666 "nvme_io": false, 00:21:01.666 "nvme_io_md": false, 00:21:01.666 "write_zeroes": true, 00:21:01.666 "zcopy": true, 00:21:01.666 "get_zone_info": false, 00:21:01.666 "zone_management": false, 00:21:01.666 "zone_append": false, 00:21:01.666 "compare": false, 00:21:01.666 "compare_and_write": false, 00:21:01.666 "abort": true, 00:21:01.666 "seek_hole": false, 00:21:01.666 "seek_data": false, 00:21:01.666 "copy": true, 00:21:01.666 "nvme_iov_md": false 00:21:01.666 }, 00:21:01.666 "memory_domains": [ 00:21:01.666 { 00:21:01.666 "dma_device_id": "system", 00:21:01.666 "dma_device_type": 1 00:21:01.666 }, 00:21:01.666 { 00:21:01.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:01.666 "dma_device_type": 2 00:21:01.666 } 00:21:01.666 ], 00:21:01.666 "driver_specific": {} 00:21:01.666 }' 00:21:01.666 00:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:01.666 00:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:01.666 00:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:01.666 00:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:01.666 00:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:01.666 00:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:01.666 00:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:01.925 00:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:01.925 00:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:01.925 00:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:01.925 00:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:01.925 00:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:01.925 00:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:02.184 [2024-07-25 00:47:24.726174] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:02.184 [2024-07-25 00:47:24.726379] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:02.184 [2024-07-25 00:47:24.726555] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:02.506 00:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:02.506 00:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 
-- # has_redundancy concat 00:21:02.506 00:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:02.506 00:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:21:02.506 00:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:21:02.506 00:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:21:02.506 00:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:02.506 00:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:21:02.506 00:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:02.506 00:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:02.506 00:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:02.506 00:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:02.506 00:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:02.506 00:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:02.506 00:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:02.506 00:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.506 00:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:02.506 00:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:02.507 "name": "Existed_Raid", 00:21:02.507 "uuid": "47490283-d563-407a-a37d-69885ddca766", 00:21:02.507 "strip_size_kb": 64, 00:21:02.507 "state": "offline", 00:21:02.507 "raid_level": "concat", 00:21:02.507 "superblock": true, 00:21:02.507 "num_base_bdevs": 3, 00:21:02.507 "num_base_bdevs_discovered": 2, 00:21:02.507 "num_base_bdevs_operational": 2, 00:21:02.507 "base_bdevs_list": [ 00:21:02.507 { 00:21:02.507 "name": null, 00:21:02.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.507 "is_configured": false, 00:21:02.507 "data_offset": 2048, 00:21:02.507 "data_size": 63488 00:21:02.507 }, 00:21:02.507 { 00:21:02.507 "name": "BaseBdev2", 00:21:02.507 "uuid": "e64b6000-5c75-4f35-8833-482f38556b9e", 00:21:02.507 "is_configured": true, 00:21:02.507 "data_offset": 2048, 00:21:02.507 "data_size": 63488 00:21:02.507 }, 00:21:02.507 { 00:21:02.507 "name": "BaseBdev3", 00:21:02.507 "uuid": "59805cde-a5ec-47cc-b303-1a0072e187ba", 00:21:02.507 "is_configured": true, 00:21:02.507 "data_offset": 2048, 00:21:02.507 "data_size": 63488 00:21:02.507 } 00:21:02.507 ] 00:21:02.507 }' 00:21:02.507 00:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:02.507 00:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:03.075 00:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:03.075 00:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:03.075 00:47:25 bdev_raid.raid_state_function_test_sb -- 
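Because concat provides no redundancy, removing a single member must drop Existed_Raid from online to offline, which is what the expected_state=offline check above verifies. A small sketch of that check, built from the same RPCs the log shows (the jq expression and the final comparison are illustrative):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Pull one member out from under the running raid ...
    $rpc bdev_malloc_delete BaseBdev1
    # ... then confirm the non-redundant concat raid went offline.
    state=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state')
    [[ "$state" == "offline" ]]
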
bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:03.075 00:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.334 00:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:03.334 00:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:03.334 00:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:03.594 [2024-07-25 00:47:26.101807] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:03.594 00:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:03.594 00:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:03.594 00:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:03.594 00:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.853 00:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:03.853 00:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:03.853 00:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:04.112 [2024-07-25 00:47:26.662838] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:04.112 [2024-07-25 00:47:26.663018] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:21:04.371 00:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:04.371 00:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:04.371 00:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.371 00:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:04.371 00:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:04.371 00:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:04.371 00:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:21:04.371 00:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:21:04.371 00:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:04.371 00:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:04.630 BaseBdev2 00:21:04.630 00:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:21:04.630 00:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:04.630 00:47:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:04.630 00:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:04.630 00:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:04.630 00:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:04.630 00:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:04.890 00:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:05.149 [ 00:21:05.149 { 00:21:05.149 "name": "BaseBdev2", 00:21:05.149 "aliases": [ 00:21:05.149 "a95b9b91-5c77-4e8a-b9e5-1abf228e0e1e" 00:21:05.149 ], 00:21:05.149 "product_name": "Malloc disk", 00:21:05.149 "block_size": 512, 00:21:05.149 "num_blocks": 65536, 00:21:05.149 "uuid": "a95b9b91-5c77-4e8a-b9e5-1abf228e0e1e", 00:21:05.149 "assigned_rate_limits": { 00:21:05.149 "rw_ios_per_sec": 0, 00:21:05.149 "rw_mbytes_per_sec": 0, 00:21:05.149 "r_mbytes_per_sec": 0, 00:21:05.149 "w_mbytes_per_sec": 0 00:21:05.149 }, 00:21:05.149 "claimed": false, 00:21:05.149 "zoned": false, 00:21:05.149 "supported_io_types": { 00:21:05.149 "read": true, 00:21:05.149 "write": true, 00:21:05.149 "unmap": true, 00:21:05.149 "flush": true, 00:21:05.149 "reset": true, 00:21:05.149 "nvme_admin": false, 00:21:05.149 "nvme_io": false, 00:21:05.149 "nvme_io_md": false, 00:21:05.149 "write_zeroes": true, 00:21:05.149 "zcopy": true, 00:21:05.149 "get_zone_info": false, 00:21:05.149 "zone_management": false, 00:21:05.149 "zone_append": false, 00:21:05.149 "compare": false, 00:21:05.149 "compare_and_write": false, 00:21:05.149 "abort": true, 00:21:05.149 "seek_hole": false, 00:21:05.149 "seek_data": false, 00:21:05.149 "copy": true, 00:21:05.149 "nvme_iov_md": false 00:21:05.149 }, 00:21:05.149 "memory_domains": [ 00:21:05.149 { 00:21:05.149 "dma_device_id": "system", 00:21:05.149 "dma_device_type": 1 00:21:05.149 }, 00:21:05.149 { 00:21:05.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.149 "dma_device_type": 2 00:21:05.149 } 00:21:05.149 ], 00:21:05.149 "driver_specific": {} 00:21:05.149 } 00:21:05.149 ] 00:21:05.149 00:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:05.149 00:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:05.149 00:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:05.149 00:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:05.149 BaseBdev3 00:21:05.408 00:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:21:05.408 00:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:05.408 00:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:05.408 00:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:05.408 00:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:05.408 00:47:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:05.408 00:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:05.408 00:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:05.667 [ 00:21:05.667 { 00:21:05.667 "name": "BaseBdev3", 00:21:05.667 "aliases": [ 00:21:05.667 "e850fac7-0799-45eb-8505-79b8ebccb43c" 00:21:05.667 ], 00:21:05.667 "product_name": "Malloc disk", 00:21:05.668 "block_size": 512, 00:21:05.668 "num_blocks": 65536, 00:21:05.668 "uuid": "e850fac7-0799-45eb-8505-79b8ebccb43c", 00:21:05.668 "assigned_rate_limits": { 00:21:05.668 "rw_ios_per_sec": 0, 00:21:05.668 "rw_mbytes_per_sec": 0, 00:21:05.668 "r_mbytes_per_sec": 0, 00:21:05.668 "w_mbytes_per_sec": 0 00:21:05.668 }, 00:21:05.668 "claimed": false, 00:21:05.668 "zoned": false, 00:21:05.668 "supported_io_types": { 00:21:05.668 "read": true, 00:21:05.668 "write": true, 00:21:05.668 "unmap": true, 00:21:05.668 "flush": true, 00:21:05.668 "reset": true, 00:21:05.668 "nvme_admin": false, 00:21:05.668 "nvme_io": false, 00:21:05.668 "nvme_io_md": false, 00:21:05.668 "write_zeroes": true, 00:21:05.668 "zcopy": true, 00:21:05.668 "get_zone_info": false, 00:21:05.668 "zone_management": false, 00:21:05.668 "zone_append": false, 00:21:05.668 "compare": false, 00:21:05.668 "compare_and_write": false, 00:21:05.668 "abort": true, 00:21:05.668 "seek_hole": false, 00:21:05.668 "seek_data": false, 00:21:05.668 "copy": true, 00:21:05.668 "nvme_iov_md": false 00:21:05.668 }, 00:21:05.668 "memory_domains": [ 00:21:05.668 { 00:21:05.668 "dma_device_id": "system", 00:21:05.668 "dma_device_type": 1 00:21:05.668 }, 00:21:05.668 { 00:21:05.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.668 "dma_device_type": 2 00:21:05.668 } 00:21:05.668 ], 00:21:05.668 "driver_specific": {} 00:21:05.668 } 00:21:05.668 ] 00:21:05.668 00:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:05.668 00:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:05.668 00:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:05.668 00:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:05.927 [2024-07-25 00:47:28.332511] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:05.927 [2024-07-25 00:47:28.332682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:05.927 [2024-07-25 00:47:28.332848] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:05.927 [2024-07-25 00:47:28.334802] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:05.927 00:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:05.927 00:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:05.927 00:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # 
local expected_state=configuring 00:21:05.927 00:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:05.927 00:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:05.927 00:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:05.927 00:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:05.927 00:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:05.927 00:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:05.927 00:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:05.927 00:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.927 00:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:06.186 00:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:06.186 "name": "Existed_Raid", 00:21:06.186 "uuid": "e781a4e3-04e5-41ce-b504-264ddcb83be6", 00:21:06.186 "strip_size_kb": 64, 00:21:06.186 "state": "configuring", 00:21:06.186 "raid_level": "concat", 00:21:06.186 "superblock": true, 00:21:06.186 "num_base_bdevs": 3, 00:21:06.186 "num_base_bdevs_discovered": 2, 00:21:06.186 "num_base_bdevs_operational": 3, 00:21:06.186 "base_bdevs_list": [ 00:21:06.186 { 00:21:06.186 "name": "BaseBdev1", 00:21:06.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.186 "is_configured": false, 00:21:06.186 "data_offset": 0, 00:21:06.186 "data_size": 0 00:21:06.186 }, 00:21:06.186 { 00:21:06.186 "name": "BaseBdev2", 00:21:06.186 "uuid": "a95b9b91-5c77-4e8a-b9e5-1abf228e0e1e", 00:21:06.186 "is_configured": true, 00:21:06.186 "data_offset": 2048, 00:21:06.186 "data_size": 63488 00:21:06.186 }, 00:21:06.186 { 00:21:06.186 "name": "BaseBdev3", 00:21:06.187 "uuid": "e850fac7-0799-45eb-8505-79b8ebccb43c", 00:21:06.187 "is_configured": true, 00:21:06.187 "data_offset": 2048, 00:21:06.187 "data_size": 63488 00:21:06.187 } 00:21:06.187 ] 00:21:06.187 }' 00:21:06.187 00:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:06.187 00:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.446 00:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:06.705 [2024-07-25 00:47:29.332647] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:06.705 00:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:06.705 00:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:06.705 00:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:06.705 00:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:06.705 00:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:06.705 00:47:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:06.705 00:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:06.705 00:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:06.705 00:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:06.705 00:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:06.964 00:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.964 00:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:06.964 00:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:06.964 "name": "Existed_Raid", 00:21:06.964 "uuid": "e781a4e3-04e5-41ce-b504-264ddcb83be6", 00:21:06.964 "strip_size_kb": 64, 00:21:06.964 "state": "configuring", 00:21:06.964 "raid_level": "concat", 00:21:06.964 "superblock": true, 00:21:06.964 "num_base_bdevs": 3, 00:21:06.964 "num_base_bdevs_discovered": 1, 00:21:06.964 "num_base_bdevs_operational": 3, 00:21:06.964 "base_bdevs_list": [ 00:21:06.964 { 00:21:06.964 "name": "BaseBdev1", 00:21:06.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.965 "is_configured": false, 00:21:06.965 "data_offset": 0, 00:21:06.965 "data_size": 0 00:21:06.965 }, 00:21:06.965 { 00:21:06.965 "name": null, 00:21:06.965 "uuid": "a95b9b91-5c77-4e8a-b9e5-1abf228e0e1e", 00:21:06.965 "is_configured": false, 00:21:06.965 "data_offset": 2048, 00:21:06.965 "data_size": 63488 00:21:06.965 }, 00:21:06.965 { 00:21:06.965 "name": "BaseBdev3", 00:21:06.965 "uuid": "e850fac7-0799-45eb-8505-79b8ebccb43c", 00:21:06.965 "is_configured": true, 00:21:06.965 "data_offset": 2048, 00:21:06.965 "data_size": 63488 00:21:06.965 } 00:21:06.965 ] 00:21:06.965 }' 00:21:06.965 00:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:06.965 00:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.533 00:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.533 00:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:07.793 00:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:21:07.793 00:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:08.052 [2024-07-25 00:47:30.509579] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:08.052 BaseBdev1 00:21:08.052 00:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:21:08.052 00:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:08.052 00:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:08.052 00:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:08.052 00:47:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:08.052 00:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:08.052 00:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:08.052 00:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:08.312 [ 00:21:08.312 { 00:21:08.312 "name": "BaseBdev1", 00:21:08.312 "aliases": [ 00:21:08.312 "b3f63598-f5c0-4258-b7bc-fef10feb5dd9" 00:21:08.312 ], 00:21:08.312 "product_name": "Malloc disk", 00:21:08.312 "block_size": 512, 00:21:08.312 "num_blocks": 65536, 00:21:08.312 "uuid": "b3f63598-f5c0-4258-b7bc-fef10feb5dd9", 00:21:08.312 "assigned_rate_limits": { 00:21:08.312 "rw_ios_per_sec": 0, 00:21:08.312 "rw_mbytes_per_sec": 0, 00:21:08.312 "r_mbytes_per_sec": 0, 00:21:08.312 "w_mbytes_per_sec": 0 00:21:08.312 }, 00:21:08.312 "claimed": true, 00:21:08.312 "claim_type": "exclusive_write", 00:21:08.312 "zoned": false, 00:21:08.312 "supported_io_types": { 00:21:08.312 "read": true, 00:21:08.312 "write": true, 00:21:08.312 "unmap": true, 00:21:08.312 "flush": true, 00:21:08.312 "reset": true, 00:21:08.312 "nvme_admin": false, 00:21:08.312 "nvme_io": false, 00:21:08.312 "nvme_io_md": false, 00:21:08.312 "write_zeroes": true, 00:21:08.312 "zcopy": true, 00:21:08.312 "get_zone_info": false, 00:21:08.312 "zone_management": false, 00:21:08.312 "zone_append": false, 00:21:08.312 "compare": false, 00:21:08.312 "compare_and_write": false, 00:21:08.312 "abort": true, 00:21:08.312 "seek_hole": false, 00:21:08.312 "seek_data": false, 00:21:08.312 "copy": true, 00:21:08.312 "nvme_iov_md": false 00:21:08.312 }, 00:21:08.312 "memory_domains": [ 00:21:08.312 { 00:21:08.312 "dma_device_id": "system", 00:21:08.312 "dma_device_type": 1 00:21:08.312 }, 00:21:08.312 { 00:21:08.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:08.312 "dma_device_type": 2 00:21:08.312 } 00:21:08.312 ], 00:21:08.312 "driver_specific": {} 00:21:08.312 } 00:21:08.312 ] 00:21:08.312 00:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:08.312 00:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:08.312 00:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:08.312 00:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:08.313 00:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:08.313 00:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:08.313 00:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:08.313 00:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:08.313 00:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:08.313 00:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:08.313 00:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local 
tmp 00:21:08.313 00:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:08.313 00:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.572 00:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:08.572 "name": "Existed_Raid", 00:21:08.572 "uuid": "e781a4e3-04e5-41ce-b504-264ddcb83be6", 00:21:08.572 "strip_size_kb": 64, 00:21:08.572 "state": "configuring", 00:21:08.572 "raid_level": "concat", 00:21:08.572 "superblock": true, 00:21:08.572 "num_base_bdevs": 3, 00:21:08.572 "num_base_bdevs_discovered": 2, 00:21:08.572 "num_base_bdevs_operational": 3, 00:21:08.572 "base_bdevs_list": [ 00:21:08.572 { 00:21:08.572 "name": "BaseBdev1", 00:21:08.572 "uuid": "b3f63598-f5c0-4258-b7bc-fef10feb5dd9", 00:21:08.572 "is_configured": true, 00:21:08.572 "data_offset": 2048, 00:21:08.572 "data_size": 63488 00:21:08.572 }, 00:21:08.572 { 00:21:08.572 "name": null, 00:21:08.572 "uuid": "a95b9b91-5c77-4e8a-b9e5-1abf228e0e1e", 00:21:08.572 "is_configured": false, 00:21:08.572 "data_offset": 2048, 00:21:08.572 "data_size": 63488 00:21:08.572 }, 00:21:08.572 { 00:21:08.572 "name": "BaseBdev3", 00:21:08.572 "uuid": "e850fac7-0799-45eb-8505-79b8ebccb43c", 00:21:08.572 "is_configured": true, 00:21:08.572 "data_offset": 2048, 00:21:08.572 "data_size": 63488 00:21:08.572 } 00:21:08.572 ] 00:21:08.572 }' 00:21:08.572 00:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:08.572 00:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.141 00:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.141 00:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:09.400 00:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:21:09.400 00:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:09.660 [2024-07-25 00:47:32.141922] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:09.660 00:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:09.660 00:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:09.660 00:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:09.660 00:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:09.660 00:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:09.660 00:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:09.660 00:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:09.660 00:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:09.660 00:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:21:09.660 00:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:09.660 00:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.660 00:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:09.919 00:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:09.919 "name": "Existed_Raid", 00:21:09.919 "uuid": "e781a4e3-04e5-41ce-b504-264ddcb83be6", 00:21:09.919 "strip_size_kb": 64, 00:21:09.919 "state": "configuring", 00:21:09.919 "raid_level": "concat", 00:21:09.919 "superblock": true, 00:21:09.919 "num_base_bdevs": 3, 00:21:09.919 "num_base_bdevs_discovered": 1, 00:21:09.919 "num_base_bdevs_operational": 3, 00:21:09.919 "base_bdevs_list": [ 00:21:09.919 { 00:21:09.919 "name": "BaseBdev1", 00:21:09.919 "uuid": "b3f63598-f5c0-4258-b7bc-fef10feb5dd9", 00:21:09.919 "is_configured": true, 00:21:09.919 "data_offset": 2048, 00:21:09.919 "data_size": 63488 00:21:09.919 }, 00:21:09.919 { 00:21:09.919 "name": null, 00:21:09.919 "uuid": "a95b9b91-5c77-4e8a-b9e5-1abf228e0e1e", 00:21:09.919 "is_configured": false, 00:21:09.919 "data_offset": 2048, 00:21:09.919 "data_size": 63488 00:21:09.919 }, 00:21:09.919 { 00:21:09.919 "name": null, 00:21:09.919 "uuid": "e850fac7-0799-45eb-8505-79b8ebccb43c", 00:21:09.919 "is_configured": false, 00:21:09.919 "data_offset": 2048, 00:21:09.919 "data_size": 63488 00:21:09.919 } 00:21:09.919 ] 00:21:09.919 }' 00:21:09.919 00:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:09.919 00:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.488 00:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.488 00:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:10.488 00:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:21:10.488 00:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:10.748 [2024-07-25 00:47:33.218098] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:10.748 00:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:10.748 00:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:10.748 00:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:10.748 00:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:10.748 00:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:10.748 00:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:10.748 00:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:10.748 00:47:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:10.748 00:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:10.748 00:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:10.748 00:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.748 00:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:11.006 00:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:11.006 "name": "Existed_Raid", 00:21:11.006 "uuid": "e781a4e3-04e5-41ce-b504-264ddcb83be6", 00:21:11.006 "strip_size_kb": 64, 00:21:11.006 "state": "configuring", 00:21:11.006 "raid_level": "concat", 00:21:11.006 "superblock": true, 00:21:11.006 "num_base_bdevs": 3, 00:21:11.006 "num_base_bdevs_discovered": 2, 00:21:11.006 "num_base_bdevs_operational": 3, 00:21:11.006 "base_bdevs_list": [ 00:21:11.006 { 00:21:11.006 "name": "BaseBdev1", 00:21:11.006 "uuid": "b3f63598-f5c0-4258-b7bc-fef10feb5dd9", 00:21:11.006 "is_configured": true, 00:21:11.006 "data_offset": 2048, 00:21:11.006 "data_size": 63488 00:21:11.006 }, 00:21:11.006 { 00:21:11.006 "name": null, 00:21:11.006 "uuid": "a95b9b91-5c77-4e8a-b9e5-1abf228e0e1e", 00:21:11.006 "is_configured": false, 00:21:11.006 "data_offset": 2048, 00:21:11.006 "data_size": 63488 00:21:11.006 }, 00:21:11.006 { 00:21:11.006 "name": "BaseBdev3", 00:21:11.006 "uuid": "e850fac7-0799-45eb-8505-79b8ebccb43c", 00:21:11.006 "is_configured": true, 00:21:11.006 "data_offset": 2048, 00:21:11.006 "data_size": 63488 00:21:11.006 } 00:21:11.006 ] 00:21:11.006 }' 00:21:11.006 00:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:11.006 00:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.573 00:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.573 00:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:11.573 00:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:21:11.573 00:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:11.831 [2024-07-25 00:47:34.350345] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:11.831 00:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:11.831 00:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:11.831 00:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:11.831 00:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:11.831 00:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:11.831 00:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:11.831 00:47:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:11.831 00:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:11.831 00:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:11.831 00:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:11.831 00:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.831 00:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:12.089 00:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:12.089 "name": "Existed_Raid", 00:21:12.089 "uuid": "e781a4e3-04e5-41ce-b504-264ddcb83be6", 00:21:12.089 "strip_size_kb": 64, 00:21:12.089 "state": "configuring", 00:21:12.089 "raid_level": "concat", 00:21:12.089 "superblock": true, 00:21:12.089 "num_base_bdevs": 3, 00:21:12.089 "num_base_bdevs_discovered": 1, 00:21:12.089 "num_base_bdevs_operational": 3, 00:21:12.089 "base_bdevs_list": [ 00:21:12.089 { 00:21:12.089 "name": null, 00:21:12.089 "uuid": "b3f63598-f5c0-4258-b7bc-fef10feb5dd9", 00:21:12.089 "is_configured": false, 00:21:12.089 "data_offset": 2048, 00:21:12.089 "data_size": 63488 00:21:12.089 }, 00:21:12.089 { 00:21:12.089 "name": null, 00:21:12.089 "uuid": "a95b9b91-5c77-4e8a-b9e5-1abf228e0e1e", 00:21:12.089 "is_configured": false, 00:21:12.089 "data_offset": 2048, 00:21:12.089 "data_size": 63488 00:21:12.089 }, 00:21:12.089 { 00:21:12.089 "name": "BaseBdev3", 00:21:12.089 "uuid": "e850fac7-0799-45eb-8505-79b8ebccb43c", 00:21:12.089 "is_configured": true, 00:21:12.089 "data_offset": 2048, 00:21:12.089 "data_size": 63488 00:21:12.089 } 00:21:12.089 ] 00:21:12.089 }' 00:21:12.089 00:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:12.089 00:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.657 00:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.657 00:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:12.916 00:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:21:12.916 00:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:13.175 [2024-07-25 00:47:35.592941] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:13.175 00:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:13.175 00:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:13.175 00:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:13.175 00:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:13.175 00:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:13.175 00:47:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:13.175 00:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:13.175 00:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:13.175 00:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:13.175 00:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:13.176 00:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:13.176 00:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.176 00:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:13.176 "name": "Existed_Raid", 00:21:13.176 "uuid": "e781a4e3-04e5-41ce-b504-264ddcb83be6", 00:21:13.176 "strip_size_kb": 64, 00:21:13.176 "state": "configuring", 00:21:13.176 "raid_level": "concat", 00:21:13.176 "superblock": true, 00:21:13.176 "num_base_bdevs": 3, 00:21:13.176 "num_base_bdevs_discovered": 2, 00:21:13.176 "num_base_bdevs_operational": 3, 00:21:13.176 "base_bdevs_list": [ 00:21:13.176 { 00:21:13.176 "name": null, 00:21:13.176 "uuid": "b3f63598-f5c0-4258-b7bc-fef10feb5dd9", 00:21:13.176 "is_configured": false, 00:21:13.176 "data_offset": 2048, 00:21:13.176 "data_size": 63488 00:21:13.176 }, 00:21:13.176 { 00:21:13.176 "name": "BaseBdev2", 00:21:13.176 "uuid": "a95b9b91-5c77-4e8a-b9e5-1abf228e0e1e", 00:21:13.176 "is_configured": true, 00:21:13.176 "data_offset": 2048, 00:21:13.176 "data_size": 63488 00:21:13.176 }, 00:21:13.176 { 00:21:13.176 "name": "BaseBdev3", 00:21:13.176 "uuid": "e850fac7-0799-45eb-8505-79b8ebccb43c", 00:21:13.176 "is_configured": true, 00:21:13.176 "data_offset": 2048, 00:21:13.176 "data_size": 63488 00:21:13.176 } 00:21:13.176 ] 00:21:13.176 }' 00:21:13.176 00:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:13.176 00:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.744 00:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.744 00:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:14.003 00:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:21:14.003 00:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:14.003 00:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.003 00:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u b3f63598-f5c0-4258-b7bc-fef10feb5dd9 00:21:14.263 [2024-07-25 00:47:36.879377] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:14.263 [2024-07-25 00:47:36.879735] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:21:14.263 
[2024-07-25 00:47:36.879871] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:14.263 [2024-07-25 00:47:36.880021] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:14.263 [2024-07-25 00:47:36.880424] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:21:14.263 [2024-07-25 00:47:36.880464] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008a80 00:21:14.263 NewBaseBdev 00:21:14.263 [2024-07-25 00:47:36.880688] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:14.263 00:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:21:14.263 00:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:21:14.263 00:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:14.263 00:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:14.263 00:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:14.263 00:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:14.263 00:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:14.522 00:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:14.782 [ 00:21:14.782 { 00:21:14.782 "name": "NewBaseBdev", 00:21:14.782 "aliases": [ 00:21:14.782 "b3f63598-f5c0-4258-b7bc-fef10feb5dd9" 00:21:14.782 ], 00:21:14.782 "product_name": "Malloc disk", 00:21:14.782 "block_size": 512, 00:21:14.782 "num_blocks": 65536, 00:21:14.782 "uuid": "b3f63598-f5c0-4258-b7bc-fef10feb5dd9", 00:21:14.782 "assigned_rate_limits": { 00:21:14.782 "rw_ios_per_sec": 0, 00:21:14.782 "rw_mbytes_per_sec": 0, 00:21:14.782 "r_mbytes_per_sec": 0, 00:21:14.782 "w_mbytes_per_sec": 0 00:21:14.782 }, 00:21:14.782 "claimed": true, 00:21:14.782 "claim_type": "exclusive_write", 00:21:14.782 "zoned": false, 00:21:14.782 "supported_io_types": { 00:21:14.782 "read": true, 00:21:14.782 "write": true, 00:21:14.782 "unmap": true, 00:21:14.782 "flush": true, 00:21:14.782 "reset": true, 00:21:14.782 "nvme_admin": false, 00:21:14.782 "nvme_io": false, 00:21:14.782 "nvme_io_md": false, 00:21:14.782 "write_zeroes": true, 00:21:14.782 "zcopy": true, 00:21:14.782 "get_zone_info": false, 00:21:14.782 "zone_management": false, 00:21:14.782 "zone_append": false, 00:21:14.782 "compare": false, 00:21:14.782 "compare_and_write": false, 00:21:14.782 "abort": true, 00:21:14.782 "seek_hole": false, 00:21:14.782 "seek_data": false, 00:21:14.782 "copy": true, 00:21:14.782 "nvme_iov_md": false 00:21:14.782 }, 00:21:14.782 "memory_domains": [ 00:21:14.782 { 00:21:14.782 "dma_device_id": "system", 00:21:14.782 "dma_device_type": 1 00:21:14.782 }, 00:21:14.782 { 00:21:14.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.782 "dma_device_type": 2 00:21:14.782 } 00:21:14.782 ], 00:21:14.782 "driver_specific": {} 00:21:14.782 } 00:21:14.782 ] 00:21:14.782 00:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:14.782 00:47:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:21:14.782 00:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:14.782 00:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:14.782 00:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:14.782 00:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:14.782 00:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:14.782 00:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:14.782 00:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:14.782 00:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:14.782 00:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:14.782 00:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.782 00:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:15.040 00:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:15.040 "name": "Existed_Raid", 00:21:15.040 "uuid": "e781a4e3-04e5-41ce-b504-264ddcb83be6", 00:21:15.040 "strip_size_kb": 64, 00:21:15.040 "state": "online", 00:21:15.040 "raid_level": "concat", 00:21:15.040 "superblock": true, 00:21:15.040 "num_base_bdevs": 3, 00:21:15.040 "num_base_bdevs_discovered": 3, 00:21:15.040 "num_base_bdevs_operational": 3, 00:21:15.040 "base_bdevs_list": [ 00:21:15.040 { 00:21:15.040 "name": "NewBaseBdev", 00:21:15.040 "uuid": "b3f63598-f5c0-4258-b7bc-fef10feb5dd9", 00:21:15.040 "is_configured": true, 00:21:15.040 "data_offset": 2048, 00:21:15.040 "data_size": 63488 00:21:15.040 }, 00:21:15.040 { 00:21:15.040 "name": "BaseBdev2", 00:21:15.040 "uuid": "a95b9b91-5c77-4e8a-b9e5-1abf228e0e1e", 00:21:15.040 "is_configured": true, 00:21:15.040 "data_offset": 2048, 00:21:15.040 "data_size": 63488 00:21:15.040 }, 00:21:15.040 { 00:21:15.040 "name": "BaseBdev3", 00:21:15.040 "uuid": "e850fac7-0799-45eb-8505-79b8ebccb43c", 00:21:15.040 "is_configured": true, 00:21:15.040 "data_offset": 2048, 00:21:15.040 "data_size": 63488 00:21:15.040 } 00:21:15.040 ] 00:21:15.040 }' 00:21:15.040 00:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:15.040 00:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.606 00:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:21:15.606 00:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:15.606 00:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:15.606 00:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:15.606 00:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:15.606 00:47:38 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@198 -- # local name 00:21:15.606 00:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:15.606 00:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:15.606 [2024-07-25 00:47:38.191899] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:15.606 00:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:15.606 "name": "Existed_Raid", 00:21:15.606 "aliases": [ 00:21:15.606 "e781a4e3-04e5-41ce-b504-264ddcb83be6" 00:21:15.606 ], 00:21:15.606 "product_name": "Raid Volume", 00:21:15.606 "block_size": 512, 00:21:15.606 "num_blocks": 190464, 00:21:15.606 "uuid": "e781a4e3-04e5-41ce-b504-264ddcb83be6", 00:21:15.606 "assigned_rate_limits": { 00:21:15.606 "rw_ios_per_sec": 0, 00:21:15.606 "rw_mbytes_per_sec": 0, 00:21:15.606 "r_mbytes_per_sec": 0, 00:21:15.606 "w_mbytes_per_sec": 0 00:21:15.606 }, 00:21:15.606 "claimed": false, 00:21:15.606 "zoned": false, 00:21:15.606 "supported_io_types": { 00:21:15.606 "read": true, 00:21:15.606 "write": true, 00:21:15.606 "unmap": true, 00:21:15.606 "flush": true, 00:21:15.606 "reset": true, 00:21:15.606 "nvme_admin": false, 00:21:15.606 "nvme_io": false, 00:21:15.606 "nvme_io_md": false, 00:21:15.606 "write_zeroes": true, 00:21:15.606 "zcopy": false, 00:21:15.606 "get_zone_info": false, 00:21:15.606 "zone_management": false, 00:21:15.606 "zone_append": false, 00:21:15.606 "compare": false, 00:21:15.606 "compare_and_write": false, 00:21:15.606 "abort": false, 00:21:15.606 "seek_hole": false, 00:21:15.606 "seek_data": false, 00:21:15.606 "copy": false, 00:21:15.606 "nvme_iov_md": false 00:21:15.606 }, 00:21:15.606 "memory_domains": [ 00:21:15.606 { 00:21:15.606 "dma_device_id": "system", 00:21:15.606 "dma_device_type": 1 00:21:15.606 }, 00:21:15.606 { 00:21:15.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.606 "dma_device_type": 2 00:21:15.606 }, 00:21:15.606 { 00:21:15.606 "dma_device_id": "system", 00:21:15.606 "dma_device_type": 1 00:21:15.606 }, 00:21:15.606 { 00:21:15.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.606 "dma_device_type": 2 00:21:15.606 }, 00:21:15.606 { 00:21:15.606 "dma_device_id": "system", 00:21:15.606 "dma_device_type": 1 00:21:15.606 }, 00:21:15.606 { 00:21:15.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.606 "dma_device_type": 2 00:21:15.606 } 00:21:15.606 ], 00:21:15.606 "driver_specific": { 00:21:15.606 "raid": { 00:21:15.606 "uuid": "e781a4e3-04e5-41ce-b504-264ddcb83be6", 00:21:15.606 "strip_size_kb": 64, 00:21:15.606 "state": "online", 00:21:15.606 "raid_level": "concat", 00:21:15.606 "superblock": true, 00:21:15.606 "num_base_bdevs": 3, 00:21:15.606 "num_base_bdevs_discovered": 3, 00:21:15.606 "num_base_bdevs_operational": 3, 00:21:15.606 "base_bdevs_list": [ 00:21:15.606 { 00:21:15.606 "name": "NewBaseBdev", 00:21:15.606 "uuid": "b3f63598-f5c0-4258-b7bc-fef10feb5dd9", 00:21:15.606 "is_configured": true, 00:21:15.606 "data_offset": 2048, 00:21:15.606 "data_size": 63488 00:21:15.606 }, 00:21:15.606 { 00:21:15.606 "name": "BaseBdev2", 00:21:15.606 "uuid": "a95b9b91-5c77-4e8a-b9e5-1abf228e0e1e", 00:21:15.606 "is_configured": true, 00:21:15.606 "data_offset": 2048, 00:21:15.606 "data_size": 63488 00:21:15.606 }, 00:21:15.606 { 00:21:15.606 "name": "BaseBdev3", 00:21:15.606 "uuid": "e850fac7-0799-45eb-8505-79b8ebccb43c", 00:21:15.606 "is_configured": 
true, 00:21:15.606 "data_offset": 2048, 00:21:15.606 "data_size": 63488 00:21:15.606 } 00:21:15.606 ] 00:21:15.606 } 00:21:15.606 } 00:21:15.606 }' 00:21:15.606 00:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:15.606 00:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:21:15.606 BaseBdev2 00:21:15.606 BaseBdev3' 00:21:15.606 00:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:15.606 00:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:21:15.865 00:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:15.865 00:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:15.865 "name": "NewBaseBdev", 00:21:15.865 "aliases": [ 00:21:15.865 "b3f63598-f5c0-4258-b7bc-fef10feb5dd9" 00:21:15.865 ], 00:21:15.865 "product_name": "Malloc disk", 00:21:15.865 "block_size": 512, 00:21:15.865 "num_blocks": 65536, 00:21:15.865 "uuid": "b3f63598-f5c0-4258-b7bc-fef10feb5dd9", 00:21:15.865 "assigned_rate_limits": { 00:21:15.865 "rw_ios_per_sec": 0, 00:21:15.865 "rw_mbytes_per_sec": 0, 00:21:15.865 "r_mbytes_per_sec": 0, 00:21:15.865 "w_mbytes_per_sec": 0 00:21:15.865 }, 00:21:15.865 "claimed": true, 00:21:15.865 "claim_type": "exclusive_write", 00:21:15.865 "zoned": false, 00:21:15.865 "supported_io_types": { 00:21:15.865 "read": true, 00:21:15.865 "write": true, 00:21:15.865 "unmap": true, 00:21:15.865 "flush": true, 00:21:15.865 "reset": true, 00:21:15.865 "nvme_admin": false, 00:21:15.865 "nvme_io": false, 00:21:15.865 "nvme_io_md": false, 00:21:15.865 "write_zeroes": true, 00:21:15.865 "zcopy": true, 00:21:15.865 "get_zone_info": false, 00:21:15.865 "zone_management": false, 00:21:15.865 "zone_append": false, 00:21:15.865 "compare": false, 00:21:15.865 "compare_and_write": false, 00:21:15.865 "abort": true, 00:21:15.865 "seek_hole": false, 00:21:15.865 "seek_data": false, 00:21:15.865 "copy": true, 00:21:15.865 "nvme_iov_md": false 00:21:15.865 }, 00:21:15.865 "memory_domains": [ 00:21:15.865 { 00:21:15.865 "dma_device_id": "system", 00:21:15.865 "dma_device_type": 1 00:21:15.865 }, 00:21:15.865 { 00:21:15.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.865 "dma_device_type": 2 00:21:15.865 } 00:21:15.865 ], 00:21:15.865 "driver_specific": {} 00:21:15.865 }' 00:21:15.865 00:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:16.124 00:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:16.124 00:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:16.124 00:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:16.124 00:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:16.124 00:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:16.124 00:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:16.124 00:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:16.124 00:47:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:16.124 00:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:16.383 00:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:16.383 00:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:16.383 00:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:16.383 00:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:16.383 00:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:16.643 00:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:16.643 "name": "BaseBdev2", 00:21:16.643 "aliases": [ 00:21:16.643 "a95b9b91-5c77-4e8a-b9e5-1abf228e0e1e" 00:21:16.643 ], 00:21:16.643 "product_name": "Malloc disk", 00:21:16.643 "block_size": 512, 00:21:16.643 "num_blocks": 65536, 00:21:16.643 "uuid": "a95b9b91-5c77-4e8a-b9e5-1abf228e0e1e", 00:21:16.643 "assigned_rate_limits": { 00:21:16.643 "rw_ios_per_sec": 0, 00:21:16.643 "rw_mbytes_per_sec": 0, 00:21:16.643 "r_mbytes_per_sec": 0, 00:21:16.643 "w_mbytes_per_sec": 0 00:21:16.643 }, 00:21:16.643 "claimed": true, 00:21:16.643 "claim_type": "exclusive_write", 00:21:16.643 "zoned": false, 00:21:16.643 "supported_io_types": { 00:21:16.643 "read": true, 00:21:16.643 "write": true, 00:21:16.643 "unmap": true, 00:21:16.643 "flush": true, 00:21:16.643 "reset": true, 00:21:16.643 "nvme_admin": false, 00:21:16.643 "nvme_io": false, 00:21:16.643 "nvme_io_md": false, 00:21:16.643 "write_zeroes": true, 00:21:16.643 "zcopy": true, 00:21:16.643 "get_zone_info": false, 00:21:16.643 "zone_management": false, 00:21:16.643 "zone_append": false, 00:21:16.643 "compare": false, 00:21:16.643 "compare_and_write": false, 00:21:16.643 "abort": true, 00:21:16.643 "seek_hole": false, 00:21:16.643 "seek_data": false, 00:21:16.643 "copy": true, 00:21:16.643 "nvme_iov_md": false 00:21:16.643 }, 00:21:16.643 "memory_domains": [ 00:21:16.643 { 00:21:16.643 "dma_device_id": "system", 00:21:16.643 "dma_device_type": 1 00:21:16.643 }, 00:21:16.643 { 00:21:16.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:16.643 "dma_device_type": 2 00:21:16.643 } 00:21:16.643 ], 00:21:16.643 "driver_specific": {} 00:21:16.643 }' 00:21:16.643 00:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:16.643 00:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:16.643 00:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:16.643 00:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:16.643 00:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:16.643 00:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:16.643 00:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:16.643 00:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:16.902 00:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:16.902 00:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:21:16.902 00:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:16.902 00:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:16.902 00:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:16.902 00:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:16.902 00:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:17.162 00:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:17.162 "name": "BaseBdev3", 00:21:17.162 "aliases": [ 00:21:17.162 "e850fac7-0799-45eb-8505-79b8ebccb43c" 00:21:17.162 ], 00:21:17.162 "product_name": "Malloc disk", 00:21:17.162 "block_size": 512, 00:21:17.162 "num_blocks": 65536, 00:21:17.162 "uuid": "e850fac7-0799-45eb-8505-79b8ebccb43c", 00:21:17.162 "assigned_rate_limits": { 00:21:17.162 "rw_ios_per_sec": 0, 00:21:17.162 "rw_mbytes_per_sec": 0, 00:21:17.162 "r_mbytes_per_sec": 0, 00:21:17.162 "w_mbytes_per_sec": 0 00:21:17.162 }, 00:21:17.162 "claimed": true, 00:21:17.162 "claim_type": "exclusive_write", 00:21:17.162 "zoned": false, 00:21:17.162 "supported_io_types": { 00:21:17.162 "read": true, 00:21:17.162 "write": true, 00:21:17.162 "unmap": true, 00:21:17.162 "flush": true, 00:21:17.162 "reset": true, 00:21:17.162 "nvme_admin": false, 00:21:17.162 "nvme_io": false, 00:21:17.162 "nvme_io_md": false, 00:21:17.162 "write_zeroes": true, 00:21:17.162 "zcopy": true, 00:21:17.162 "get_zone_info": false, 00:21:17.162 "zone_management": false, 00:21:17.162 "zone_append": false, 00:21:17.162 "compare": false, 00:21:17.162 "compare_and_write": false, 00:21:17.162 "abort": true, 00:21:17.162 "seek_hole": false, 00:21:17.162 "seek_data": false, 00:21:17.162 "copy": true, 00:21:17.162 "nvme_iov_md": false 00:21:17.162 }, 00:21:17.162 "memory_domains": [ 00:21:17.162 { 00:21:17.162 "dma_device_id": "system", 00:21:17.162 "dma_device_type": 1 00:21:17.162 }, 00:21:17.162 { 00:21:17.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:17.162 "dma_device_type": 2 00:21:17.162 } 00:21:17.162 ], 00:21:17.162 "driver_specific": {} 00:21:17.162 }' 00:21:17.162 00:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:17.162 00:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:17.162 00:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:17.162 00:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:17.162 00:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:17.422 00:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:17.422 00:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:17.422 00:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:17.422 00:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:17.422 00:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:17.422 00:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:17.422 00:47:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:17.422 00:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:17.681 [2024-07-25 00:47:40.276013] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:17.681 [2024-07-25 00:47:40.276213] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:17.681 [2024-07-25 00:47:40.276364] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:17.681 [2024-07-25 00:47:40.276504] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:17.681 [2024-07-25 00:47:40.276584] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name Existed_Raid, state offline 00:21:17.681 00:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 130036 00:21:17.681 00:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 130036 ']' 00:21:17.681 00:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 130036 00:21:17.681 00:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:21:17.681 00:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:17.681 00:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 130036 00:21:17.681 00:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:17.681 00:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:17.681 00:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 130036' 00:21:17.681 killing process with pid 130036 00:21:17.681 00:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 130036 00:21:17.681 [2024-07-25 00:47:40.326242] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:17.681 00:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 130036 00:21:18.249 [2024-07-25 00:47:40.613701] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:19.628 ************************************ 00:21:19.628 END TEST raid_state_function_test_sb 00:21:19.628 ************************************ 00:21:19.628 00:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:21:19.628 00:21:19.628 real 0m27.994s 00:21:19.628 user 0m50.046s 00:21:19.628 sys 0m4.463s 00:21:19.628 00:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:19.628 00:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:19.628 00:47:41 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:21:19.628 00:47:41 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:21:19.628 00:47:41 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:19.628 00:47:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:19.628 ************************************ 00:21:19.628 START TEST raid_superblock_test 00:21:19.628 
************************************ 00:21:19.628 00:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 3 00:21:19.628 00:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:21:19.628 00:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:21:19.628 00:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:21:19.628 00:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:21:19.628 00:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:21:19.628 00:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:21:19.628 00:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:21:19.628 00:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:21:19.628 00:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:21:19.628 00:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:21:19.628 00:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:21:19.628 00:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:21:19.628 00:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:21:19.628 00:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:21:19.628 00:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:21:19.628 00:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:21:19.628 00:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=130994 00:21:19.628 00:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 130994 /var/tmp/spdk-raid.sock 00:21:19.628 00:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:19.628 00:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 130994 ']' 00:21:19.628 00:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:19.628 00:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:19.628 00:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:19.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:19.628 00:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:19.628 00:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.628 [2024-07-25 00:47:42.081927] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
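Before any of the bdev RPCs below can run, raid_superblock_test starts a dedicated bdev_svc app on its own RPC socket and waits for it to listen. A rough sketch of that startup sequence under the paths shown in the trace; the readiness poll via rpc_get_methods is an illustrative stand-in for the waitforlisten helper from autotest_common.sh:

    # Sketch: launch bdev_svc with raid debug logging and wait for its RPC socket
    sock=/var/tmp/spdk-raid.sock
    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk"/test/app/bdev_svc/bdev_svc -r "$sock" -L bdev_raid &
    raid_pid=$!
    # Poll until the RPC server answers (stand-in for waitforlisten)
    until "$spdk"/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$raid_pid" || exit 1   # bail out if the app died during startup
        sleep 0.2
    done
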
00:21:19.628 [2024-07-25 00:47:42.082479] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130994 ] 00:21:19.628 [2024-07-25 00:47:42.258758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.887 [2024-07-25 00:47:42.435171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.146 [2024-07-25 00:47:42.633962] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:20.405 00:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:20.405 00:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:21:20.405 00:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:21:20.405 00:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:20.405 00:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:21:20.405 00:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:21:20.405 00:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:20.405 00:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:20.405 00:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:20.405 00:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:20.405 00:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:21:20.664 malloc1 00:21:20.664 00:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:20.924 [2024-07-25 00:47:43.491540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:20.924 [2024-07-25 00:47:43.491813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:20.924 [2024-07-25 00:47:43.491900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:21:20.924 [2024-07-25 00:47:43.491996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:20.924 [2024-07-25 00:47:43.494318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:20.924 [2024-07-25 00:47:43.494479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:20.924 pt1 00:21:20.924 00:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:20.924 00:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:20.924 00:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:21:20.924 00:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:21:20.924 00:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:20.924 00:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:21:20.924 00:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:20.924 00:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:20.924 00:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:21:21.182 malloc2 00:21:21.182 00:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:21.439 [2024-07-25 00:47:44.064991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:21.439 [2024-07-25 00:47:44.065231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.439 [2024-07-25 00:47:44.065325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:21:21.439 [2024-07-25 00:47:44.065415] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.439 [2024-07-25 00:47:44.067720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.439 [2024-07-25 00:47:44.067887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:21.439 pt2 00:21:21.439 00:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:21.439 00:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:21.439 00:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:21:21.439 00:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:21:21.439 00:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:21.439 00:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:21.439 00:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:21.439 00:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:21.439 00:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:21.697 malloc3 00:21:21.697 00:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:21.956 [2024-07-25 00:47:44.533307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:21.956 [2024-07-25 00:47:44.533506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.956 [2024-07-25 00:47:44.533567] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:21.956 [2024-07-25 00:47:44.533663] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.956 [2024-07-25 00:47:44.535827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.956 [2024-07-25 00:47:44.535985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:21.956 pt3 00:21:21.956 
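At this point the test has stacked a passthru bdev (pt1, pt2, pt3) on top of each malloc bdev; the next records assemble them into a concat raid with an on-disk superblock. The same construction, condensed into the RPCs that appear verbatim in the trace (sizes, UUIDs and names copied from the log):

    # Sketch: three malloc+passthru base bdevs, then a concat raid with superblock (-s)
    sock=/var/tmp/spdk-raid.sock
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $sock"
    for i in 1 2 3; do
        $rpc bdev_malloc_create 32 512 -b "malloc$i"            # 32 MiB, 512 B blocks -> 65536 blocks
        $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
             -u "00000000-0000-0000-0000-00000000000$i"         # fixed UUID per pt bdev
    done
    $rpc bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s

The -s flag is what writes the raid superblock onto the base bdevs, which is why the later examine records report "raid superblock found on bdev pt1/pt2/pt3" when the passthru bdevs are re-created.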
00:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:21.956 00:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:21.956 00:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:21:22.215 [2024-07-25 00:47:44.697366] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:22.215 [2024-07-25 00:47:44.699283] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:22.215 [2024-07-25 00:47:44.699464] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:22.215 [2024-07-25 00:47:44.699650] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:21:22.215 [2024-07-25 00:47:44.699814] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:22.215 [2024-07-25 00:47:44.699966] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:21:22.215 [2024-07-25 00:47:44.700351] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:21:22.215 [2024-07-25 00:47:44.700457] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:21:22.215 [2024-07-25 00:47:44.700660] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:22.215 00:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:21:22.215 00:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:22.215 00:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:22.215 00:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:22.215 00:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:22.215 00:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:22.215 00:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:22.215 00:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:22.215 00:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:22.215 00:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:22.215 00:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.215 00:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.473 00:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:22.473 "name": "raid_bdev1", 00:21:22.473 "uuid": "b3c83b7f-3d17-4e42-aac0-494a1124b897", 00:21:22.473 "strip_size_kb": 64, 00:21:22.473 "state": "online", 00:21:22.473 "raid_level": "concat", 00:21:22.473 "superblock": true, 00:21:22.473 "num_base_bdevs": 3, 00:21:22.473 "num_base_bdevs_discovered": 3, 00:21:22.473 "num_base_bdevs_operational": 3, 00:21:22.473 "base_bdevs_list": [ 00:21:22.473 { 00:21:22.473 "name": "pt1", 00:21:22.473 "uuid": "00000000-0000-0000-0000-000000000001", 
00:21:22.473 "is_configured": true, 00:21:22.473 "data_offset": 2048, 00:21:22.473 "data_size": 63488 00:21:22.473 }, 00:21:22.473 { 00:21:22.473 "name": "pt2", 00:21:22.473 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:22.473 "is_configured": true, 00:21:22.473 "data_offset": 2048, 00:21:22.473 "data_size": 63488 00:21:22.473 }, 00:21:22.473 { 00:21:22.473 "name": "pt3", 00:21:22.473 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:22.473 "is_configured": true, 00:21:22.473 "data_offset": 2048, 00:21:22.473 "data_size": 63488 00:21:22.473 } 00:21:22.473 ] 00:21:22.473 }' 00:21:22.473 00:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:22.473 00:47:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.040 00:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:21:23.040 00:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:23.040 00:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:23.040 00:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:23.040 00:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:23.040 00:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:23.040 00:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:23.040 00:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:23.299 [2024-07-25 00:47:45.741778] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:23.299 00:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:23.299 "name": "raid_bdev1", 00:21:23.299 "aliases": [ 00:21:23.299 "b3c83b7f-3d17-4e42-aac0-494a1124b897" 00:21:23.299 ], 00:21:23.299 "product_name": "Raid Volume", 00:21:23.299 "block_size": 512, 00:21:23.299 "num_blocks": 190464, 00:21:23.299 "uuid": "b3c83b7f-3d17-4e42-aac0-494a1124b897", 00:21:23.299 "assigned_rate_limits": { 00:21:23.299 "rw_ios_per_sec": 0, 00:21:23.299 "rw_mbytes_per_sec": 0, 00:21:23.299 "r_mbytes_per_sec": 0, 00:21:23.299 "w_mbytes_per_sec": 0 00:21:23.299 }, 00:21:23.299 "claimed": false, 00:21:23.299 "zoned": false, 00:21:23.299 "supported_io_types": { 00:21:23.299 "read": true, 00:21:23.299 "write": true, 00:21:23.299 "unmap": true, 00:21:23.299 "flush": true, 00:21:23.299 "reset": true, 00:21:23.299 "nvme_admin": false, 00:21:23.299 "nvme_io": false, 00:21:23.299 "nvme_io_md": false, 00:21:23.299 "write_zeroes": true, 00:21:23.299 "zcopy": false, 00:21:23.299 "get_zone_info": false, 00:21:23.299 "zone_management": false, 00:21:23.299 "zone_append": false, 00:21:23.299 "compare": false, 00:21:23.299 "compare_and_write": false, 00:21:23.299 "abort": false, 00:21:23.299 "seek_hole": false, 00:21:23.299 "seek_data": false, 00:21:23.299 "copy": false, 00:21:23.299 "nvme_iov_md": false 00:21:23.299 }, 00:21:23.299 "memory_domains": [ 00:21:23.299 { 00:21:23.299 "dma_device_id": "system", 00:21:23.299 "dma_device_type": 1 00:21:23.299 }, 00:21:23.299 { 00:21:23.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:23.299 "dma_device_type": 2 00:21:23.299 }, 00:21:23.299 { 00:21:23.299 "dma_device_id": "system", 00:21:23.299 "dma_device_type": 1 00:21:23.299 }, 
00:21:23.299 { 00:21:23.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:23.299 "dma_device_type": 2 00:21:23.299 }, 00:21:23.299 { 00:21:23.299 "dma_device_id": "system", 00:21:23.299 "dma_device_type": 1 00:21:23.299 }, 00:21:23.299 { 00:21:23.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:23.299 "dma_device_type": 2 00:21:23.299 } 00:21:23.299 ], 00:21:23.299 "driver_specific": { 00:21:23.299 "raid": { 00:21:23.299 "uuid": "b3c83b7f-3d17-4e42-aac0-494a1124b897", 00:21:23.299 "strip_size_kb": 64, 00:21:23.299 "state": "online", 00:21:23.299 "raid_level": "concat", 00:21:23.299 "superblock": true, 00:21:23.299 "num_base_bdevs": 3, 00:21:23.299 "num_base_bdevs_discovered": 3, 00:21:23.299 "num_base_bdevs_operational": 3, 00:21:23.299 "base_bdevs_list": [ 00:21:23.299 { 00:21:23.299 "name": "pt1", 00:21:23.299 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:23.299 "is_configured": true, 00:21:23.299 "data_offset": 2048, 00:21:23.299 "data_size": 63488 00:21:23.299 }, 00:21:23.299 { 00:21:23.299 "name": "pt2", 00:21:23.299 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:23.299 "is_configured": true, 00:21:23.299 "data_offset": 2048, 00:21:23.299 "data_size": 63488 00:21:23.299 }, 00:21:23.299 { 00:21:23.299 "name": "pt3", 00:21:23.299 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:23.299 "is_configured": true, 00:21:23.299 "data_offset": 2048, 00:21:23.299 "data_size": 63488 00:21:23.299 } 00:21:23.299 ] 00:21:23.299 } 00:21:23.299 } 00:21:23.299 }' 00:21:23.299 00:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:23.299 00:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:23.299 pt2 00:21:23.299 pt3' 00:21:23.299 00:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:23.299 00:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:23.299 00:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:23.558 00:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:23.558 "name": "pt1", 00:21:23.558 "aliases": [ 00:21:23.558 "00000000-0000-0000-0000-000000000001" 00:21:23.558 ], 00:21:23.558 "product_name": "passthru", 00:21:23.558 "block_size": 512, 00:21:23.558 "num_blocks": 65536, 00:21:23.558 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:23.558 "assigned_rate_limits": { 00:21:23.558 "rw_ios_per_sec": 0, 00:21:23.558 "rw_mbytes_per_sec": 0, 00:21:23.558 "r_mbytes_per_sec": 0, 00:21:23.558 "w_mbytes_per_sec": 0 00:21:23.558 }, 00:21:23.558 "claimed": true, 00:21:23.558 "claim_type": "exclusive_write", 00:21:23.558 "zoned": false, 00:21:23.558 "supported_io_types": { 00:21:23.558 "read": true, 00:21:23.558 "write": true, 00:21:23.558 "unmap": true, 00:21:23.558 "flush": true, 00:21:23.558 "reset": true, 00:21:23.558 "nvme_admin": false, 00:21:23.558 "nvme_io": false, 00:21:23.558 "nvme_io_md": false, 00:21:23.558 "write_zeroes": true, 00:21:23.558 "zcopy": true, 00:21:23.558 "get_zone_info": false, 00:21:23.558 "zone_management": false, 00:21:23.558 "zone_append": false, 00:21:23.558 "compare": false, 00:21:23.558 "compare_and_write": false, 00:21:23.558 "abort": true, 00:21:23.558 "seek_hole": false, 00:21:23.558 "seek_data": false, 00:21:23.558 "copy": true, 00:21:23.558 "nvme_iov_md": 
false 00:21:23.558 }, 00:21:23.558 "memory_domains": [ 00:21:23.558 { 00:21:23.558 "dma_device_id": "system", 00:21:23.558 "dma_device_type": 1 00:21:23.558 }, 00:21:23.558 { 00:21:23.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:23.558 "dma_device_type": 2 00:21:23.558 } 00:21:23.558 ], 00:21:23.558 "driver_specific": { 00:21:23.558 "passthru": { 00:21:23.558 "name": "pt1", 00:21:23.558 "base_bdev_name": "malloc1" 00:21:23.558 } 00:21:23.558 } 00:21:23.558 }' 00:21:23.558 00:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:23.558 00:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:23.558 00:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:23.558 00:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:23.558 00:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:23.816 00:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:23.816 00:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:23.816 00:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:23.816 00:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:23.816 00:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:23.816 00:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:23.816 00:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:23.816 00:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:23.816 00:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:23.816 00:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:24.075 00:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:24.075 "name": "pt2", 00:21:24.075 "aliases": [ 00:21:24.075 "00000000-0000-0000-0000-000000000002" 00:21:24.075 ], 00:21:24.075 "product_name": "passthru", 00:21:24.075 "block_size": 512, 00:21:24.075 "num_blocks": 65536, 00:21:24.075 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:24.075 "assigned_rate_limits": { 00:21:24.075 "rw_ios_per_sec": 0, 00:21:24.075 "rw_mbytes_per_sec": 0, 00:21:24.075 "r_mbytes_per_sec": 0, 00:21:24.075 "w_mbytes_per_sec": 0 00:21:24.075 }, 00:21:24.075 "claimed": true, 00:21:24.075 "claim_type": "exclusive_write", 00:21:24.075 "zoned": false, 00:21:24.075 "supported_io_types": { 00:21:24.075 "read": true, 00:21:24.075 "write": true, 00:21:24.075 "unmap": true, 00:21:24.075 "flush": true, 00:21:24.075 "reset": true, 00:21:24.075 "nvme_admin": false, 00:21:24.075 "nvme_io": false, 00:21:24.075 "nvme_io_md": false, 00:21:24.075 "write_zeroes": true, 00:21:24.075 "zcopy": true, 00:21:24.075 "get_zone_info": false, 00:21:24.075 "zone_management": false, 00:21:24.075 "zone_append": false, 00:21:24.075 "compare": false, 00:21:24.075 "compare_and_write": false, 00:21:24.075 "abort": true, 00:21:24.075 "seek_hole": false, 00:21:24.075 "seek_data": false, 00:21:24.075 "copy": true, 00:21:24.075 "nvme_iov_md": false 00:21:24.075 }, 00:21:24.075 "memory_domains": [ 00:21:24.075 { 00:21:24.075 "dma_device_id": "system", 00:21:24.075 "dma_device_type": 1 
00:21:24.075 }, 00:21:24.075 { 00:21:24.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:24.075 "dma_device_type": 2 00:21:24.075 } 00:21:24.075 ], 00:21:24.075 "driver_specific": { 00:21:24.075 "passthru": { 00:21:24.075 "name": "pt2", 00:21:24.075 "base_bdev_name": "malloc2" 00:21:24.075 } 00:21:24.075 } 00:21:24.075 }' 00:21:24.075 00:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:24.334 00:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:24.334 00:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:24.334 00:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:24.334 00:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:24.334 00:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:24.334 00:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:24.334 00:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:24.334 00:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:24.334 00:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:24.592 00:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:24.592 00:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:24.592 00:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:24.592 00:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:24.592 00:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:24.851 00:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:24.851 "name": "pt3", 00:21:24.851 "aliases": [ 00:21:24.851 "00000000-0000-0000-0000-000000000003" 00:21:24.851 ], 00:21:24.851 "product_name": "passthru", 00:21:24.851 "block_size": 512, 00:21:24.851 "num_blocks": 65536, 00:21:24.851 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:24.851 "assigned_rate_limits": { 00:21:24.851 "rw_ios_per_sec": 0, 00:21:24.851 "rw_mbytes_per_sec": 0, 00:21:24.851 "r_mbytes_per_sec": 0, 00:21:24.851 "w_mbytes_per_sec": 0 00:21:24.851 }, 00:21:24.851 "claimed": true, 00:21:24.851 "claim_type": "exclusive_write", 00:21:24.851 "zoned": false, 00:21:24.851 "supported_io_types": { 00:21:24.851 "read": true, 00:21:24.851 "write": true, 00:21:24.851 "unmap": true, 00:21:24.851 "flush": true, 00:21:24.851 "reset": true, 00:21:24.851 "nvme_admin": false, 00:21:24.851 "nvme_io": false, 00:21:24.851 "nvme_io_md": false, 00:21:24.851 "write_zeroes": true, 00:21:24.851 "zcopy": true, 00:21:24.851 "get_zone_info": false, 00:21:24.851 "zone_management": false, 00:21:24.851 "zone_append": false, 00:21:24.851 "compare": false, 00:21:24.851 "compare_and_write": false, 00:21:24.851 "abort": true, 00:21:24.851 "seek_hole": false, 00:21:24.851 "seek_data": false, 00:21:24.851 "copy": true, 00:21:24.851 "nvme_iov_md": false 00:21:24.851 }, 00:21:24.851 "memory_domains": [ 00:21:24.851 { 00:21:24.851 "dma_device_id": "system", 00:21:24.851 "dma_device_type": 1 00:21:24.851 }, 00:21:24.851 { 00:21:24.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:24.851 "dma_device_type": 2 00:21:24.851 } 00:21:24.851 ], 
00:21:24.851 "driver_specific": { 00:21:24.851 "passthru": { 00:21:24.851 "name": "pt3", 00:21:24.851 "base_bdev_name": "malloc3" 00:21:24.851 } 00:21:24.851 } 00:21:24.851 }' 00:21:24.851 00:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:24.851 00:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:24.851 00:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:24.851 00:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:24.851 00:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:24.851 00:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:24.851 00:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:25.111 00:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:25.111 00:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:25.111 00:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:25.111 00:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:25.111 00:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:25.111 00:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:25.111 00:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:21:25.412 [2024-07-25 00:47:47.894140] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:25.412 00:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=b3c83b7f-3d17-4e42-aac0-494a1124b897 00:21:25.412 00:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z b3c83b7f-3d17-4e42-aac0-494a1124b897 ']' 00:21:25.412 00:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:25.694 [2024-07-25 00:47:48.125979] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:25.694 [2024-07-25 00:47:48.126126] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:25.694 [2024-07-25 00:47:48.126341] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:25.694 [2024-07-25 00:47:48.126482] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:25.694 [2024-07-25 00:47:48.126558] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:21:25.694 00:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.694 00:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:21:25.953 00:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:21:25.953 00:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:21:25.953 00:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:25.953 00:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:26.212 00:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:26.212 00:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:26.470 00:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:26.470 00:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:26.470 00:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:26.470 00:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:26.730 00:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:21:26.730 00:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:26.730 00:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:21:26.730 00:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:26.730 00:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:26.730 00:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:26.730 00:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:26.730 00:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:26.730 00:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:26.730 00:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:26.730 00:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:26.730 00:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:26.730 00:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:26.989 [2024-07-25 00:47:49.470796] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:26.989 [2024-07-25 00:47:49.473204] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:26.989 [2024-07-25 00:47:49.473413] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:26.989 [2024-07-25 00:47:49.473505] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:26.989 
[2024-07-25 00:47:49.473689] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:26.989 [2024-07-25 00:47:49.473756] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:21:26.989 [2024-07-25 00:47:49.473895] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:26.989 [2024-07-25 00:47:49.473964] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:21:26.989 request: 00:21:26.989 { 00:21:26.989 "name": "raid_bdev1", 00:21:26.989 "raid_level": "concat", 00:21:26.989 "base_bdevs": [ 00:21:26.989 "malloc1", 00:21:26.989 "malloc2", 00:21:26.989 "malloc3" 00:21:26.989 ], 00:21:26.989 "strip_size_kb": 64, 00:21:26.989 "superblock": false, 00:21:26.989 "method": "bdev_raid_create", 00:21:26.989 "req_id": 1 00:21:26.989 } 00:21:26.989 Got JSON-RPC error response 00:21:26.989 response: 00:21:26.989 { 00:21:26.989 "code": -17, 00:21:26.989 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:26.989 } 00:21:26.989 00:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:21:26.989 00:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:26.989 00:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:26.989 00:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:26.989 00:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.989 00:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:21:27.248 00:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:21:27.248 00:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:21:27.248 00:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:27.248 [2024-07-25 00:47:49.822775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:27.248 [2024-07-25 00:47:49.822954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:27.248 [2024-07-25 00:47:49.823041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:27.248 [2024-07-25 00:47:49.823123] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:27.248 [2024-07-25 00:47:49.825859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:27.248 [2024-07-25 00:47:49.826005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:27.249 [2024-07-25 00:47:49.826198] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:27.249 [2024-07-25 00:47:49.826346] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:27.249 pt1 00:21:27.249 00:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:21:27.249 00:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:27.249 00:47:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:27.249 00:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:27.249 00:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:27.249 00:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:27.249 00:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:27.249 00:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:27.249 00:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:27.249 00:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:27.249 00:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.249 00:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.508 00:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:27.508 "name": "raid_bdev1", 00:21:27.508 "uuid": "b3c83b7f-3d17-4e42-aac0-494a1124b897", 00:21:27.508 "strip_size_kb": 64, 00:21:27.508 "state": "configuring", 00:21:27.508 "raid_level": "concat", 00:21:27.508 "superblock": true, 00:21:27.508 "num_base_bdevs": 3, 00:21:27.508 "num_base_bdevs_discovered": 1, 00:21:27.508 "num_base_bdevs_operational": 3, 00:21:27.508 "base_bdevs_list": [ 00:21:27.508 { 00:21:27.508 "name": "pt1", 00:21:27.508 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:27.508 "is_configured": true, 00:21:27.508 "data_offset": 2048, 00:21:27.508 "data_size": 63488 00:21:27.508 }, 00:21:27.508 { 00:21:27.508 "name": null, 00:21:27.508 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:27.508 "is_configured": false, 00:21:27.508 "data_offset": 2048, 00:21:27.508 "data_size": 63488 00:21:27.508 }, 00:21:27.508 { 00:21:27.508 "name": null, 00:21:27.508 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:27.508 "is_configured": false, 00:21:27.508 "data_offset": 2048, 00:21:27.508 "data_size": 63488 00:21:27.508 } 00:21:27.508 ] 00:21:27.508 }' 00:21:27.508 00:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:27.508 00:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.076 00:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:21:28.076 00:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:28.076 [2024-07-25 00:47:50.714956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:28.076 [2024-07-25 00:47:50.715231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.076 [2024-07-25 00:47:50.715308] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:28.076 [2024-07-25 00:47:50.715409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.076 [2024-07-25 00:47:50.715982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.076 [2024-07-25 00:47:50.716129] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:21:28.076 [2024-07-25 00:47:50.716325] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:28.076 [2024-07-25 00:47:50.716476] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:28.076 pt2 00:21:28.336 00:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:28.336 [2024-07-25 00:47:50.967146] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:28.336 00:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:21:28.336 00:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:28.336 00:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:28.336 00:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:28.336 00:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:28.336 00:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:28.336 00:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:28.336 00:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:28.336 00:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:28.336 00:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:28.595 00:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.595 00:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.854 00:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:28.854 "name": "raid_bdev1", 00:21:28.854 "uuid": "b3c83b7f-3d17-4e42-aac0-494a1124b897", 00:21:28.854 "strip_size_kb": 64, 00:21:28.854 "state": "configuring", 00:21:28.854 "raid_level": "concat", 00:21:28.854 "superblock": true, 00:21:28.854 "num_base_bdevs": 3, 00:21:28.854 "num_base_bdevs_discovered": 1, 00:21:28.854 "num_base_bdevs_operational": 3, 00:21:28.854 "base_bdevs_list": [ 00:21:28.854 { 00:21:28.854 "name": "pt1", 00:21:28.854 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:28.854 "is_configured": true, 00:21:28.854 "data_offset": 2048, 00:21:28.854 "data_size": 63488 00:21:28.854 }, 00:21:28.854 { 00:21:28.854 "name": null, 00:21:28.854 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:28.854 "is_configured": false, 00:21:28.854 "data_offset": 2048, 00:21:28.854 "data_size": 63488 00:21:28.854 }, 00:21:28.854 { 00:21:28.854 "name": null, 00:21:28.854 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:28.854 "is_configured": false, 00:21:28.855 "data_offset": 2048, 00:21:28.855 "data_size": 63488 00:21:28.855 } 00:21:28.855 ] 00:21:28.855 }' 00:21:28.855 00:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:28.855 00:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.423 00:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:21:29.423 00:47:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:29.423 00:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:29.682 [2024-07-25 00:47:52.159270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:29.682 [2024-07-25 00:47:52.159496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:29.682 [2024-07-25 00:47:52.159603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:29.682 [2024-07-25 00:47:52.159695] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:29.682 [2024-07-25 00:47:52.160245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:29.682 [2024-07-25 00:47:52.160380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:29.682 [2024-07-25 00:47:52.160561] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:29.682 [2024-07-25 00:47:52.160650] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:29.682 pt2 00:21:29.683 00:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:21:29.683 00:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:29.683 00:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:29.942 [2024-07-25 00:47:52.427324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:29.942 [2024-07-25 00:47:52.427549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:29.942 [2024-07-25 00:47:52.427608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:21:29.942 [2024-07-25 00:47:52.427698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:29.942 [2024-07-25 00:47:52.428245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:29.942 [2024-07-25 00:47:52.428376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:29.942 [2024-07-25 00:47:52.428565] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:29.942 [2024-07-25 00:47:52.428611] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:29.942 [2024-07-25 00:47:52.428852] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:21:29.942 [2024-07-25 00:47:52.428942] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:29.942 [2024-07-25 00:47:52.429057] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:29.942 [2024-07-25 00:47:52.429425] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:21:29.942 [2024-07-25 00:47:52.429532] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:21:29.942 [2024-07-25 00:47:52.429760] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:29.942 pt3 00:21:29.942 00:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( 
i++ )) 00:21:29.942 00:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:29.942 00:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:21:29.942 00:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:29.942 00:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:29.942 00:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:29.942 00:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:29.942 00:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:29.942 00:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:29.942 00:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:29.942 00:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:29.942 00:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:29.942 00:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.942 00:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.202 00:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:30.202 "name": "raid_bdev1", 00:21:30.202 "uuid": "b3c83b7f-3d17-4e42-aac0-494a1124b897", 00:21:30.202 "strip_size_kb": 64, 00:21:30.202 "state": "online", 00:21:30.202 "raid_level": "concat", 00:21:30.202 "superblock": true, 00:21:30.202 "num_base_bdevs": 3, 00:21:30.202 "num_base_bdevs_discovered": 3, 00:21:30.202 "num_base_bdevs_operational": 3, 00:21:30.202 "base_bdevs_list": [ 00:21:30.202 { 00:21:30.202 "name": "pt1", 00:21:30.202 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:30.202 "is_configured": true, 00:21:30.202 "data_offset": 2048, 00:21:30.202 "data_size": 63488 00:21:30.202 }, 00:21:30.202 { 00:21:30.202 "name": "pt2", 00:21:30.202 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:30.202 "is_configured": true, 00:21:30.202 "data_offset": 2048, 00:21:30.202 "data_size": 63488 00:21:30.202 }, 00:21:30.202 { 00:21:30.202 "name": "pt3", 00:21:30.202 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:30.202 "is_configured": true, 00:21:30.202 "data_offset": 2048, 00:21:30.202 "data_size": 63488 00:21:30.202 } 00:21:30.202 ] 00:21:30.202 }' 00:21:30.202 00:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:30.202 00:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.770 00:47:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:21:30.770 00:47:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:30.770 00:47:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:30.770 00:47:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:30.770 00:47:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:30.770 00:47:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 
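The bdev_raid.sh@116-@128 records above re-read the raid bdev and compare its JSON against the expected state each time the configuration changes. A compact sketch of that check for the final online state, using the bdev_raid_get_bdevs RPC and jq selector shown in the trace (field names copied from the dumped JSON; the assertions are illustrative):

    # Sketch: check that raid_bdev1 is online, concat, 64 KiB strips, 3 base bdevs
    sock=/var/tmp/spdk-raid.sock
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r .state         <<<"$info") == online ]] || exit 1
    [[ $(jq -r .raid_level    <<<"$info") == concat ]] || exit 1
    [[ $(jq -r .strip_size_kb <<<"$info") == 64 ]]     || exit 1
    [[ $(jq -r '.base_bdevs_list | length' <<<"$info") == 3 ]] || exit 1
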
00:21:30.770 00:47:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:30.770 00:47:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:31.028 [2024-07-25 00:47:53.563847] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:31.028 00:47:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:31.028 "name": "raid_bdev1", 00:21:31.028 "aliases": [ 00:21:31.028 "b3c83b7f-3d17-4e42-aac0-494a1124b897" 00:21:31.028 ], 00:21:31.028 "product_name": "Raid Volume", 00:21:31.028 "block_size": 512, 00:21:31.028 "num_blocks": 190464, 00:21:31.028 "uuid": "b3c83b7f-3d17-4e42-aac0-494a1124b897", 00:21:31.028 "assigned_rate_limits": { 00:21:31.028 "rw_ios_per_sec": 0, 00:21:31.028 "rw_mbytes_per_sec": 0, 00:21:31.028 "r_mbytes_per_sec": 0, 00:21:31.028 "w_mbytes_per_sec": 0 00:21:31.028 }, 00:21:31.028 "claimed": false, 00:21:31.028 "zoned": false, 00:21:31.028 "supported_io_types": { 00:21:31.028 "read": true, 00:21:31.028 "write": true, 00:21:31.028 "unmap": true, 00:21:31.028 "flush": true, 00:21:31.028 "reset": true, 00:21:31.028 "nvme_admin": false, 00:21:31.028 "nvme_io": false, 00:21:31.028 "nvme_io_md": false, 00:21:31.028 "write_zeroes": true, 00:21:31.028 "zcopy": false, 00:21:31.028 "get_zone_info": false, 00:21:31.028 "zone_management": false, 00:21:31.028 "zone_append": false, 00:21:31.028 "compare": false, 00:21:31.028 "compare_and_write": false, 00:21:31.028 "abort": false, 00:21:31.028 "seek_hole": false, 00:21:31.028 "seek_data": false, 00:21:31.028 "copy": false, 00:21:31.028 "nvme_iov_md": false 00:21:31.028 }, 00:21:31.028 "memory_domains": [ 00:21:31.028 { 00:21:31.028 "dma_device_id": "system", 00:21:31.028 "dma_device_type": 1 00:21:31.028 }, 00:21:31.028 { 00:21:31.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.028 "dma_device_type": 2 00:21:31.028 }, 00:21:31.028 { 00:21:31.028 "dma_device_id": "system", 00:21:31.028 "dma_device_type": 1 00:21:31.028 }, 00:21:31.028 { 00:21:31.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.028 "dma_device_type": 2 00:21:31.028 }, 00:21:31.028 { 00:21:31.028 "dma_device_id": "system", 00:21:31.028 "dma_device_type": 1 00:21:31.028 }, 00:21:31.028 { 00:21:31.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.028 "dma_device_type": 2 00:21:31.028 } 00:21:31.028 ], 00:21:31.028 "driver_specific": { 00:21:31.028 "raid": { 00:21:31.028 "uuid": "b3c83b7f-3d17-4e42-aac0-494a1124b897", 00:21:31.028 "strip_size_kb": 64, 00:21:31.028 "state": "online", 00:21:31.028 "raid_level": "concat", 00:21:31.028 "superblock": true, 00:21:31.028 "num_base_bdevs": 3, 00:21:31.028 "num_base_bdevs_discovered": 3, 00:21:31.028 "num_base_bdevs_operational": 3, 00:21:31.028 "base_bdevs_list": [ 00:21:31.028 { 00:21:31.028 "name": "pt1", 00:21:31.028 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:31.028 "is_configured": true, 00:21:31.028 "data_offset": 2048, 00:21:31.028 "data_size": 63488 00:21:31.028 }, 00:21:31.028 { 00:21:31.028 "name": "pt2", 00:21:31.028 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:31.028 "is_configured": true, 00:21:31.028 "data_offset": 2048, 00:21:31.028 "data_size": 63488 00:21:31.028 }, 00:21:31.028 { 00:21:31.028 "name": "pt3", 00:21:31.028 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:31.028 "is_configured": true, 00:21:31.028 "data_offset": 2048, 00:21:31.028 "data_size": 63488 00:21:31.028 } 
00:21:31.028 ] 00:21:31.028 } 00:21:31.028 } 00:21:31.028 }' 00:21:31.028 00:47:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:31.028 00:47:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:31.028 pt2 00:21:31.028 pt3' 00:21:31.028 00:47:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:31.028 00:47:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:31.028 00:47:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:31.286 00:47:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:31.286 "name": "pt1", 00:21:31.286 "aliases": [ 00:21:31.286 "00000000-0000-0000-0000-000000000001" 00:21:31.286 ], 00:21:31.286 "product_name": "passthru", 00:21:31.286 "block_size": 512, 00:21:31.286 "num_blocks": 65536, 00:21:31.286 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:31.286 "assigned_rate_limits": { 00:21:31.286 "rw_ios_per_sec": 0, 00:21:31.286 "rw_mbytes_per_sec": 0, 00:21:31.286 "r_mbytes_per_sec": 0, 00:21:31.286 "w_mbytes_per_sec": 0 00:21:31.286 }, 00:21:31.286 "claimed": true, 00:21:31.286 "claim_type": "exclusive_write", 00:21:31.286 "zoned": false, 00:21:31.286 "supported_io_types": { 00:21:31.286 "read": true, 00:21:31.286 "write": true, 00:21:31.286 "unmap": true, 00:21:31.286 "flush": true, 00:21:31.286 "reset": true, 00:21:31.286 "nvme_admin": false, 00:21:31.286 "nvme_io": false, 00:21:31.286 "nvme_io_md": false, 00:21:31.286 "write_zeroes": true, 00:21:31.286 "zcopy": true, 00:21:31.286 "get_zone_info": false, 00:21:31.286 "zone_management": false, 00:21:31.286 "zone_append": false, 00:21:31.286 "compare": false, 00:21:31.286 "compare_and_write": false, 00:21:31.286 "abort": true, 00:21:31.286 "seek_hole": false, 00:21:31.286 "seek_data": false, 00:21:31.286 "copy": true, 00:21:31.286 "nvme_iov_md": false 00:21:31.286 }, 00:21:31.286 "memory_domains": [ 00:21:31.286 { 00:21:31.286 "dma_device_id": "system", 00:21:31.286 "dma_device_type": 1 00:21:31.286 }, 00:21:31.286 { 00:21:31.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.286 "dma_device_type": 2 00:21:31.286 } 00:21:31.286 ], 00:21:31.286 "driver_specific": { 00:21:31.286 "passthru": { 00:21:31.286 "name": "pt1", 00:21:31.286 "base_bdev_name": "malloc1" 00:21:31.286 } 00:21:31.286 } 00:21:31.286 }' 00:21:31.286 00:47:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:31.286 00:47:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:31.544 00:47:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:31.544 00:47:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:31.544 00:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:31.544 00:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:31.544 00:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:31.544 00:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:31.544 00:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:31.544 00:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 
-- # jq .dif_type 00:21:31.544 00:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:31.802 00:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:31.803 00:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:31.803 00:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:31.803 00:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:31.803 00:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:31.803 "name": "pt2", 00:21:31.803 "aliases": [ 00:21:31.803 "00000000-0000-0000-0000-000000000002" 00:21:31.803 ], 00:21:31.803 "product_name": "passthru", 00:21:31.803 "block_size": 512, 00:21:31.803 "num_blocks": 65536, 00:21:31.803 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:31.803 "assigned_rate_limits": { 00:21:31.803 "rw_ios_per_sec": 0, 00:21:31.803 "rw_mbytes_per_sec": 0, 00:21:31.803 "r_mbytes_per_sec": 0, 00:21:31.803 "w_mbytes_per_sec": 0 00:21:31.803 }, 00:21:31.803 "claimed": true, 00:21:31.803 "claim_type": "exclusive_write", 00:21:31.803 "zoned": false, 00:21:31.803 "supported_io_types": { 00:21:31.803 "read": true, 00:21:31.803 "write": true, 00:21:31.803 "unmap": true, 00:21:31.803 "flush": true, 00:21:31.803 "reset": true, 00:21:31.803 "nvme_admin": false, 00:21:31.803 "nvme_io": false, 00:21:31.803 "nvme_io_md": false, 00:21:31.803 "write_zeroes": true, 00:21:31.803 "zcopy": true, 00:21:31.803 "get_zone_info": false, 00:21:31.803 "zone_management": false, 00:21:31.803 "zone_append": false, 00:21:31.803 "compare": false, 00:21:31.803 "compare_and_write": false, 00:21:31.803 "abort": true, 00:21:31.803 "seek_hole": false, 00:21:31.803 "seek_data": false, 00:21:31.803 "copy": true, 00:21:31.803 "nvme_iov_md": false 00:21:31.803 }, 00:21:31.803 "memory_domains": [ 00:21:31.803 { 00:21:31.803 "dma_device_id": "system", 00:21:31.803 "dma_device_type": 1 00:21:31.803 }, 00:21:31.803 { 00:21:31.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.803 "dma_device_type": 2 00:21:31.803 } 00:21:31.803 ], 00:21:31.803 "driver_specific": { 00:21:31.803 "passthru": { 00:21:31.803 "name": "pt2", 00:21:31.803 "base_bdev_name": "malloc2" 00:21:31.803 } 00:21:31.803 } 00:21:31.803 }' 00:21:31.803 00:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:31.803 00:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:32.061 00:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:32.061 00:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:32.061 00:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:32.061 00:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:32.061 00:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:32.062 00:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:32.062 00:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:32.062 00:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:32.062 00:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:32.320 00:47:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:32.320 00:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:32.320 00:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:32.320 00:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:32.579 00:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:32.579 "name": "pt3", 00:21:32.579 "aliases": [ 00:21:32.579 "00000000-0000-0000-0000-000000000003" 00:21:32.579 ], 00:21:32.579 "product_name": "passthru", 00:21:32.579 "block_size": 512, 00:21:32.579 "num_blocks": 65536, 00:21:32.579 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:32.579 "assigned_rate_limits": { 00:21:32.579 "rw_ios_per_sec": 0, 00:21:32.579 "rw_mbytes_per_sec": 0, 00:21:32.579 "r_mbytes_per_sec": 0, 00:21:32.579 "w_mbytes_per_sec": 0 00:21:32.579 }, 00:21:32.579 "claimed": true, 00:21:32.579 "claim_type": "exclusive_write", 00:21:32.579 "zoned": false, 00:21:32.579 "supported_io_types": { 00:21:32.579 "read": true, 00:21:32.579 "write": true, 00:21:32.579 "unmap": true, 00:21:32.579 "flush": true, 00:21:32.579 "reset": true, 00:21:32.579 "nvme_admin": false, 00:21:32.579 "nvme_io": false, 00:21:32.579 "nvme_io_md": false, 00:21:32.579 "write_zeroes": true, 00:21:32.579 "zcopy": true, 00:21:32.579 "get_zone_info": false, 00:21:32.579 "zone_management": false, 00:21:32.579 "zone_append": false, 00:21:32.579 "compare": false, 00:21:32.579 "compare_and_write": false, 00:21:32.579 "abort": true, 00:21:32.579 "seek_hole": false, 00:21:32.579 "seek_data": false, 00:21:32.579 "copy": true, 00:21:32.579 "nvme_iov_md": false 00:21:32.579 }, 00:21:32.579 "memory_domains": [ 00:21:32.579 { 00:21:32.579 "dma_device_id": "system", 00:21:32.579 "dma_device_type": 1 00:21:32.579 }, 00:21:32.579 { 00:21:32.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.579 "dma_device_type": 2 00:21:32.579 } 00:21:32.579 ], 00:21:32.579 "driver_specific": { 00:21:32.579 "passthru": { 00:21:32.579 "name": "pt3", 00:21:32.579 "base_bdev_name": "malloc3" 00:21:32.579 } 00:21:32.579 } 00:21:32.579 }' 00:21:32.579 00:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:32.579 00:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:32.579 00:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:32.579 00:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:32.579 00:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:32.579 00:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:32.579 00:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:32.579 00:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:32.838 00:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:32.838 00:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:32.838 00:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:32.838 00:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:32.838 00:47:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:32.838 00:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:21:32.838 [2024-07-25 00:47:55.456157] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:32.838 00:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' b3c83b7f-3d17-4e42-aac0-494a1124b897 '!=' b3c83b7f-3d17-4e42-aac0-494a1124b897 ']' 00:21:32.838 00:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:21:32.838 00:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:32.838 00:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:32.838 00:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 130994 00:21:32.838 00:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 130994 ']' 00:21:32.838 00:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 130994 00:21:32.838 00:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:21:32.838 00:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:32.838 00:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 130994 00:21:33.097 00:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:33.097 00:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:33.097 00:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 130994' 00:21:33.097 killing process with pid 130994 00:21:33.097 00:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 130994 00:21:33.097 [2024-07-25 00:47:55.509194] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:33.097 00:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 130994 00:21:33.097 [2024-07-25 00:47:55.509405] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:33.097 [2024-07-25 00:47:55.509482] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:33.097 [2024-07-25 00:47:55.509492] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:21:33.356 [2024-07-25 00:47:55.843502] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:34.734 ************************************ 00:21:34.734 END TEST raid_superblock_test 00:21:34.734 ************************************ 00:21:34.734 00:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:21:34.734 00:21:34.734 real 0m15.361s 00:21:34.734 user 0m26.493s 00:21:34.734 sys 0m2.196s 00:21:34.734 00:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:34.734 00:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.994 00:47:57 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:21:34.994 00:47:57 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:34.994 00:47:57 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:34.994 
00:47:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:34.994 ************************************ 00:21:34.994 START TEST raid_read_error_test 00:21:34.994 ************************************ 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 read 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.rbhxiMBP39 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=131482 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 131482 /var/tmp/spdk-raid.sock 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 131482 ']' 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:34.994 00:47:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:34.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:34.994 00:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.994 [2024-07-25 00:47:57.541631] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:21:34.994 [2024-07-25 00:47:57.541848] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131482 ] 00:21:35.254 [2024-07-25 00:47:57.732252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.514 [2024-07-25 00:47:58.021200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.772 [2024-07-25 00:47:58.225708] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:36.030 00:47:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:36.030 00:47:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:21:36.030 00:47:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:36.031 00:47:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:36.289 BaseBdev1_malloc 00:21:36.289 00:47:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:21:36.548 true 00:21:36.548 00:47:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:36.548 [2024-07-25 00:47:59.120929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:36.548 [2024-07-25 00:47:59.121162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:36.548 [2024-07-25 00:47:59.121231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:21:36.548 [2024-07-25 00:47:59.121343] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:36.548 [2024-07-25 00:47:59.123684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:36.548 [2024-07-25 00:47:59.123850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:36.548 BaseBdev1 00:21:36.549 00:47:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:36.549 00:47:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:36.807 BaseBdev2_malloc 00:21:36.807 00:47:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:37.066 true 00:21:37.066 00:47:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:37.325 [2024-07-25 00:47:59.723742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:37.325 [2024-07-25 00:47:59.723956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:37.325 [2024-07-25 00:47:59.724028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:37.325 [2024-07-25 00:47:59.724121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:37.325 [2024-07-25 00:47:59.726437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:37.325 [2024-07-25 00:47:59.726593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:37.325 BaseBdev2 00:21:37.325 00:47:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:37.325 00:47:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:37.585 BaseBdev3_malloc 00:21:37.585 00:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:37.585 true 00:21:37.585 00:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:37.843 [2024-07-25 00:48:00.354170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:37.843 [2024-07-25 00:48:00.354437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:37.843 [2024-07-25 00:48:00.354508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:21:37.843 [2024-07-25 00:48:00.354602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:37.843 [2024-07-25 00:48:00.356835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:37.843 [2024-07-25 00:48:00.357004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:37.843 BaseBdev3 00:21:37.843 00:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:21:38.102 [2024-07-25 00:48:00.526292] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:38.102 [2024-07-25 00:48:00.528357] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:38.102 [2024-07-25 00:48:00.528558] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:38.102 [2024-07-25 00:48:00.528827] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 
00:21:38.102 [2024-07-25 00:48:00.528872] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:38.102 [2024-07-25 00:48:00.529066] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:21:38.102 [2024-07-25 00:48:00.529495] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:21:38.102 [2024-07-25 00:48:00.529600] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:21:38.102 [2024-07-25 00:48:00.529865] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:38.102 00:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:21:38.102 00:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:38.102 00:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:38.102 00:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:38.102 00:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:38.102 00:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:38.102 00:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:38.102 00:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:38.102 00:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:38.102 00:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:38.102 00:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.102 00:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.359 00:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:38.359 "name": "raid_bdev1", 00:21:38.359 "uuid": "9b4d3df5-1f16-4895-9ae0-ef7e7dd898f2", 00:21:38.359 "strip_size_kb": 64, 00:21:38.359 "state": "online", 00:21:38.359 "raid_level": "concat", 00:21:38.359 "superblock": true, 00:21:38.359 "num_base_bdevs": 3, 00:21:38.359 "num_base_bdevs_discovered": 3, 00:21:38.359 "num_base_bdevs_operational": 3, 00:21:38.359 "base_bdevs_list": [ 00:21:38.359 { 00:21:38.359 "name": "BaseBdev1", 00:21:38.359 "uuid": "b44c5054-9a60-505c-936d-7c9b0ecbb71f", 00:21:38.359 "is_configured": true, 00:21:38.359 "data_offset": 2048, 00:21:38.359 "data_size": 63488 00:21:38.359 }, 00:21:38.359 { 00:21:38.359 "name": "BaseBdev2", 00:21:38.359 "uuid": "d7356065-e9ce-5382-ad95-59438e249a01", 00:21:38.359 "is_configured": true, 00:21:38.359 "data_offset": 2048, 00:21:38.359 "data_size": 63488 00:21:38.359 }, 00:21:38.359 { 00:21:38.359 "name": "BaseBdev3", 00:21:38.359 "uuid": "9e38e870-2c6c-5def-b19b-19f6654be9b5", 00:21:38.359 "is_configured": true, 00:21:38.359 "data_offset": 2048, 00:21:38.359 "data_size": 63488 00:21:38.359 } 00:21:38.359 ] 00:21:38.359 }' 00:21:38.359 00:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:38.359 00:48:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.924 00:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 
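Note: the stack assembled above for the read-error test is three malloc bdevs, each wrapped in an error-injection bdev and exposed through a passthru bdev, then combined into a concat volume with an on-disk superblock. A minimal sketch of that construction for one base bdev plus the final assembly, using only RPC invocations that appear verbatim in the log; the other two base bdevs follow the same pattern:

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    # 32 MiB malloc bdev with 512-byte blocks -> error bdev EE_BaseBdev1_malloc -> passthru BaseBdev1
    $RPC bdev_malloc_create 32 512 -b BaseBdev1_malloc
    $RPC bdev_error_create BaseBdev1_malloc
    $RPC bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
    # repeat for BaseBdev2 and BaseBdev3, then build the volume under test (-s enables the superblock)
    $RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s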
00:21:38.924 00:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:38.924 [2024-07-25 00:48:01.372277] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:39.858 00:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:21:40.117 00:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:21:40.117 00:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:21:40.117 00:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:21:40.117 00:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:21:40.117 00:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:40.117 00:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:40.117 00:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:40.117 00:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:40.117 00:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:40.117 00:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:40.117 00:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:40.117 00:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:40.117 00:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:40.117 00:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.117 00:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:40.117 00:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:40.117 "name": "raid_bdev1", 00:21:40.117 "uuid": "9b4d3df5-1f16-4895-9ae0-ef7e7dd898f2", 00:21:40.117 "strip_size_kb": 64, 00:21:40.117 "state": "online", 00:21:40.117 "raid_level": "concat", 00:21:40.117 "superblock": true, 00:21:40.117 "num_base_bdevs": 3, 00:21:40.117 "num_base_bdevs_discovered": 3, 00:21:40.117 "num_base_bdevs_operational": 3, 00:21:40.117 "base_bdevs_list": [ 00:21:40.117 { 00:21:40.117 "name": "BaseBdev1", 00:21:40.117 "uuid": "b44c5054-9a60-505c-936d-7c9b0ecbb71f", 00:21:40.117 "is_configured": true, 00:21:40.117 "data_offset": 2048, 00:21:40.117 "data_size": 63488 00:21:40.117 }, 00:21:40.117 { 00:21:40.117 "name": "BaseBdev2", 00:21:40.118 "uuid": "d7356065-e9ce-5382-ad95-59438e249a01", 00:21:40.118 "is_configured": true, 00:21:40.118 "data_offset": 2048, 00:21:40.118 "data_size": 63488 00:21:40.118 }, 00:21:40.118 { 00:21:40.118 "name": "BaseBdev3", 00:21:40.118 "uuid": "9e38e870-2c6c-5def-b19b-19f6654be9b5", 00:21:40.118 "is_configured": true, 00:21:40.118 "data_offset": 2048, 00:21:40.118 "data_size": 63488 00:21:40.118 } 00:21:40.118 ] 00:21:40.118 }' 00:21:40.118 00:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:21:40.118 00:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.686 00:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:40.945 [2024-07-25 00:48:03.437493] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:40.945 [2024-07-25 00:48:03.437539] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:40.945 [2024-07-25 00:48:03.440031] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:40.945 [2024-07-25 00:48:03.440076] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:40.945 [2024-07-25 00:48:03.440105] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:40.945 [2024-07-25 00:48:03.440113] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:21:40.945 0 00:21:40.945 00:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 131482 00:21:40.945 00:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 131482 ']' 00:21:40.945 00:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 131482 00:21:40.945 00:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:21:40.945 00:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:40.945 00:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 131482 00:21:40.945 killing process with pid 131482 00:21:40.945 00:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:40.945 00:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:40.945 00:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 131482' 00:21:40.945 00:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 131482 00:21:40.945 00:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 131482 00:21:40.945 [2024-07-25 00:48:03.477793] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:41.204 [2024-07-25 00:48:03.679546] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:42.581 00:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:21:42.581 00:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.rbhxiMBP39 00:21:42.581 00:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:21:42.581 ************************************ 00:21:42.581 END TEST raid_read_error_test 00:21:42.581 ************************************ 00:21:42.581 00:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:21:42.581 00:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:21:42.582 00:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:42.582 00:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:42.582 00:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:21:42.582 00:21:42.582 real 
0m7.501s 00:21:42.582 user 0m10.847s 00:21:42.582 sys 0m1.080s 00:21:42.582 00:48:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:42.582 00:48:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.582 00:48:04 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:21:42.582 00:48:04 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:42.582 00:48:04 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:42.582 00:48:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:42.582 ************************************ 00:21:42.582 START TEST raid_write_error_test 00:21:42.582 ************************************ 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 write 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:21:42.582 00:48:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.9Fe9g1ccI6 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=131684 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 131684 /var/tmp/spdk-raid.sock 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 131684 ']' 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:42.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:42.582 00:48:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.582 [2024-07-25 00:48:05.100665] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:21:42.582 [2024-07-25 00:48:05.100813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131684 ] 00:21:42.842 [2024-07-25 00:48:05.256353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.842 [2024-07-25 00:48:05.438982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.101 [2024-07-25 00:48:05.630053] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:43.360 00:48:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:43.360 00:48:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:21:43.360 00:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:43.360 00:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:43.620 BaseBdev1_malloc 00:21:43.881 00:48:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:21:43.881 true 00:21:43.882 00:48:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:44.141 [2024-07-25 00:48:06.669547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:44.141 [2024-07-25 00:48:06.669649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:44.141 [2024-07-25 00:48:06.669683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:21:44.141 [2024-07-25 
00:48:06.669702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:44.141 [2024-07-25 00:48:06.671954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:44.141 [2024-07-25 00:48:06.672002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:44.141 BaseBdev1 00:21:44.141 00:48:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:44.141 00:48:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:44.400 BaseBdev2_malloc 00:21:44.400 00:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:44.659 true 00:21:44.659 00:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:44.918 [2024-07-25 00:48:07.436721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:44.918 [2024-07-25 00:48:07.436822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:44.918 [2024-07-25 00:48:07.436858] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:44.918 [2024-07-25 00:48:07.436877] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:44.918 [2024-07-25 00:48:07.438901] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:44.918 [2024-07-25 00:48:07.438948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:44.918 BaseBdev2 00:21:44.918 00:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:44.918 00:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:45.177 BaseBdev3_malloc 00:21:45.177 00:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:45.436 true 00:21:45.436 00:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:45.696 [2024-07-25 00:48:08.125947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:45.696 [2024-07-25 00:48:08.126024] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:45.696 [2024-07-25 00:48:08.126054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:21:45.696 [2024-07-25 00:48:08.126077] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:45.696 [2024-07-25 00:48:08.128225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:45.696 [2024-07-25 00:48:08.128273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:45.696 BaseBdev3 00:21:45.696 00:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:21:45.696 [2024-07-25 00:48:08.302026] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:45.696 [2024-07-25 00:48:08.303912] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:45.696 [2024-07-25 00:48:08.303987] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:45.696 [2024-07-25 00:48:08.304172] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:21:45.696 [2024-07-25 00:48:08.304181] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:45.696 [2024-07-25 00:48:08.304272] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:21:45.696 [2024-07-25 00:48:08.304581] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:21:45.696 [2024-07-25 00:48:08.304600] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:21:45.696 [2024-07-25 00:48:08.304734] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:45.696 00:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:21:45.696 00:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:45.696 00:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:45.696 00:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:45.696 00:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:45.696 00:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:45.696 00:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:45.696 00:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:45.696 00:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:45.696 00:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:45.696 00:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.696 00:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.955 00:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:45.955 "name": "raid_bdev1", 00:21:45.955 "uuid": "09289131-0090-4983-a7af-14bdd883a4e5", 00:21:45.955 "strip_size_kb": 64, 00:21:45.955 "state": "online", 00:21:45.955 "raid_level": "concat", 00:21:45.955 "superblock": true, 00:21:45.955 "num_base_bdevs": 3, 00:21:45.955 "num_base_bdevs_discovered": 3, 00:21:45.955 "num_base_bdevs_operational": 3, 00:21:45.955 "base_bdevs_list": [ 00:21:45.955 { 00:21:45.955 "name": "BaseBdev1", 00:21:45.955 "uuid": "f1251f03-b47b-5099-852d-eeb44c6b2473", 00:21:45.955 "is_configured": true, 00:21:45.955 "data_offset": 2048, 00:21:45.955 "data_size": 63488 00:21:45.955 }, 00:21:45.955 { 00:21:45.955 "name": "BaseBdev2", 00:21:45.955 "uuid": "8573d07e-c276-57d3-94c9-8fa6f60d2460", 00:21:45.955 "is_configured": true, 
00:21:45.955 "data_offset": 2048, 00:21:45.955 "data_size": 63488 00:21:45.955 }, 00:21:45.955 { 00:21:45.955 "name": "BaseBdev3", 00:21:45.955 "uuid": "ac613998-43c6-5a5e-b3e7-75063bae30ac", 00:21:45.955 "is_configured": true, 00:21:45.955 "data_offset": 2048, 00:21:45.955 "data_size": 63488 00:21:45.955 } 00:21:45.955 ] 00:21:45.955 }' 00:21:45.955 00:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:45.955 00:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.531 00:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:21:46.531 00:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:46.531 [2024-07-25 00:48:09.079355] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:47.472 00:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:21:47.731 00:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:21:47.731 00:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:21:47.731 00:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:21:47.731 00:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:21:47.731 00:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:47.731 00:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:47.731 00:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:47.731 00:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:47.731 00:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:47.732 00:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:47.732 00:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:47.732 00:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:47.732 00:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:47.732 00:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.732 00:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.991 00:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:47.991 "name": "raid_bdev1", 00:21:47.991 "uuid": "09289131-0090-4983-a7af-14bdd883a4e5", 00:21:47.991 "strip_size_kb": 64, 00:21:47.991 "state": "online", 00:21:47.991 "raid_level": "concat", 00:21:47.991 "superblock": true, 00:21:47.991 "num_base_bdevs": 3, 00:21:47.991 "num_base_bdevs_discovered": 3, 00:21:47.991 "num_base_bdevs_operational": 3, 00:21:47.991 "base_bdevs_list": [ 00:21:47.991 { 00:21:47.991 "name": "BaseBdev1", 00:21:47.991 "uuid": "f1251f03-b47b-5099-852d-eeb44c6b2473", 00:21:47.991 "is_configured": true, 
00:21:47.991 "data_offset": 2048, 00:21:47.991 "data_size": 63488 00:21:47.991 }, 00:21:47.991 { 00:21:47.991 "name": "BaseBdev2", 00:21:47.991 "uuid": "8573d07e-c276-57d3-94c9-8fa6f60d2460", 00:21:47.991 "is_configured": true, 00:21:47.991 "data_offset": 2048, 00:21:47.991 "data_size": 63488 00:21:47.991 }, 00:21:47.991 { 00:21:47.991 "name": "BaseBdev3", 00:21:47.991 "uuid": "ac613998-43c6-5a5e-b3e7-75063bae30ac", 00:21:47.991 "is_configured": true, 00:21:47.991 "data_offset": 2048, 00:21:47.991 "data_size": 63488 00:21:47.991 } 00:21:47.991 ] 00:21:47.991 }' 00:21:47.991 00:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:47.991 00:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.559 00:48:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:48.827 [2024-07-25 00:48:11.245808] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:48.827 [2024-07-25 00:48:11.245854] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:48.827 [2024-07-25 00:48:11.248145] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:48.827 [2024-07-25 00:48:11.248189] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:48.827 [2024-07-25 00:48:11.248218] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:48.827 [2024-07-25 00:48:11.248227] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:21:48.827 0 00:21:48.827 00:48:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 131684 00:21:48.827 00:48:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 131684 ']' 00:21:48.827 00:48:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 131684 00:21:48.827 00:48:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:21:48.827 00:48:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:48.827 00:48:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 131684 00:21:48.827 00:48:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:48.827 00:48:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:48.827 00:48:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 131684' 00:21:48.827 killing process with pid 131684 00:21:48.827 00:48:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 131684 00:21:48.827 [2024-07-25 00:48:11.299520] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:48.827 00:48:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 131684 00:21:49.089 [2024-07-25 00:48:11.506589] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:50.467 00:48:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.9Fe9g1ccI6 00:21:50.467 00:48:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:21:50.467 00:48:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 
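Note: the figure extracted just below is the failure rate bdevperf recorded while write errors were injected into the first base bdev. A condensed sketch of this last phase, with every path and name copied from the log; the actual run interleaves the bdevperf perform_tests call and a state re-check between these steps, and the grep/awk pipeline pulls the column the script stores as fail_per_s:

    # arm write failures on one base bdev of the concat volume
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_error_inject_error EE_BaseBdev1_malloc write failure
    # ... bdevperf perform_tests runs I/O, the raid is verified still online ...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
    # after the bdevperf process exits, read its log back for the per-second failure column
    grep -v Job /raidtest/tmp.9Fe9g1ccI6 | grep raid_bdev1 | awk '{print $6}'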
00:21:50.467 ************************************ 00:21:50.467 END TEST raid_write_error_test 00:21:50.467 ************************************ 00:21:50.467 00:48:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.46 00:21:50.467 00:48:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:21:50.467 00:48:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:50.467 00:48:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:50.467 00:48:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.46 != \0\.\0\0 ]] 00:21:50.467 00:21:50.467 real 0m7.725s 00:21:50.467 user 0m11.308s 00:21:50.467 sys 0m1.131s 00:21:50.467 00:48:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:50.467 00:48:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.467 00:48:12 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:21:50.467 00:48:12 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:21:50.467 00:48:12 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:50.467 00:48:12 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:50.467 00:48:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:50.467 ************************************ 00:21:50.467 START TEST raid_state_function_test 00:21:50.467 ************************************ 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 false 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:50.467 00:48:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=131880 00:21:50.467 Process raid pid: 131880 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 131880' 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 131880 /var/tmp/spdk-raid.sock 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 131880 ']' 00:21:50.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:50.467 00:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.467 [2024-07-25 00:48:12.927025] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
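The raid_state_function_test setup above launches a standalone bdev_svc app on a private RPC socket and waits for it to listen before driving the bdev_raid RPCs. The same flow can be reproduced by hand roughly as follows; the polling loop is an assumption standing in for the autotest waitforlisten helper, while the binary path, flags, and RPC invocations are taken from the trace:

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  # assumption: poll rpc_get_methods until the app answers on the socket
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # the raid bdev can be created before its base bdevs exist; it stays in the "configuring" state
  $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  $rpc bdev_malloc_create 32 512 -b BaseBdev1
  $rpc bdev_malloc_create 32 512 -b BaseBdev2
  $rpc bdev_malloc_create 32 512 -b BaseBdev3
  # once all three base bdevs are discovered, the state flips to "online"
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'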
00:21:50.467 [2024-07-25 00:48:12.927344] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:50.467 [2024-07-25 00:48:13.117163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.726 [2024-07-25 00:48:13.311452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.983 [2024-07-25 00:48:13.508223] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:51.241 00:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:51.241 00:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:21:51.241 00:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:51.500 [2024-07-25 00:48:14.113788] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:51.500 [2024-07-25 00:48:14.113864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:51.500 [2024-07-25 00:48:14.113875] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:51.500 [2024-07-25 00:48:14.113900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:51.500 [2024-07-25 00:48:14.113908] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:51.500 [2024-07-25 00:48:14.113924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:51.500 00:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:51.500 00:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:51.500 00:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:51.500 00:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:51.500 00:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:51.500 00:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:51.500 00:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:51.500 00:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:51.500 00:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:51.500 00:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:51.500 00:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.500 00:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:51.759 00:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:51.759 "name": "Existed_Raid", 00:21:51.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.759 
"strip_size_kb": 0, 00:21:51.759 "state": "configuring", 00:21:51.759 "raid_level": "raid1", 00:21:51.759 "superblock": false, 00:21:51.759 "num_base_bdevs": 3, 00:21:51.759 "num_base_bdevs_discovered": 0, 00:21:51.759 "num_base_bdevs_operational": 3, 00:21:51.759 "base_bdevs_list": [ 00:21:51.759 { 00:21:51.759 "name": "BaseBdev1", 00:21:51.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.759 "is_configured": false, 00:21:51.759 "data_offset": 0, 00:21:51.759 "data_size": 0 00:21:51.759 }, 00:21:51.759 { 00:21:51.759 "name": "BaseBdev2", 00:21:51.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.759 "is_configured": false, 00:21:51.759 "data_offset": 0, 00:21:51.759 "data_size": 0 00:21:51.759 }, 00:21:51.759 { 00:21:51.759 "name": "BaseBdev3", 00:21:51.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.759 "is_configured": false, 00:21:51.759 "data_offset": 0, 00:21:51.759 "data_size": 0 00:21:51.759 } 00:21:51.759 ] 00:21:51.759 }' 00:21:51.759 00:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:51.759 00:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.326 00:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:52.584 [2024-07-25 00:48:15.129845] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:52.585 [2024-07-25 00:48:15.129877] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:21:52.585 00:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:52.843 [2024-07-25 00:48:15.301884] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:52.843 [2024-07-25 00:48:15.301937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:52.843 [2024-07-25 00:48:15.301946] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:52.843 [2024-07-25 00:48:15.301961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:52.843 [2024-07-25 00:48:15.301968] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:52.843 [2024-07-25 00:48:15.301988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:52.843 00:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:53.102 [2024-07-25 00:48:15.574182] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:53.102 BaseBdev1 00:21:53.102 00:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:53.102 00:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:53.102 00:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:53.102 00:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:53.102 00:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- 
# [[ -z '' ]] 00:21:53.102 00:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:53.102 00:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:53.361 00:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:53.361 [ 00:21:53.361 { 00:21:53.361 "name": "BaseBdev1", 00:21:53.361 "aliases": [ 00:21:53.361 "5936452a-c446-4db0-ad16-9388fb4501fa" 00:21:53.361 ], 00:21:53.361 "product_name": "Malloc disk", 00:21:53.361 "block_size": 512, 00:21:53.361 "num_blocks": 65536, 00:21:53.361 "uuid": "5936452a-c446-4db0-ad16-9388fb4501fa", 00:21:53.361 "assigned_rate_limits": { 00:21:53.361 "rw_ios_per_sec": 0, 00:21:53.361 "rw_mbytes_per_sec": 0, 00:21:53.361 "r_mbytes_per_sec": 0, 00:21:53.361 "w_mbytes_per_sec": 0 00:21:53.361 }, 00:21:53.361 "claimed": true, 00:21:53.361 "claim_type": "exclusive_write", 00:21:53.361 "zoned": false, 00:21:53.361 "supported_io_types": { 00:21:53.361 "read": true, 00:21:53.361 "write": true, 00:21:53.361 "unmap": true, 00:21:53.361 "flush": true, 00:21:53.361 "reset": true, 00:21:53.361 "nvme_admin": false, 00:21:53.361 "nvme_io": false, 00:21:53.361 "nvme_io_md": false, 00:21:53.361 "write_zeroes": true, 00:21:53.361 "zcopy": true, 00:21:53.361 "get_zone_info": false, 00:21:53.361 "zone_management": false, 00:21:53.361 "zone_append": false, 00:21:53.361 "compare": false, 00:21:53.361 "compare_and_write": false, 00:21:53.361 "abort": true, 00:21:53.361 "seek_hole": false, 00:21:53.361 "seek_data": false, 00:21:53.361 "copy": true, 00:21:53.361 "nvme_iov_md": false 00:21:53.361 }, 00:21:53.361 "memory_domains": [ 00:21:53.361 { 00:21:53.361 "dma_device_id": "system", 00:21:53.361 "dma_device_type": 1 00:21:53.361 }, 00:21:53.361 { 00:21:53.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:53.361 "dma_device_type": 2 00:21:53.361 } 00:21:53.361 ], 00:21:53.361 "driver_specific": {} 00:21:53.361 } 00:21:53.361 ] 00:21:53.361 00:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:53.361 00:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:53.361 00:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:53.361 00:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:53.361 00:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:53.361 00:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:53.361 00:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:53.361 00:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:53.361 00:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:53.361 00:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:53.361 00:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:53.361 00:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.361 00:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:53.620 00:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:53.620 "name": "Existed_Raid", 00:21:53.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.620 "strip_size_kb": 0, 00:21:53.620 "state": "configuring", 00:21:53.620 "raid_level": "raid1", 00:21:53.620 "superblock": false, 00:21:53.620 "num_base_bdevs": 3, 00:21:53.620 "num_base_bdevs_discovered": 1, 00:21:53.620 "num_base_bdevs_operational": 3, 00:21:53.620 "base_bdevs_list": [ 00:21:53.620 { 00:21:53.620 "name": "BaseBdev1", 00:21:53.620 "uuid": "5936452a-c446-4db0-ad16-9388fb4501fa", 00:21:53.620 "is_configured": true, 00:21:53.620 "data_offset": 0, 00:21:53.620 "data_size": 65536 00:21:53.620 }, 00:21:53.620 { 00:21:53.620 "name": "BaseBdev2", 00:21:53.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.620 "is_configured": false, 00:21:53.620 "data_offset": 0, 00:21:53.620 "data_size": 0 00:21:53.620 }, 00:21:53.620 { 00:21:53.620 "name": "BaseBdev3", 00:21:53.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.620 "is_configured": false, 00:21:53.620 "data_offset": 0, 00:21:53.620 "data_size": 0 00:21:53.620 } 00:21:53.620 ] 00:21:53.620 }' 00:21:53.620 00:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:53.620 00:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.187 00:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:54.446 [2024-07-25 00:48:16.866476] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:54.446 [2024-07-25 00:48:16.866519] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:21:54.446 00:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:54.446 [2024-07-25 00:48:17.038516] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:54.446 [2024-07-25 00:48:17.040346] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:54.446 [2024-07-25 00:48:17.040413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:54.446 [2024-07-25 00:48:17.040422] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:54.446 [2024-07-25 00:48:17.040457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:54.446 00:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:54.446 00:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:54.446 00:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:54.446 00:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:54.446 00:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:21:54.446 00:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:54.446 00:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:54.446 00:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:54.446 00:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:54.446 00:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:54.446 00:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:54.446 00:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:54.446 00:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.446 00:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:54.705 00:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:54.705 "name": "Existed_Raid", 00:21:54.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.705 "strip_size_kb": 0, 00:21:54.705 "state": "configuring", 00:21:54.705 "raid_level": "raid1", 00:21:54.705 "superblock": false, 00:21:54.705 "num_base_bdevs": 3, 00:21:54.705 "num_base_bdevs_discovered": 1, 00:21:54.705 "num_base_bdevs_operational": 3, 00:21:54.705 "base_bdevs_list": [ 00:21:54.705 { 00:21:54.705 "name": "BaseBdev1", 00:21:54.705 "uuid": "5936452a-c446-4db0-ad16-9388fb4501fa", 00:21:54.705 "is_configured": true, 00:21:54.705 "data_offset": 0, 00:21:54.705 "data_size": 65536 00:21:54.705 }, 00:21:54.705 { 00:21:54.705 "name": "BaseBdev2", 00:21:54.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.705 "is_configured": false, 00:21:54.705 "data_offset": 0, 00:21:54.705 "data_size": 0 00:21:54.705 }, 00:21:54.705 { 00:21:54.705 "name": "BaseBdev3", 00:21:54.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.705 "is_configured": false, 00:21:54.705 "data_offset": 0, 00:21:54.705 "data_size": 0 00:21:54.705 } 00:21:54.705 ] 00:21:54.705 }' 00:21:54.705 00:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:54.705 00:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.272 00:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:55.531 [2024-07-25 00:48:18.153499] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:55.531 BaseBdev2 00:21:55.531 00:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:55.531 00:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:55.531 00:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:55.531 00:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:55.531 00:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:55.531 00:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:55.531 00:48:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:55.789 00:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:56.048 [ 00:21:56.048 { 00:21:56.048 "name": "BaseBdev2", 00:21:56.048 "aliases": [ 00:21:56.048 "b347a2e3-0ede-4f58-bca4-eff2d262c51d" 00:21:56.048 ], 00:21:56.048 "product_name": "Malloc disk", 00:21:56.048 "block_size": 512, 00:21:56.048 "num_blocks": 65536, 00:21:56.048 "uuid": "b347a2e3-0ede-4f58-bca4-eff2d262c51d", 00:21:56.048 "assigned_rate_limits": { 00:21:56.048 "rw_ios_per_sec": 0, 00:21:56.048 "rw_mbytes_per_sec": 0, 00:21:56.048 "r_mbytes_per_sec": 0, 00:21:56.048 "w_mbytes_per_sec": 0 00:21:56.048 }, 00:21:56.048 "claimed": true, 00:21:56.048 "claim_type": "exclusive_write", 00:21:56.048 "zoned": false, 00:21:56.048 "supported_io_types": { 00:21:56.048 "read": true, 00:21:56.048 "write": true, 00:21:56.048 "unmap": true, 00:21:56.048 "flush": true, 00:21:56.048 "reset": true, 00:21:56.048 "nvme_admin": false, 00:21:56.048 "nvme_io": false, 00:21:56.048 "nvme_io_md": false, 00:21:56.048 "write_zeroes": true, 00:21:56.048 "zcopy": true, 00:21:56.048 "get_zone_info": false, 00:21:56.048 "zone_management": false, 00:21:56.048 "zone_append": false, 00:21:56.048 "compare": false, 00:21:56.048 "compare_and_write": false, 00:21:56.048 "abort": true, 00:21:56.048 "seek_hole": false, 00:21:56.048 "seek_data": false, 00:21:56.048 "copy": true, 00:21:56.049 "nvme_iov_md": false 00:21:56.049 }, 00:21:56.049 "memory_domains": [ 00:21:56.049 { 00:21:56.049 "dma_device_id": "system", 00:21:56.049 "dma_device_type": 1 00:21:56.049 }, 00:21:56.049 { 00:21:56.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.049 "dma_device_type": 2 00:21:56.049 } 00:21:56.049 ], 00:21:56.049 "driver_specific": {} 00:21:56.049 } 00:21:56.049 ] 00:21:56.049 00:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:56.049 00:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:56.049 00:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:56.049 00:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:56.049 00:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:56.049 00:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:56.049 00:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:56.049 00:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:56.049 00:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:56.049 00:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:56.049 00:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:56.049 00:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:56.049 00:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:56.049 00:48:18 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:56.049 00:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.308 00:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:56.308 "name": "Existed_Raid", 00:21:56.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.308 "strip_size_kb": 0, 00:21:56.308 "state": "configuring", 00:21:56.308 "raid_level": "raid1", 00:21:56.308 "superblock": false, 00:21:56.308 "num_base_bdevs": 3, 00:21:56.308 "num_base_bdevs_discovered": 2, 00:21:56.308 "num_base_bdevs_operational": 3, 00:21:56.308 "base_bdevs_list": [ 00:21:56.308 { 00:21:56.308 "name": "BaseBdev1", 00:21:56.308 "uuid": "5936452a-c446-4db0-ad16-9388fb4501fa", 00:21:56.308 "is_configured": true, 00:21:56.308 "data_offset": 0, 00:21:56.308 "data_size": 65536 00:21:56.308 }, 00:21:56.308 { 00:21:56.308 "name": "BaseBdev2", 00:21:56.308 "uuid": "b347a2e3-0ede-4f58-bca4-eff2d262c51d", 00:21:56.308 "is_configured": true, 00:21:56.308 "data_offset": 0, 00:21:56.308 "data_size": 65536 00:21:56.308 }, 00:21:56.308 { 00:21:56.308 "name": "BaseBdev3", 00:21:56.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.308 "is_configured": false, 00:21:56.308 "data_offset": 0, 00:21:56.308 "data_size": 0 00:21:56.308 } 00:21:56.308 ] 00:21:56.308 }' 00:21:56.308 00:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:56.308 00:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.877 00:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:57.136 [2024-07-25 00:48:19.752545] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:57.136 [2024-07-25 00:48:19.752608] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:21:57.136 [2024-07-25 00:48:19.752617] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:57.136 [2024-07-25 00:48:19.752755] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:21:57.136 [2024-07-25 00:48:19.753126] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:21:57.136 [2024-07-25 00:48:19.753138] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:21:57.136 [2024-07-25 00:48:19.753374] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:57.136 BaseBdev3 00:21:57.136 00:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:21:57.136 00:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:57.136 00:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:57.136 00:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:57.136 00:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:57.136 00:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:57.136 00:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:57.395 00:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:57.655 [ 00:21:57.655 { 00:21:57.655 "name": "BaseBdev3", 00:21:57.655 "aliases": [ 00:21:57.655 "000ec0bc-d1be-4fab-8cc3-95bfcc81fa88" 00:21:57.655 ], 00:21:57.655 "product_name": "Malloc disk", 00:21:57.655 "block_size": 512, 00:21:57.655 "num_blocks": 65536, 00:21:57.655 "uuid": "000ec0bc-d1be-4fab-8cc3-95bfcc81fa88", 00:21:57.655 "assigned_rate_limits": { 00:21:57.655 "rw_ios_per_sec": 0, 00:21:57.655 "rw_mbytes_per_sec": 0, 00:21:57.655 "r_mbytes_per_sec": 0, 00:21:57.655 "w_mbytes_per_sec": 0 00:21:57.655 }, 00:21:57.655 "claimed": true, 00:21:57.655 "claim_type": "exclusive_write", 00:21:57.655 "zoned": false, 00:21:57.655 "supported_io_types": { 00:21:57.655 "read": true, 00:21:57.655 "write": true, 00:21:57.655 "unmap": true, 00:21:57.655 "flush": true, 00:21:57.655 "reset": true, 00:21:57.655 "nvme_admin": false, 00:21:57.655 "nvme_io": false, 00:21:57.655 "nvme_io_md": false, 00:21:57.655 "write_zeroes": true, 00:21:57.655 "zcopy": true, 00:21:57.655 "get_zone_info": false, 00:21:57.655 "zone_management": false, 00:21:57.655 "zone_append": false, 00:21:57.655 "compare": false, 00:21:57.655 "compare_and_write": false, 00:21:57.655 "abort": true, 00:21:57.655 "seek_hole": false, 00:21:57.655 "seek_data": false, 00:21:57.655 "copy": true, 00:21:57.655 "nvme_iov_md": false 00:21:57.655 }, 00:21:57.655 "memory_domains": [ 00:21:57.655 { 00:21:57.655 "dma_device_id": "system", 00:21:57.655 "dma_device_type": 1 00:21:57.655 }, 00:21:57.655 { 00:21:57.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:57.655 "dma_device_type": 2 00:21:57.655 } 00:21:57.655 ], 00:21:57.655 "driver_specific": {} 00:21:57.655 } 00:21:57.655 ] 00:21:57.655 00:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:57.655 00:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:57.655 00:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:57.655 00:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:21:57.655 00:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:57.655 00:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:57.655 00:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:57.655 00:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:57.655 00:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:57.655 00:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:57.655 00:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:57.655 00:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:57.655 00:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:57.655 00:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.655 00:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:57.915 00:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:57.915 "name": "Existed_Raid", 00:21:57.915 "uuid": "e43e9956-141e-440a-8130-8521b52da79e", 00:21:57.915 "strip_size_kb": 0, 00:21:57.915 "state": "online", 00:21:57.915 "raid_level": "raid1", 00:21:57.915 "superblock": false, 00:21:57.915 "num_base_bdevs": 3, 00:21:57.915 "num_base_bdevs_discovered": 3, 00:21:57.915 "num_base_bdevs_operational": 3, 00:21:57.915 "base_bdevs_list": [ 00:21:57.915 { 00:21:57.915 "name": "BaseBdev1", 00:21:57.915 "uuid": "5936452a-c446-4db0-ad16-9388fb4501fa", 00:21:57.915 "is_configured": true, 00:21:57.915 "data_offset": 0, 00:21:57.915 "data_size": 65536 00:21:57.915 }, 00:21:57.915 { 00:21:57.915 "name": "BaseBdev2", 00:21:57.915 "uuid": "b347a2e3-0ede-4f58-bca4-eff2d262c51d", 00:21:57.915 "is_configured": true, 00:21:57.915 "data_offset": 0, 00:21:57.915 "data_size": 65536 00:21:57.915 }, 00:21:57.915 { 00:21:57.915 "name": "BaseBdev3", 00:21:57.915 "uuid": "000ec0bc-d1be-4fab-8cc3-95bfcc81fa88", 00:21:57.915 "is_configured": true, 00:21:57.915 "data_offset": 0, 00:21:57.915 "data_size": 65536 00:21:57.915 } 00:21:57.915 ] 00:21:57.915 }' 00:21:57.915 00:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:57.915 00:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.483 00:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:58.483 00:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:58.483 00:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:58.483 00:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:58.483 00:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:58.483 00:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:58.483 00:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:58.483 00:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:58.743 [2024-07-25 00:48:21.169039] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:58.743 00:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:58.743 "name": "Existed_Raid", 00:21:58.743 "aliases": [ 00:21:58.743 "e43e9956-141e-440a-8130-8521b52da79e" 00:21:58.743 ], 00:21:58.743 "product_name": "Raid Volume", 00:21:58.743 "block_size": 512, 00:21:58.743 "num_blocks": 65536, 00:21:58.743 "uuid": "e43e9956-141e-440a-8130-8521b52da79e", 00:21:58.743 "assigned_rate_limits": { 00:21:58.743 "rw_ios_per_sec": 0, 00:21:58.743 "rw_mbytes_per_sec": 0, 00:21:58.743 "r_mbytes_per_sec": 0, 00:21:58.743 "w_mbytes_per_sec": 0 00:21:58.743 }, 00:21:58.743 "claimed": false, 00:21:58.743 "zoned": false, 00:21:58.743 "supported_io_types": { 00:21:58.743 "read": true, 00:21:58.743 "write": true, 00:21:58.743 "unmap": false, 00:21:58.743 "flush": false, 00:21:58.743 "reset": true, 00:21:58.743 "nvme_admin": false, 00:21:58.743 
"nvme_io": false, 00:21:58.743 "nvme_io_md": false, 00:21:58.743 "write_zeroes": true, 00:21:58.743 "zcopy": false, 00:21:58.743 "get_zone_info": false, 00:21:58.743 "zone_management": false, 00:21:58.743 "zone_append": false, 00:21:58.743 "compare": false, 00:21:58.743 "compare_and_write": false, 00:21:58.743 "abort": false, 00:21:58.743 "seek_hole": false, 00:21:58.743 "seek_data": false, 00:21:58.743 "copy": false, 00:21:58.743 "nvme_iov_md": false 00:21:58.743 }, 00:21:58.743 "memory_domains": [ 00:21:58.743 { 00:21:58.743 "dma_device_id": "system", 00:21:58.743 "dma_device_type": 1 00:21:58.743 }, 00:21:58.743 { 00:21:58.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.743 "dma_device_type": 2 00:21:58.743 }, 00:21:58.743 { 00:21:58.743 "dma_device_id": "system", 00:21:58.743 "dma_device_type": 1 00:21:58.743 }, 00:21:58.743 { 00:21:58.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.743 "dma_device_type": 2 00:21:58.743 }, 00:21:58.743 { 00:21:58.743 "dma_device_id": "system", 00:21:58.743 "dma_device_type": 1 00:21:58.743 }, 00:21:58.743 { 00:21:58.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.743 "dma_device_type": 2 00:21:58.743 } 00:21:58.743 ], 00:21:58.743 "driver_specific": { 00:21:58.743 "raid": { 00:21:58.743 "uuid": "e43e9956-141e-440a-8130-8521b52da79e", 00:21:58.743 "strip_size_kb": 0, 00:21:58.743 "state": "online", 00:21:58.743 "raid_level": "raid1", 00:21:58.743 "superblock": false, 00:21:58.743 "num_base_bdevs": 3, 00:21:58.743 "num_base_bdevs_discovered": 3, 00:21:58.743 "num_base_bdevs_operational": 3, 00:21:58.743 "base_bdevs_list": [ 00:21:58.743 { 00:21:58.743 "name": "BaseBdev1", 00:21:58.743 "uuid": "5936452a-c446-4db0-ad16-9388fb4501fa", 00:21:58.743 "is_configured": true, 00:21:58.743 "data_offset": 0, 00:21:58.743 "data_size": 65536 00:21:58.743 }, 00:21:58.743 { 00:21:58.743 "name": "BaseBdev2", 00:21:58.743 "uuid": "b347a2e3-0ede-4f58-bca4-eff2d262c51d", 00:21:58.743 "is_configured": true, 00:21:58.743 "data_offset": 0, 00:21:58.743 "data_size": 65536 00:21:58.743 }, 00:21:58.743 { 00:21:58.743 "name": "BaseBdev3", 00:21:58.743 "uuid": "000ec0bc-d1be-4fab-8cc3-95bfcc81fa88", 00:21:58.743 "is_configured": true, 00:21:58.743 "data_offset": 0, 00:21:58.743 "data_size": 65536 00:21:58.743 } 00:21:58.743 ] 00:21:58.743 } 00:21:58.743 } 00:21:58.743 }' 00:21:58.743 00:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:58.743 00:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:58.743 BaseBdev2 00:21:58.743 BaseBdev3' 00:21:58.743 00:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:58.743 00:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:58.743 00:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:58.743 00:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:58.743 "name": "BaseBdev1", 00:21:58.743 "aliases": [ 00:21:58.743 "5936452a-c446-4db0-ad16-9388fb4501fa" 00:21:58.743 ], 00:21:58.743 "product_name": "Malloc disk", 00:21:58.743 "block_size": 512, 00:21:58.743 "num_blocks": 65536, 00:21:58.743 "uuid": "5936452a-c446-4db0-ad16-9388fb4501fa", 00:21:58.744 "assigned_rate_limits": { 00:21:58.744 "rw_ios_per_sec": 0, 
00:21:58.744 "rw_mbytes_per_sec": 0, 00:21:58.744 "r_mbytes_per_sec": 0, 00:21:58.744 "w_mbytes_per_sec": 0 00:21:58.744 }, 00:21:58.744 "claimed": true, 00:21:58.744 "claim_type": "exclusive_write", 00:21:58.744 "zoned": false, 00:21:58.744 "supported_io_types": { 00:21:58.744 "read": true, 00:21:58.744 "write": true, 00:21:58.744 "unmap": true, 00:21:58.744 "flush": true, 00:21:58.744 "reset": true, 00:21:58.744 "nvme_admin": false, 00:21:58.744 "nvme_io": false, 00:21:58.744 "nvme_io_md": false, 00:21:58.744 "write_zeroes": true, 00:21:58.744 "zcopy": true, 00:21:58.744 "get_zone_info": false, 00:21:58.744 "zone_management": false, 00:21:58.744 "zone_append": false, 00:21:58.744 "compare": false, 00:21:58.744 "compare_and_write": false, 00:21:58.744 "abort": true, 00:21:58.744 "seek_hole": false, 00:21:58.744 "seek_data": false, 00:21:58.744 "copy": true, 00:21:58.744 "nvme_iov_md": false 00:21:58.744 }, 00:21:58.744 "memory_domains": [ 00:21:58.744 { 00:21:58.744 "dma_device_id": "system", 00:21:58.744 "dma_device_type": 1 00:21:58.744 }, 00:21:58.744 { 00:21:58.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.744 "dma_device_type": 2 00:21:58.744 } 00:21:58.744 ], 00:21:58.744 "driver_specific": {} 00:21:58.744 }' 00:21:58.744 00:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:59.003 00:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:59.003 00:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:59.003 00:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:59.003 00:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:59.003 00:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:59.003 00:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:59.003 00:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:59.262 00:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:59.262 00:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:59.262 00:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:59.262 00:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:59.262 00:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:59.262 00:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:59.262 00:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:59.522 00:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:59.522 "name": "BaseBdev2", 00:21:59.522 "aliases": [ 00:21:59.522 "b347a2e3-0ede-4f58-bca4-eff2d262c51d" 00:21:59.522 ], 00:21:59.522 "product_name": "Malloc disk", 00:21:59.522 "block_size": 512, 00:21:59.522 "num_blocks": 65536, 00:21:59.522 "uuid": "b347a2e3-0ede-4f58-bca4-eff2d262c51d", 00:21:59.522 "assigned_rate_limits": { 00:21:59.522 "rw_ios_per_sec": 0, 00:21:59.522 "rw_mbytes_per_sec": 0, 00:21:59.522 "r_mbytes_per_sec": 0, 00:21:59.522 "w_mbytes_per_sec": 0 00:21:59.522 }, 00:21:59.522 "claimed": true, 00:21:59.522 "claim_type": "exclusive_write", 
00:21:59.522 "zoned": false, 00:21:59.522 "supported_io_types": { 00:21:59.522 "read": true, 00:21:59.522 "write": true, 00:21:59.522 "unmap": true, 00:21:59.522 "flush": true, 00:21:59.522 "reset": true, 00:21:59.522 "nvme_admin": false, 00:21:59.522 "nvme_io": false, 00:21:59.522 "nvme_io_md": false, 00:21:59.522 "write_zeroes": true, 00:21:59.522 "zcopy": true, 00:21:59.522 "get_zone_info": false, 00:21:59.522 "zone_management": false, 00:21:59.522 "zone_append": false, 00:21:59.522 "compare": false, 00:21:59.522 "compare_and_write": false, 00:21:59.522 "abort": true, 00:21:59.522 "seek_hole": false, 00:21:59.522 "seek_data": false, 00:21:59.522 "copy": true, 00:21:59.522 "nvme_iov_md": false 00:21:59.522 }, 00:21:59.522 "memory_domains": [ 00:21:59.522 { 00:21:59.522 "dma_device_id": "system", 00:21:59.522 "dma_device_type": 1 00:21:59.522 }, 00:21:59.522 { 00:21:59.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:59.522 "dma_device_type": 2 00:21:59.522 } 00:21:59.522 ], 00:21:59.522 "driver_specific": {} 00:21:59.522 }' 00:21:59.522 00:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:59.522 00:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:59.522 00:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:59.522 00:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:59.781 00:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:59.781 00:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:59.781 00:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:59.781 00:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:59.781 00:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:59.781 00:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:59.781 00:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:59.781 00:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:59.781 00:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:59.781 00:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:59.781 00:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:00.041 00:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:00.041 "name": "BaseBdev3", 00:22:00.041 "aliases": [ 00:22:00.041 "000ec0bc-d1be-4fab-8cc3-95bfcc81fa88" 00:22:00.041 ], 00:22:00.041 "product_name": "Malloc disk", 00:22:00.041 "block_size": 512, 00:22:00.041 "num_blocks": 65536, 00:22:00.041 "uuid": "000ec0bc-d1be-4fab-8cc3-95bfcc81fa88", 00:22:00.041 "assigned_rate_limits": { 00:22:00.041 "rw_ios_per_sec": 0, 00:22:00.041 "rw_mbytes_per_sec": 0, 00:22:00.041 "r_mbytes_per_sec": 0, 00:22:00.041 "w_mbytes_per_sec": 0 00:22:00.041 }, 00:22:00.041 "claimed": true, 00:22:00.041 "claim_type": "exclusive_write", 00:22:00.041 "zoned": false, 00:22:00.041 "supported_io_types": { 00:22:00.041 "read": true, 00:22:00.041 "write": true, 00:22:00.041 "unmap": true, 00:22:00.041 "flush": true, 00:22:00.041 "reset": 
true, 00:22:00.041 "nvme_admin": false, 00:22:00.041 "nvme_io": false, 00:22:00.041 "nvme_io_md": false, 00:22:00.041 "write_zeroes": true, 00:22:00.041 "zcopy": true, 00:22:00.041 "get_zone_info": false, 00:22:00.041 "zone_management": false, 00:22:00.041 "zone_append": false, 00:22:00.041 "compare": false, 00:22:00.041 "compare_and_write": false, 00:22:00.041 "abort": true, 00:22:00.041 "seek_hole": false, 00:22:00.041 "seek_data": false, 00:22:00.041 "copy": true, 00:22:00.041 "nvme_iov_md": false 00:22:00.041 }, 00:22:00.041 "memory_domains": [ 00:22:00.041 { 00:22:00.041 "dma_device_id": "system", 00:22:00.041 "dma_device_type": 1 00:22:00.041 }, 00:22:00.041 { 00:22:00.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.041 "dma_device_type": 2 00:22:00.041 } 00:22:00.041 ], 00:22:00.041 "driver_specific": {} 00:22:00.041 }' 00:22:00.041 00:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:00.041 00:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:00.041 00:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:00.041 00:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:00.041 00:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:00.041 00:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:00.300 00:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:00.300 00:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:00.300 00:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:00.300 00:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:00.300 00:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:00.300 00:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:00.300 00:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:00.560 [2024-07-25 00:48:23.093173] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:00.560 00:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:22:00.560 00:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:22:00.560 00:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:00.560 00:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:22:00.560 00:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:22:00.560 00:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:00.560 00:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:00.560 00:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:00.560 00:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:00.560 00:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:00.560 00:48:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:00.560 00:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:00.560 00:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:00.560 00:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:00.560 00:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:00.819 00:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.819 00:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:00.819 00:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:00.819 "name": "Existed_Raid", 00:22:00.819 "uuid": "e43e9956-141e-440a-8130-8521b52da79e", 00:22:00.819 "strip_size_kb": 0, 00:22:00.819 "state": "online", 00:22:00.819 "raid_level": "raid1", 00:22:00.819 "superblock": false, 00:22:00.819 "num_base_bdevs": 3, 00:22:00.819 "num_base_bdevs_discovered": 2, 00:22:00.820 "num_base_bdevs_operational": 2, 00:22:00.820 "base_bdevs_list": [ 00:22:00.820 { 00:22:00.820 "name": null, 00:22:00.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.820 "is_configured": false, 00:22:00.820 "data_offset": 0, 00:22:00.820 "data_size": 65536 00:22:00.820 }, 00:22:00.820 { 00:22:00.820 "name": "BaseBdev2", 00:22:00.820 "uuid": "b347a2e3-0ede-4f58-bca4-eff2d262c51d", 00:22:00.820 "is_configured": true, 00:22:00.820 "data_offset": 0, 00:22:00.820 "data_size": 65536 00:22:00.820 }, 00:22:00.820 { 00:22:00.820 "name": "BaseBdev3", 00:22:00.820 "uuid": "000ec0bc-d1be-4fab-8cc3-95bfcc81fa88", 00:22:00.820 "is_configured": true, 00:22:00.820 "data_offset": 0, 00:22:00.820 "data_size": 65536 00:22:00.820 } 00:22:00.820 ] 00:22:00.820 }' 00:22:00.820 00:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:00.820 00:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.388 00:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:01.389 00:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:01.389 00:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.389 00:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:01.647 00:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:01.647 00:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:01.648 00:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:01.906 [2024-07-25 00:48:24.340003] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:01.907 00:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:01.907 00:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:01.907 00:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.907 00:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:02.165 00:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:02.165 00:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:02.165 00:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:02.424 [2024-07-25 00:48:24.864051] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:02.424 [2024-07-25 00:48:24.864146] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:02.424 [2024-07-25 00:48:24.965799] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:02.424 [2024-07-25 00:48:24.965841] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:02.424 [2024-07-25 00:48:24.965851] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:22:02.424 00:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:02.424 00:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:02.424 00:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.424 00:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:02.683 00:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:02.683 00:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:22:02.683 00:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:22:02.683 00:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:22:02.683 00:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:02.683 00:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:02.941 BaseBdev2 00:22:02.941 00:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:22:02.941 00:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:02.941 00:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:02.941 00:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:02.941 00:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:02.941 00:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:02.941 00:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:03.200 00:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:03.459 [ 00:22:03.459 { 00:22:03.459 "name": "BaseBdev2", 00:22:03.459 "aliases": [ 00:22:03.459 "e252f665-8c43-4640-a80f-6cd596f7bd46" 00:22:03.459 ], 00:22:03.459 "product_name": "Malloc disk", 00:22:03.459 "block_size": 512, 00:22:03.459 "num_blocks": 65536, 00:22:03.459 "uuid": "e252f665-8c43-4640-a80f-6cd596f7bd46", 00:22:03.459 "assigned_rate_limits": { 00:22:03.459 "rw_ios_per_sec": 0, 00:22:03.459 "rw_mbytes_per_sec": 0, 00:22:03.459 "r_mbytes_per_sec": 0, 00:22:03.459 "w_mbytes_per_sec": 0 00:22:03.459 }, 00:22:03.459 "claimed": false, 00:22:03.459 "zoned": false, 00:22:03.459 "supported_io_types": { 00:22:03.459 "read": true, 00:22:03.459 "write": true, 00:22:03.459 "unmap": true, 00:22:03.459 "flush": true, 00:22:03.459 "reset": true, 00:22:03.459 "nvme_admin": false, 00:22:03.459 "nvme_io": false, 00:22:03.459 "nvme_io_md": false, 00:22:03.459 "write_zeroes": true, 00:22:03.459 "zcopy": true, 00:22:03.459 "get_zone_info": false, 00:22:03.459 "zone_management": false, 00:22:03.459 "zone_append": false, 00:22:03.459 "compare": false, 00:22:03.459 "compare_and_write": false, 00:22:03.459 "abort": true, 00:22:03.459 "seek_hole": false, 00:22:03.459 "seek_data": false, 00:22:03.459 "copy": true, 00:22:03.459 "nvme_iov_md": false 00:22:03.459 }, 00:22:03.459 "memory_domains": [ 00:22:03.459 { 00:22:03.459 "dma_device_id": "system", 00:22:03.459 "dma_device_type": 1 00:22:03.459 }, 00:22:03.459 { 00:22:03.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:03.459 "dma_device_type": 2 00:22:03.459 } 00:22:03.459 ], 00:22:03.459 "driver_specific": {} 00:22:03.459 } 00:22:03.459 ] 00:22:03.459 00:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:03.459 00:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:03.459 00:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:03.459 00:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:03.459 BaseBdev3 00:22:03.459 00:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:22:03.459 00:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:03.459 00:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:03.459 00:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:03.459 00:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:03.459 00:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:03.459 00:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:03.718 00:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:03.977 [ 00:22:03.977 { 00:22:03.977 "name": "BaseBdev3", 00:22:03.977 "aliases": [ 00:22:03.977 "2afcd9f5-6759-4959-b486-9a593ee68320" 00:22:03.977 ], 00:22:03.977 "product_name": "Malloc disk", 00:22:03.977 "block_size": 512, 00:22:03.977 "num_blocks": 65536, 00:22:03.977 
"uuid": "2afcd9f5-6759-4959-b486-9a593ee68320", 00:22:03.977 "assigned_rate_limits": { 00:22:03.977 "rw_ios_per_sec": 0, 00:22:03.977 "rw_mbytes_per_sec": 0, 00:22:03.977 "r_mbytes_per_sec": 0, 00:22:03.977 "w_mbytes_per_sec": 0 00:22:03.977 }, 00:22:03.977 "claimed": false, 00:22:03.977 "zoned": false, 00:22:03.977 "supported_io_types": { 00:22:03.977 "read": true, 00:22:03.977 "write": true, 00:22:03.977 "unmap": true, 00:22:03.977 "flush": true, 00:22:03.977 "reset": true, 00:22:03.977 "nvme_admin": false, 00:22:03.977 "nvme_io": false, 00:22:03.977 "nvme_io_md": false, 00:22:03.977 "write_zeroes": true, 00:22:03.977 "zcopy": true, 00:22:03.977 "get_zone_info": false, 00:22:03.977 "zone_management": false, 00:22:03.977 "zone_append": false, 00:22:03.977 "compare": false, 00:22:03.977 "compare_and_write": false, 00:22:03.977 "abort": true, 00:22:03.977 "seek_hole": false, 00:22:03.977 "seek_data": false, 00:22:03.977 "copy": true, 00:22:03.977 "nvme_iov_md": false 00:22:03.977 }, 00:22:03.977 "memory_domains": [ 00:22:03.977 { 00:22:03.977 "dma_device_id": "system", 00:22:03.977 "dma_device_type": 1 00:22:03.977 }, 00:22:03.977 { 00:22:03.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:03.977 "dma_device_type": 2 00:22:03.977 } 00:22:03.977 ], 00:22:03.977 "driver_specific": {} 00:22:03.977 } 00:22:03.977 ] 00:22:03.977 00:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:03.977 00:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:03.977 00:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:03.977 00:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:03.977 [2024-07-25 00:48:26.588406] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:03.977 [2024-07-25 00:48:26.588461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:03.977 [2024-07-25 00:48:26.588481] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:03.977 [2024-07-25 00:48:26.590311] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:03.977 00:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:03.977 00:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:03.977 00:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:03.977 00:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:03.977 00:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:03.977 00:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:03.977 00:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:03.977 00:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:03.977 00:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:03.977 00:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:22:03.977 00:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.977 00:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:04.237 00:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:04.237 "name": "Existed_Raid", 00:22:04.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.237 "strip_size_kb": 0, 00:22:04.237 "state": "configuring", 00:22:04.237 "raid_level": "raid1", 00:22:04.237 "superblock": false, 00:22:04.237 "num_base_bdevs": 3, 00:22:04.237 "num_base_bdevs_discovered": 2, 00:22:04.237 "num_base_bdevs_operational": 3, 00:22:04.237 "base_bdevs_list": [ 00:22:04.237 { 00:22:04.237 "name": "BaseBdev1", 00:22:04.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.237 "is_configured": false, 00:22:04.237 "data_offset": 0, 00:22:04.237 "data_size": 0 00:22:04.237 }, 00:22:04.237 { 00:22:04.237 "name": "BaseBdev2", 00:22:04.237 "uuid": "e252f665-8c43-4640-a80f-6cd596f7bd46", 00:22:04.237 "is_configured": true, 00:22:04.237 "data_offset": 0, 00:22:04.237 "data_size": 65536 00:22:04.237 }, 00:22:04.237 { 00:22:04.237 "name": "BaseBdev3", 00:22:04.237 "uuid": "2afcd9f5-6759-4959-b486-9a593ee68320", 00:22:04.237 "is_configured": true, 00:22:04.237 "data_offset": 0, 00:22:04.237 "data_size": 65536 00:22:04.237 } 00:22:04.237 ] 00:22:04.237 }' 00:22:04.237 00:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:04.237 00:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.807 00:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:05.088 [2024-07-25 00:48:27.669176] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:05.088 00:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:05.088 00:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:05.088 00:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:05.088 00:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:05.088 00:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:05.088 00:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:05.088 00:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:05.088 00:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:05.088 00:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:05.088 00:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:05.088 00:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.088 00:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:05.356 00:48:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:05.357 "name": "Existed_Raid", 00:22:05.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.357 "strip_size_kb": 0, 00:22:05.357 "state": "configuring", 00:22:05.357 "raid_level": "raid1", 00:22:05.357 "superblock": false, 00:22:05.357 "num_base_bdevs": 3, 00:22:05.357 "num_base_bdevs_discovered": 1, 00:22:05.357 "num_base_bdevs_operational": 3, 00:22:05.357 "base_bdevs_list": [ 00:22:05.357 { 00:22:05.357 "name": "BaseBdev1", 00:22:05.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.357 "is_configured": false, 00:22:05.357 "data_offset": 0, 00:22:05.357 "data_size": 0 00:22:05.357 }, 00:22:05.357 { 00:22:05.357 "name": null, 00:22:05.357 "uuid": "e252f665-8c43-4640-a80f-6cd596f7bd46", 00:22:05.357 "is_configured": false, 00:22:05.357 "data_offset": 0, 00:22:05.357 "data_size": 65536 00:22:05.357 }, 00:22:05.357 { 00:22:05.357 "name": "BaseBdev3", 00:22:05.357 "uuid": "2afcd9f5-6759-4959-b486-9a593ee68320", 00:22:05.357 "is_configured": true, 00:22:05.357 "data_offset": 0, 00:22:05.357 "data_size": 65536 00:22:05.357 } 00:22:05.357 ] 00:22:05.357 }' 00:22:05.357 00:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:05.357 00:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.925 00:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.925 00:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:06.184 00:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:22:06.184 00:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:06.443 [2024-07-25 00:48:28.940963] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:06.443 BaseBdev1 00:22:06.443 00:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:22:06.443 00:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:06.443 00:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:06.443 00:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:06.443 00:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:06.443 00:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:06.443 00:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:06.702 00:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:06.961 [ 00:22:06.961 { 00:22:06.961 "name": "BaseBdev1", 00:22:06.961 "aliases": [ 00:22:06.961 "69097345-5b3a-47bf-ac0a-2a585dfaae25" 00:22:06.961 ], 00:22:06.961 "product_name": "Malloc disk", 00:22:06.962 "block_size": 512, 00:22:06.962 "num_blocks": 65536, 00:22:06.962 "uuid": "69097345-5b3a-47bf-ac0a-2a585dfaae25", 00:22:06.962 "assigned_rate_limits": { 00:22:06.962 
"rw_ios_per_sec": 0, 00:22:06.962 "rw_mbytes_per_sec": 0, 00:22:06.962 "r_mbytes_per_sec": 0, 00:22:06.962 "w_mbytes_per_sec": 0 00:22:06.962 }, 00:22:06.962 "claimed": true, 00:22:06.962 "claim_type": "exclusive_write", 00:22:06.962 "zoned": false, 00:22:06.962 "supported_io_types": { 00:22:06.962 "read": true, 00:22:06.962 "write": true, 00:22:06.962 "unmap": true, 00:22:06.962 "flush": true, 00:22:06.962 "reset": true, 00:22:06.962 "nvme_admin": false, 00:22:06.962 "nvme_io": false, 00:22:06.962 "nvme_io_md": false, 00:22:06.962 "write_zeroes": true, 00:22:06.962 "zcopy": true, 00:22:06.962 "get_zone_info": false, 00:22:06.962 "zone_management": false, 00:22:06.962 "zone_append": false, 00:22:06.962 "compare": false, 00:22:06.962 "compare_and_write": false, 00:22:06.962 "abort": true, 00:22:06.962 "seek_hole": false, 00:22:06.962 "seek_data": false, 00:22:06.962 "copy": true, 00:22:06.962 "nvme_iov_md": false 00:22:06.962 }, 00:22:06.962 "memory_domains": [ 00:22:06.962 { 00:22:06.962 "dma_device_id": "system", 00:22:06.962 "dma_device_type": 1 00:22:06.962 }, 00:22:06.962 { 00:22:06.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:06.962 "dma_device_type": 2 00:22:06.962 } 00:22:06.962 ], 00:22:06.962 "driver_specific": {} 00:22:06.962 } 00:22:06.962 ] 00:22:06.962 00:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:06.962 00:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:06.962 00:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:06.962 00:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:06.962 00:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:06.962 00:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:06.962 00:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:06.962 00:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:06.962 00:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:06.962 00:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:06.962 00:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:06.962 00:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.962 00:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:06.962 00:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:06.962 "name": "Existed_Raid", 00:22:06.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.962 "strip_size_kb": 0, 00:22:06.962 "state": "configuring", 00:22:06.962 "raid_level": "raid1", 00:22:06.962 "superblock": false, 00:22:06.962 "num_base_bdevs": 3, 00:22:06.962 "num_base_bdevs_discovered": 2, 00:22:06.962 "num_base_bdevs_operational": 3, 00:22:06.962 "base_bdevs_list": [ 00:22:06.962 { 00:22:06.962 "name": "BaseBdev1", 00:22:06.962 "uuid": "69097345-5b3a-47bf-ac0a-2a585dfaae25", 00:22:06.962 "is_configured": true, 00:22:06.962 "data_offset": 0, 00:22:06.962 
"data_size": 65536 00:22:06.962 }, 00:22:06.962 { 00:22:06.962 "name": null, 00:22:06.962 "uuid": "e252f665-8c43-4640-a80f-6cd596f7bd46", 00:22:06.962 "is_configured": false, 00:22:06.962 "data_offset": 0, 00:22:06.962 "data_size": 65536 00:22:06.962 }, 00:22:06.962 { 00:22:06.962 "name": "BaseBdev3", 00:22:06.962 "uuid": "2afcd9f5-6759-4959-b486-9a593ee68320", 00:22:06.962 "is_configured": true, 00:22:06.962 "data_offset": 0, 00:22:06.962 "data_size": 65536 00:22:06.962 } 00:22:06.962 ] 00:22:06.962 }' 00:22:06.962 00:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:06.962 00:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:07.530 00:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.530 00:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:07.789 00:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:22:07.789 00:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:22:08.048 [2024-07-25 00:48:30.549264] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:08.048 00:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:08.048 00:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:08.048 00:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:08.048 00:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:08.048 00:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:08.048 00:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:08.048 00:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:08.048 00:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:08.048 00:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:08.048 00:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:08.048 00:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.048 00:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:08.306 00:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:08.306 "name": "Existed_Raid", 00:22:08.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.306 "strip_size_kb": 0, 00:22:08.306 "state": "configuring", 00:22:08.306 "raid_level": "raid1", 00:22:08.306 "superblock": false, 00:22:08.306 "num_base_bdevs": 3, 00:22:08.306 "num_base_bdevs_discovered": 1, 00:22:08.306 "num_base_bdevs_operational": 3, 00:22:08.306 "base_bdevs_list": [ 00:22:08.306 { 00:22:08.306 "name": "BaseBdev1", 00:22:08.306 "uuid": "69097345-5b3a-47bf-ac0a-2a585dfaae25", 00:22:08.306 "is_configured": true, 
00:22:08.306 "data_offset": 0, 00:22:08.306 "data_size": 65536 00:22:08.306 }, 00:22:08.306 { 00:22:08.306 "name": null, 00:22:08.306 "uuid": "e252f665-8c43-4640-a80f-6cd596f7bd46", 00:22:08.306 "is_configured": false, 00:22:08.306 "data_offset": 0, 00:22:08.306 "data_size": 65536 00:22:08.306 }, 00:22:08.306 { 00:22:08.306 "name": null, 00:22:08.306 "uuid": "2afcd9f5-6759-4959-b486-9a593ee68320", 00:22:08.306 "is_configured": false, 00:22:08.306 "data_offset": 0, 00:22:08.306 "data_size": 65536 00:22:08.306 } 00:22:08.306 ] 00:22:08.306 }' 00:22:08.306 00:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:08.306 00:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.872 00:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.872 00:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:09.131 00:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:22:09.131 00:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:09.131 [2024-07-25 00:48:31.737520] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:09.131 00:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:09.131 00:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:09.131 00:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:09.131 00:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:09.131 00:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:09.131 00:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:09.131 00:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:09.131 00:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:09.131 00:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:09.131 00:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:09.131 00:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.131 00:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:09.388 00:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:09.388 "name": "Existed_Raid", 00:22:09.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.388 "strip_size_kb": 0, 00:22:09.388 "state": "configuring", 00:22:09.388 "raid_level": "raid1", 00:22:09.388 "superblock": false, 00:22:09.388 "num_base_bdevs": 3, 00:22:09.388 "num_base_bdevs_discovered": 2, 00:22:09.388 "num_base_bdevs_operational": 3, 00:22:09.388 "base_bdevs_list": [ 00:22:09.388 { 00:22:09.388 "name": "BaseBdev1", 00:22:09.388 "uuid": 
"69097345-5b3a-47bf-ac0a-2a585dfaae25", 00:22:09.388 "is_configured": true, 00:22:09.388 "data_offset": 0, 00:22:09.388 "data_size": 65536 00:22:09.388 }, 00:22:09.388 { 00:22:09.388 "name": null, 00:22:09.388 "uuid": "e252f665-8c43-4640-a80f-6cd596f7bd46", 00:22:09.388 "is_configured": false, 00:22:09.388 "data_offset": 0, 00:22:09.388 "data_size": 65536 00:22:09.388 }, 00:22:09.388 { 00:22:09.388 "name": "BaseBdev3", 00:22:09.388 "uuid": "2afcd9f5-6759-4959-b486-9a593ee68320", 00:22:09.388 "is_configured": true, 00:22:09.388 "data_offset": 0, 00:22:09.388 "data_size": 65536 00:22:09.388 } 00:22:09.388 ] 00:22:09.388 }' 00:22:09.388 00:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:09.388 00:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:09.953 00:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:09.953 00:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.211 00:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:22:10.211 00:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:10.470 [2024-07-25 00:48:33.033839] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:10.728 00:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:10.728 00:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:10.728 00:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:10.728 00:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:10.728 00:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:10.728 00:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:10.728 00:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:10.728 00:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:10.728 00:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:10.728 00:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:10.728 00:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.728 00:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:10.728 00:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:10.728 "name": "Existed_Raid", 00:22:10.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.728 "strip_size_kb": 0, 00:22:10.728 "state": "configuring", 00:22:10.728 "raid_level": "raid1", 00:22:10.728 "superblock": false, 00:22:10.728 "num_base_bdevs": 3, 00:22:10.728 "num_base_bdevs_discovered": 1, 00:22:10.728 "num_base_bdevs_operational": 3, 00:22:10.728 "base_bdevs_list": [ 00:22:10.728 { 00:22:10.728 
"name": null, 00:22:10.728 "uuid": "69097345-5b3a-47bf-ac0a-2a585dfaae25", 00:22:10.728 "is_configured": false, 00:22:10.728 "data_offset": 0, 00:22:10.728 "data_size": 65536 00:22:10.728 }, 00:22:10.728 { 00:22:10.728 "name": null, 00:22:10.728 "uuid": "e252f665-8c43-4640-a80f-6cd596f7bd46", 00:22:10.728 "is_configured": false, 00:22:10.728 "data_offset": 0, 00:22:10.728 "data_size": 65536 00:22:10.728 }, 00:22:10.728 { 00:22:10.728 "name": "BaseBdev3", 00:22:10.728 "uuid": "2afcd9f5-6759-4959-b486-9a593ee68320", 00:22:10.728 "is_configured": true, 00:22:10.728 "data_offset": 0, 00:22:10.728 "data_size": 65536 00:22:10.728 } 00:22:10.728 ] 00:22:10.728 }' 00:22:10.729 00:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:10.729 00:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:11.295 00:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.295 00:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:11.554 00:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:22:11.554 00:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:11.811 [2024-07-25 00:48:34.291996] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:11.811 00:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:11.811 00:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:11.811 00:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:11.811 00:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:11.811 00:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:11.811 00:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:11.811 00:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:11.811 00:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:11.811 00:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:11.811 00:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:11.812 00:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.812 00:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:12.070 00:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:12.070 "name": "Existed_Raid", 00:22:12.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.070 "strip_size_kb": 0, 00:22:12.070 "state": "configuring", 00:22:12.070 "raid_level": "raid1", 00:22:12.070 "superblock": false, 00:22:12.070 "num_base_bdevs": 3, 00:22:12.070 "num_base_bdevs_discovered": 2, 00:22:12.070 
"num_base_bdevs_operational": 3, 00:22:12.070 "base_bdevs_list": [ 00:22:12.070 { 00:22:12.070 "name": null, 00:22:12.070 "uuid": "69097345-5b3a-47bf-ac0a-2a585dfaae25", 00:22:12.070 "is_configured": false, 00:22:12.070 "data_offset": 0, 00:22:12.070 "data_size": 65536 00:22:12.070 }, 00:22:12.070 { 00:22:12.070 "name": "BaseBdev2", 00:22:12.070 "uuid": "e252f665-8c43-4640-a80f-6cd596f7bd46", 00:22:12.070 "is_configured": true, 00:22:12.070 "data_offset": 0, 00:22:12.070 "data_size": 65536 00:22:12.070 }, 00:22:12.070 { 00:22:12.070 "name": "BaseBdev3", 00:22:12.070 "uuid": "2afcd9f5-6759-4959-b486-9a593ee68320", 00:22:12.070 "is_configured": true, 00:22:12.070 "data_offset": 0, 00:22:12.070 "data_size": 65536 00:22:12.070 } 00:22:12.070 ] 00:22:12.070 }' 00:22:12.070 00:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:12.070 00:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.637 00:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:12.637 00:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.637 00:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:12.637 00:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.637 00:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:12.895 00:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 69097345-5b3a-47bf-ac0a-2a585dfaae25 00:22:13.154 [2024-07-25 00:48:35.690787] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:13.154 [2024-07-25 00:48:35.691063] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:22:13.154 [2024-07-25 00:48:35.691105] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:13.154 [2024-07-25 00:48:35.691355] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:13.154 [2024-07-25 00:48:35.691804] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:22:13.154 [2024-07-25 00:48:35.691917] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008a80 00:22:13.154 [2024-07-25 00:48:35.692253] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:13.154 NewBaseBdev 00:22:13.154 00:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:22:13.154 00:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:22:13.154 00:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:13.154 00:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:13.154 00:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:13.154 00:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:13.154 
00:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:13.412 00:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:13.670 [ 00:22:13.670 { 00:22:13.670 "name": "NewBaseBdev", 00:22:13.670 "aliases": [ 00:22:13.670 "69097345-5b3a-47bf-ac0a-2a585dfaae25" 00:22:13.670 ], 00:22:13.670 "product_name": "Malloc disk", 00:22:13.670 "block_size": 512, 00:22:13.670 "num_blocks": 65536, 00:22:13.670 "uuid": "69097345-5b3a-47bf-ac0a-2a585dfaae25", 00:22:13.670 "assigned_rate_limits": { 00:22:13.670 "rw_ios_per_sec": 0, 00:22:13.670 "rw_mbytes_per_sec": 0, 00:22:13.670 "r_mbytes_per_sec": 0, 00:22:13.670 "w_mbytes_per_sec": 0 00:22:13.670 }, 00:22:13.670 "claimed": true, 00:22:13.670 "claim_type": "exclusive_write", 00:22:13.670 "zoned": false, 00:22:13.670 "supported_io_types": { 00:22:13.670 "read": true, 00:22:13.670 "write": true, 00:22:13.670 "unmap": true, 00:22:13.670 "flush": true, 00:22:13.670 "reset": true, 00:22:13.670 "nvme_admin": false, 00:22:13.670 "nvme_io": false, 00:22:13.670 "nvme_io_md": false, 00:22:13.670 "write_zeroes": true, 00:22:13.670 "zcopy": true, 00:22:13.670 "get_zone_info": false, 00:22:13.670 "zone_management": false, 00:22:13.670 "zone_append": false, 00:22:13.670 "compare": false, 00:22:13.670 "compare_and_write": false, 00:22:13.670 "abort": true, 00:22:13.670 "seek_hole": false, 00:22:13.670 "seek_data": false, 00:22:13.670 "copy": true, 00:22:13.670 "nvme_iov_md": false 00:22:13.670 }, 00:22:13.670 "memory_domains": [ 00:22:13.670 { 00:22:13.670 "dma_device_id": "system", 00:22:13.670 "dma_device_type": 1 00:22:13.670 }, 00:22:13.670 { 00:22:13.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.670 "dma_device_type": 2 00:22:13.670 } 00:22:13.670 ], 00:22:13.670 "driver_specific": {} 00:22:13.670 } 00:22:13.670 ] 00:22:13.670 00:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:13.670 00:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:13.670 00:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:13.670 00:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:13.670 00:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:13.670 00:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:13.670 00:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:13.670 00:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:13.670 00:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:13.670 00:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:13.670 00:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:13.670 00:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.670 00:48:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:13.928 00:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:13.928 "name": "Existed_Raid", 00:22:13.928 "uuid": "c89fedb1-b2fb-470e-a6d9-6a29d56dde18", 00:22:13.928 "strip_size_kb": 0, 00:22:13.928 "state": "online", 00:22:13.928 "raid_level": "raid1", 00:22:13.928 "superblock": false, 00:22:13.928 "num_base_bdevs": 3, 00:22:13.928 "num_base_bdevs_discovered": 3, 00:22:13.928 "num_base_bdevs_operational": 3, 00:22:13.928 "base_bdevs_list": [ 00:22:13.928 { 00:22:13.928 "name": "NewBaseBdev", 00:22:13.928 "uuid": "69097345-5b3a-47bf-ac0a-2a585dfaae25", 00:22:13.928 "is_configured": true, 00:22:13.929 "data_offset": 0, 00:22:13.929 "data_size": 65536 00:22:13.929 }, 00:22:13.929 { 00:22:13.929 "name": "BaseBdev2", 00:22:13.929 "uuid": "e252f665-8c43-4640-a80f-6cd596f7bd46", 00:22:13.929 "is_configured": true, 00:22:13.929 "data_offset": 0, 00:22:13.929 "data_size": 65536 00:22:13.929 }, 00:22:13.929 { 00:22:13.929 "name": "BaseBdev3", 00:22:13.929 "uuid": "2afcd9f5-6759-4959-b486-9a593ee68320", 00:22:13.929 "is_configured": true, 00:22:13.929 "data_offset": 0, 00:22:13.929 "data_size": 65536 00:22:13.929 } 00:22:13.929 ] 00:22:13.929 }' 00:22:13.929 00:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:13.929 00:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.496 00:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:22:14.496 00:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:14.496 00:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:14.496 00:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:14.496 00:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:14.496 00:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:14.496 00:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:14.496 00:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:14.496 [2024-07-25 00:48:37.095292] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:14.496 00:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:14.496 "name": "Existed_Raid", 00:22:14.496 "aliases": [ 00:22:14.496 "c89fedb1-b2fb-470e-a6d9-6a29d56dde18" 00:22:14.496 ], 00:22:14.496 "product_name": "Raid Volume", 00:22:14.496 "block_size": 512, 00:22:14.496 "num_blocks": 65536, 00:22:14.496 "uuid": "c89fedb1-b2fb-470e-a6d9-6a29d56dde18", 00:22:14.496 "assigned_rate_limits": { 00:22:14.496 "rw_ios_per_sec": 0, 00:22:14.496 "rw_mbytes_per_sec": 0, 00:22:14.496 "r_mbytes_per_sec": 0, 00:22:14.496 "w_mbytes_per_sec": 0 00:22:14.496 }, 00:22:14.496 "claimed": false, 00:22:14.496 "zoned": false, 00:22:14.497 "supported_io_types": { 00:22:14.497 "read": true, 00:22:14.497 "write": true, 00:22:14.497 "unmap": false, 00:22:14.497 "flush": false, 00:22:14.497 "reset": true, 00:22:14.497 "nvme_admin": false, 00:22:14.497 "nvme_io": false, 00:22:14.497 "nvme_io_md": false, 00:22:14.497 "write_zeroes": true, 00:22:14.497 
"zcopy": false, 00:22:14.497 "get_zone_info": false, 00:22:14.497 "zone_management": false, 00:22:14.497 "zone_append": false, 00:22:14.497 "compare": false, 00:22:14.497 "compare_and_write": false, 00:22:14.497 "abort": false, 00:22:14.497 "seek_hole": false, 00:22:14.497 "seek_data": false, 00:22:14.497 "copy": false, 00:22:14.497 "nvme_iov_md": false 00:22:14.497 }, 00:22:14.497 "memory_domains": [ 00:22:14.497 { 00:22:14.497 "dma_device_id": "system", 00:22:14.497 "dma_device_type": 1 00:22:14.497 }, 00:22:14.497 { 00:22:14.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.497 "dma_device_type": 2 00:22:14.497 }, 00:22:14.497 { 00:22:14.497 "dma_device_id": "system", 00:22:14.497 "dma_device_type": 1 00:22:14.497 }, 00:22:14.497 { 00:22:14.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.497 "dma_device_type": 2 00:22:14.497 }, 00:22:14.497 { 00:22:14.497 "dma_device_id": "system", 00:22:14.497 "dma_device_type": 1 00:22:14.497 }, 00:22:14.497 { 00:22:14.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.497 "dma_device_type": 2 00:22:14.497 } 00:22:14.497 ], 00:22:14.497 "driver_specific": { 00:22:14.497 "raid": { 00:22:14.497 "uuid": "c89fedb1-b2fb-470e-a6d9-6a29d56dde18", 00:22:14.497 "strip_size_kb": 0, 00:22:14.497 "state": "online", 00:22:14.497 "raid_level": "raid1", 00:22:14.497 "superblock": false, 00:22:14.497 "num_base_bdevs": 3, 00:22:14.497 "num_base_bdevs_discovered": 3, 00:22:14.497 "num_base_bdevs_operational": 3, 00:22:14.497 "base_bdevs_list": [ 00:22:14.497 { 00:22:14.497 "name": "NewBaseBdev", 00:22:14.497 "uuid": "69097345-5b3a-47bf-ac0a-2a585dfaae25", 00:22:14.497 "is_configured": true, 00:22:14.497 "data_offset": 0, 00:22:14.497 "data_size": 65536 00:22:14.497 }, 00:22:14.497 { 00:22:14.497 "name": "BaseBdev2", 00:22:14.497 "uuid": "e252f665-8c43-4640-a80f-6cd596f7bd46", 00:22:14.497 "is_configured": true, 00:22:14.497 "data_offset": 0, 00:22:14.497 "data_size": 65536 00:22:14.497 }, 00:22:14.497 { 00:22:14.497 "name": "BaseBdev3", 00:22:14.497 "uuid": "2afcd9f5-6759-4959-b486-9a593ee68320", 00:22:14.497 "is_configured": true, 00:22:14.497 "data_offset": 0, 00:22:14.497 "data_size": 65536 00:22:14.497 } 00:22:14.497 ] 00:22:14.497 } 00:22:14.497 } 00:22:14.497 }' 00:22:14.497 00:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:14.755 00:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:14.755 BaseBdev2 00:22:14.755 BaseBdev3' 00:22:14.755 00:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:14.755 00:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:14.755 00:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:14.755 00:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:14.755 "name": "NewBaseBdev", 00:22:14.755 "aliases": [ 00:22:14.755 "69097345-5b3a-47bf-ac0a-2a585dfaae25" 00:22:14.755 ], 00:22:14.755 "product_name": "Malloc disk", 00:22:14.755 "block_size": 512, 00:22:14.755 "num_blocks": 65536, 00:22:14.755 "uuid": "69097345-5b3a-47bf-ac0a-2a585dfaae25", 00:22:14.755 "assigned_rate_limits": { 00:22:14.755 "rw_ios_per_sec": 0, 00:22:14.755 "rw_mbytes_per_sec": 0, 00:22:14.755 "r_mbytes_per_sec": 0, 00:22:14.755 
"w_mbytes_per_sec": 0 00:22:14.755 }, 00:22:14.755 "claimed": true, 00:22:14.755 "claim_type": "exclusive_write", 00:22:14.755 "zoned": false, 00:22:14.755 "supported_io_types": { 00:22:14.755 "read": true, 00:22:14.755 "write": true, 00:22:14.755 "unmap": true, 00:22:14.755 "flush": true, 00:22:14.755 "reset": true, 00:22:14.755 "nvme_admin": false, 00:22:14.755 "nvme_io": false, 00:22:14.755 "nvme_io_md": false, 00:22:14.756 "write_zeroes": true, 00:22:14.756 "zcopy": true, 00:22:14.756 "get_zone_info": false, 00:22:14.756 "zone_management": false, 00:22:14.756 "zone_append": false, 00:22:14.756 "compare": false, 00:22:14.756 "compare_and_write": false, 00:22:14.756 "abort": true, 00:22:14.756 "seek_hole": false, 00:22:14.756 "seek_data": false, 00:22:14.756 "copy": true, 00:22:14.756 "nvme_iov_md": false 00:22:14.756 }, 00:22:14.756 "memory_domains": [ 00:22:14.756 { 00:22:14.756 "dma_device_id": "system", 00:22:14.756 "dma_device_type": 1 00:22:14.756 }, 00:22:14.756 { 00:22:14.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.756 "dma_device_type": 2 00:22:14.756 } 00:22:14.756 ], 00:22:14.756 "driver_specific": {} 00:22:14.756 }' 00:22:14.756 00:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:14.756 00:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:14.756 00:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:14.756 00:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:15.014 00:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:15.014 00:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:15.014 00:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:15.014 00:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:15.014 00:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:15.014 00:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:15.014 00:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:15.014 00:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:15.014 00:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:15.014 00:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:15.014 00:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:15.273 00:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:15.273 "name": "BaseBdev2", 00:22:15.273 "aliases": [ 00:22:15.273 "e252f665-8c43-4640-a80f-6cd596f7bd46" 00:22:15.273 ], 00:22:15.273 "product_name": "Malloc disk", 00:22:15.273 "block_size": 512, 00:22:15.273 "num_blocks": 65536, 00:22:15.273 "uuid": "e252f665-8c43-4640-a80f-6cd596f7bd46", 00:22:15.273 "assigned_rate_limits": { 00:22:15.273 "rw_ios_per_sec": 0, 00:22:15.273 "rw_mbytes_per_sec": 0, 00:22:15.273 "r_mbytes_per_sec": 0, 00:22:15.273 "w_mbytes_per_sec": 0 00:22:15.273 }, 00:22:15.273 "claimed": true, 00:22:15.273 "claim_type": "exclusive_write", 00:22:15.273 "zoned": false, 00:22:15.273 "supported_io_types": { 00:22:15.273 "read": 
true, 00:22:15.273 "write": true, 00:22:15.273 "unmap": true, 00:22:15.273 "flush": true, 00:22:15.273 "reset": true, 00:22:15.273 "nvme_admin": false, 00:22:15.273 "nvme_io": false, 00:22:15.273 "nvme_io_md": false, 00:22:15.273 "write_zeroes": true, 00:22:15.273 "zcopy": true, 00:22:15.273 "get_zone_info": false, 00:22:15.273 "zone_management": false, 00:22:15.273 "zone_append": false, 00:22:15.273 "compare": false, 00:22:15.273 "compare_and_write": false, 00:22:15.273 "abort": true, 00:22:15.273 "seek_hole": false, 00:22:15.273 "seek_data": false, 00:22:15.273 "copy": true, 00:22:15.273 "nvme_iov_md": false 00:22:15.273 }, 00:22:15.273 "memory_domains": [ 00:22:15.273 { 00:22:15.273 "dma_device_id": "system", 00:22:15.273 "dma_device_type": 1 00:22:15.273 }, 00:22:15.273 { 00:22:15.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.273 "dma_device_type": 2 00:22:15.273 } 00:22:15.273 ], 00:22:15.273 "driver_specific": {} 00:22:15.273 }' 00:22:15.273 00:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:15.531 00:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:15.531 00:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:15.531 00:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:15.531 00:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:15.531 00:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:15.531 00:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:15.531 00:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:15.531 00:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:15.531 00:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:15.790 00:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:15.790 00:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:15.790 00:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:15.790 00:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:15.790 00:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:16.048 00:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:16.048 "name": "BaseBdev3", 00:22:16.048 "aliases": [ 00:22:16.048 "2afcd9f5-6759-4959-b486-9a593ee68320" 00:22:16.048 ], 00:22:16.048 "product_name": "Malloc disk", 00:22:16.048 "block_size": 512, 00:22:16.048 "num_blocks": 65536, 00:22:16.048 "uuid": "2afcd9f5-6759-4959-b486-9a593ee68320", 00:22:16.048 "assigned_rate_limits": { 00:22:16.048 "rw_ios_per_sec": 0, 00:22:16.048 "rw_mbytes_per_sec": 0, 00:22:16.048 "r_mbytes_per_sec": 0, 00:22:16.048 "w_mbytes_per_sec": 0 00:22:16.048 }, 00:22:16.048 "claimed": true, 00:22:16.048 "claim_type": "exclusive_write", 00:22:16.048 "zoned": false, 00:22:16.048 "supported_io_types": { 00:22:16.048 "read": true, 00:22:16.048 "write": true, 00:22:16.048 "unmap": true, 00:22:16.048 "flush": true, 00:22:16.048 "reset": true, 00:22:16.048 "nvme_admin": false, 00:22:16.048 "nvme_io": false, 00:22:16.048 
"nvme_io_md": false, 00:22:16.048 "write_zeroes": true, 00:22:16.048 "zcopy": true, 00:22:16.048 "get_zone_info": false, 00:22:16.048 "zone_management": false, 00:22:16.048 "zone_append": false, 00:22:16.048 "compare": false, 00:22:16.048 "compare_and_write": false, 00:22:16.048 "abort": true, 00:22:16.048 "seek_hole": false, 00:22:16.048 "seek_data": false, 00:22:16.048 "copy": true, 00:22:16.048 "nvme_iov_md": false 00:22:16.048 }, 00:22:16.048 "memory_domains": [ 00:22:16.048 { 00:22:16.048 "dma_device_id": "system", 00:22:16.048 "dma_device_type": 1 00:22:16.048 }, 00:22:16.048 { 00:22:16.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:16.048 "dma_device_type": 2 00:22:16.048 } 00:22:16.048 ], 00:22:16.048 "driver_specific": {} 00:22:16.048 }' 00:22:16.048 00:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:16.048 00:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:16.048 00:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:16.048 00:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:16.048 00:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:16.307 00:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:16.307 00:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:16.307 00:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:16.307 00:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:16.307 00:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:16.307 00:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:16.307 00:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:16.307 00:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:16.565 [2024-07-25 00:48:39.143371] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:16.565 [2024-07-25 00:48:39.143512] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:16.565 [2024-07-25 00:48:39.143752] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:16.565 [2024-07-25 00:48:39.144149] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:16.565 [2024-07-25 00:48:39.144250] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name Existed_Raid, state offline 00:22:16.565 00:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 131880 00:22:16.565 00:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 131880 ']' 00:22:16.565 00:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 131880 00:22:16.565 00:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:22:16.565 00:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:16.565 00:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 131880 00:22:16.565 
00:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:16.565 00:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:16.565 00:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 131880' 00:22:16.565 killing process with pid 131880 00:22:16.565 00:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 131880 00:22:16.565 00:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 131880 00:22:16.565 [2024-07-25 00:48:39.186344] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:17.133 [2024-07-25 00:48:39.506946] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:18.511 00:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:22:18.511 00:22:18.511 real 0m28.164s 00:22:18.511 user 0m50.389s 00:22:18.511 sys 0m4.421s 00:22:18.511 00:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:18.511 ************************************ 00:22:18.511 END TEST raid_state_function_test 00:22:18.511 ************************************ 00:22:18.511 00:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.511 00:48:41 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:22:18.511 00:48:41 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:18.511 00:48:41 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:18.511 00:48:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:18.511 ************************************ 00:22:18.511 START TEST raid_state_function_test_sb 00:22:18.511 ************************************ 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 true 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( 
i++ )) 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=132842 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 132842' 00:22:18.511 Process raid pid: 132842 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 132842 /var/tmp/spdk-raid.sock 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 132842 ']' 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:18.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:18.511 00:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.770 [2024-07-25 00:48:41.167759] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
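Each of these state-function tests drives its own bdev_svc application over a private RPC socket before any raid commands are issued. A rough sketch of the launch-and-wait pattern seen here (the real waitforlisten helper in autotest_common.sh is more elaborate; the polling loop below is an illustrative simplification):
# start the bdev service app with raid debug logging on a private RPC socket
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!
# block until the app answers RPCs on that socket, then run the test body
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done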
00:22:18.770 [2024-07-25 00:48:41.168217] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:18.770 [2024-07-25 00:48:41.358152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.029 [2024-07-25 00:48:41.628254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.287 [2024-07-25 00:48:41.833961] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:19.546 00:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:19.546 00:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:22:19.546 00:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:19.806 [2024-07-25 00:48:42.278794] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:19.806 [2024-07-25 00:48:42.279079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:19.806 [2024-07-25 00:48:42.279219] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:19.806 [2024-07-25 00:48:42.279293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:19.806 [2024-07-25 00:48:42.279368] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:19.806 [2024-07-25 00:48:42.279412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:19.806 00:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:19.806 00:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:19.806 00:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:19.806 00:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:19.806 00:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:19.806 00:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:19.806 00:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:19.806 00:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:19.806 00:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:19.806 00:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:19.806 00:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:19.806 00:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:20.065 00:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:20.065 "name": "Existed_Raid", 00:22:20.065 "uuid": 
"8afc6496-a027-4133-ac76-8fb833b0d244", 00:22:20.065 "strip_size_kb": 0, 00:22:20.065 "state": "configuring", 00:22:20.065 "raid_level": "raid1", 00:22:20.065 "superblock": true, 00:22:20.065 "num_base_bdevs": 3, 00:22:20.065 "num_base_bdevs_discovered": 0, 00:22:20.065 "num_base_bdevs_operational": 3, 00:22:20.065 "base_bdevs_list": [ 00:22:20.065 { 00:22:20.065 "name": "BaseBdev1", 00:22:20.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.065 "is_configured": false, 00:22:20.065 "data_offset": 0, 00:22:20.065 "data_size": 0 00:22:20.065 }, 00:22:20.065 { 00:22:20.065 "name": "BaseBdev2", 00:22:20.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.065 "is_configured": false, 00:22:20.065 "data_offset": 0, 00:22:20.065 "data_size": 0 00:22:20.065 }, 00:22:20.065 { 00:22:20.065 "name": "BaseBdev3", 00:22:20.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.065 "is_configured": false, 00:22:20.065 "data_offset": 0, 00:22:20.065 "data_size": 0 00:22:20.065 } 00:22:20.065 ] 00:22:20.065 }' 00:22:20.065 00:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:20.065 00:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.634 00:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:20.634 [2024-07-25 00:48:43.262848] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:20.634 [2024-07-25 00:48:43.262995] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:22:20.634 00:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:20.908 [2024-07-25 00:48:43.490911] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:20.908 [2024-07-25 00:48:43.491141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:20.908 [2024-07-25 00:48:43.491219] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:20.908 [2024-07-25 00:48:43.491271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:20.908 [2024-07-25 00:48:43.491297] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:20.908 [2024-07-25 00:48:43.491339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:20.908 00:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:21.173 [2024-07-25 00:48:43.685695] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:21.173 BaseBdev1 00:22:21.173 00:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:22:21.173 00:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:21.173 00:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:21.173 00:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 
00:22:21.173 00:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:21.173 00:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:21.173 00:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:21.432 00:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:21.432 [ 00:22:21.432 { 00:22:21.432 "name": "BaseBdev1", 00:22:21.432 "aliases": [ 00:22:21.432 "62cafa84-7c2a-45f3-bfe6-f20b2c078fad" 00:22:21.432 ], 00:22:21.432 "product_name": "Malloc disk", 00:22:21.432 "block_size": 512, 00:22:21.432 "num_blocks": 65536, 00:22:21.432 "uuid": "62cafa84-7c2a-45f3-bfe6-f20b2c078fad", 00:22:21.432 "assigned_rate_limits": { 00:22:21.432 "rw_ios_per_sec": 0, 00:22:21.432 "rw_mbytes_per_sec": 0, 00:22:21.432 "r_mbytes_per_sec": 0, 00:22:21.432 "w_mbytes_per_sec": 0 00:22:21.432 }, 00:22:21.432 "claimed": true, 00:22:21.432 "claim_type": "exclusive_write", 00:22:21.432 "zoned": false, 00:22:21.432 "supported_io_types": { 00:22:21.432 "read": true, 00:22:21.432 "write": true, 00:22:21.432 "unmap": true, 00:22:21.432 "flush": true, 00:22:21.432 "reset": true, 00:22:21.432 "nvme_admin": false, 00:22:21.432 "nvme_io": false, 00:22:21.432 "nvme_io_md": false, 00:22:21.432 "write_zeroes": true, 00:22:21.432 "zcopy": true, 00:22:21.432 "get_zone_info": false, 00:22:21.432 "zone_management": false, 00:22:21.432 "zone_append": false, 00:22:21.432 "compare": false, 00:22:21.432 "compare_and_write": false, 00:22:21.432 "abort": true, 00:22:21.432 "seek_hole": false, 00:22:21.432 "seek_data": false, 00:22:21.432 "copy": true, 00:22:21.432 "nvme_iov_md": false 00:22:21.432 }, 00:22:21.432 "memory_domains": [ 00:22:21.432 { 00:22:21.432 "dma_device_id": "system", 00:22:21.432 "dma_device_type": 1 00:22:21.432 }, 00:22:21.432 { 00:22:21.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:21.432 "dma_device_type": 2 00:22:21.432 } 00:22:21.432 ], 00:22:21.432 "driver_specific": {} 00:22:21.432 } 00:22:21.432 ] 00:22:21.432 00:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:21.432 00:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:21.432 00:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:21.432 00:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:21.432 00:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:21.432 00:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:21.432 00:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:21.432 00:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:21.432 00:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:21.432 00:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:21.432 00:48:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:22:21.432 00:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.432 00:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:21.690 00:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:21.691 "name": "Existed_Raid", 00:22:21.691 "uuid": "9bcdbd29-37f9-4211-b33e-ee6698d06b0e", 00:22:21.691 "strip_size_kb": 0, 00:22:21.691 "state": "configuring", 00:22:21.691 "raid_level": "raid1", 00:22:21.691 "superblock": true, 00:22:21.691 "num_base_bdevs": 3, 00:22:21.691 "num_base_bdevs_discovered": 1, 00:22:21.691 "num_base_bdevs_operational": 3, 00:22:21.691 "base_bdevs_list": [ 00:22:21.691 { 00:22:21.691 "name": "BaseBdev1", 00:22:21.691 "uuid": "62cafa84-7c2a-45f3-bfe6-f20b2c078fad", 00:22:21.691 "is_configured": true, 00:22:21.691 "data_offset": 2048, 00:22:21.691 "data_size": 63488 00:22:21.691 }, 00:22:21.691 { 00:22:21.691 "name": "BaseBdev2", 00:22:21.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:21.691 "is_configured": false, 00:22:21.691 "data_offset": 0, 00:22:21.691 "data_size": 0 00:22:21.691 }, 00:22:21.691 { 00:22:21.691 "name": "BaseBdev3", 00:22:21.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:21.691 "is_configured": false, 00:22:21.691 "data_offset": 0, 00:22:21.691 "data_size": 0 00:22:21.691 } 00:22:21.691 ] 00:22:21.691 }' 00:22:21.691 00:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:21.691 00:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:22.259 00:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:22.518 [2024-07-25 00:48:44.925907] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:22.518 [2024-07-25 00:48:44.926111] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:22:22.518 00:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:22.518 [2024-07-25 00:48:45.105987] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:22.518 [2024-07-25 00:48:45.108042] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:22.518 [2024-07-25 00:48:45.108239] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:22.518 [2024-07-25 00:48:45.108330] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:22.518 [2024-07-25 00:48:45.108403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:22.518 00:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:22.518 00:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:22.518 00:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:22.518 00:48:45 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:22.518 00:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:22.518 00:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:22.518 00:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:22.518 00:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:22.518 00:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:22.518 00:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:22.518 00:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:22.518 00:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:22.518 00:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.518 00:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:22.778 00:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:22.778 "name": "Existed_Raid", 00:22:22.778 "uuid": "0216f818-1f52-419a-ab3d-e80bce049eb0", 00:22:22.778 "strip_size_kb": 0, 00:22:22.778 "state": "configuring", 00:22:22.778 "raid_level": "raid1", 00:22:22.778 "superblock": true, 00:22:22.778 "num_base_bdevs": 3, 00:22:22.778 "num_base_bdevs_discovered": 1, 00:22:22.778 "num_base_bdevs_operational": 3, 00:22:22.778 "base_bdevs_list": [ 00:22:22.778 { 00:22:22.778 "name": "BaseBdev1", 00:22:22.778 "uuid": "62cafa84-7c2a-45f3-bfe6-f20b2c078fad", 00:22:22.778 "is_configured": true, 00:22:22.778 "data_offset": 2048, 00:22:22.778 "data_size": 63488 00:22:22.778 }, 00:22:22.778 { 00:22:22.778 "name": "BaseBdev2", 00:22:22.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.778 "is_configured": false, 00:22:22.778 "data_offset": 0, 00:22:22.778 "data_size": 0 00:22:22.778 }, 00:22:22.778 { 00:22:22.778 "name": "BaseBdev3", 00:22:22.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.778 "is_configured": false, 00:22:22.778 "data_offset": 0, 00:22:22.778 "data_size": 0 00:22:22.778 } 00:22:22.778 ] 00:22:22.778 }' 00:22:22.778 00:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:22.778 00:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.347 00:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:23.606 [2024-07-25 00:48:46.114968] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:23.606 BaseBdev2 00:22:23.606 00:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:23.606 00:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:23.606 00:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:23.606 00:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:23.606 00:48:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:23.606 00:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:23.606 00:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:23.866 00:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:23.866 [ 00:22:23.866 { 00:22:23.866 "name": "BaseBdev2", 00:22:23.866 "aliases": [ 00:22:23.866 "eaad7db5-9501-4dd2-9b0c-4d44124839cf" 00:22:23.866 ], 00:22:23.866 "product_name": "Malloc disk", 00:22:23.866 "block_size": 512, 00:22:23.866 "num_blocks": 65536, 00:22:23.866 "uuid": "eaad7db5-9501-4dd2-9b0c-4d44124839cf", 00:22:23.866 "assigned_rate_limits": { 00:22:23.866 "rw_ios_per_sec": 0, 00:22:23.866 "rw_mbytes_per_sec": 0, 00:22:23.866 "r_mbytes_per_sec": 0, 00:22:23.866 "w_mbytes_per_sec": 0 00:22:23.866 }, 00:22:23.866 "claimed": true, 00:22:23.866 "claim_type": "exclusive_write", 00:22:23.866 "zoned": false, 00:22:23.866 "supported_io_types": { 00:22:23.866 "read": true, 00:22:23.866 "write": true, 00:22:23.866 "unmap": true, 00:22:23.866 "flush": true, 00:22:23.866 "reset": true, 00:22:23.866 "nvme_admin": false, 00:22:23.866 "nvme_io": false, 00:22:23.866 "nvme_io_md": false, 00:22:23.866 "write_zeroes": true, 00:22:23.866 "zcopy": true, 00:22:23.866 "get_zone_info": false, 00:22:23.866 "zone_management": false, 00:22:23.866 "zone_append": false, 00:22:23.866 "compare": false, 00:22:23.866 "compare_and_write": false, 00:22:23.866 "abort": true, 00:22:23.866 "seek_hole": false, 00:22:23.866 "seek_data": false, 00:22:23.866 "copy": true, 00:22:23.866 "nvme_iov_md": false 00:22:23.866 }, 00:22:23.866 "memory_domains": [ 00:22:23.866 { 00:22:23.866 "dma_device_id": "system", 00:22:23.866 "dma_device_type": 1 00:22:23.866 }, 00:22:23.866 { 00:22:23.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:23.866 "dma_device_type": 2 00:22:23.866 } 00:22:23.866 ], 00:22:23.866 "driver_specific": {} 00:22:23.866 } 00:22:23.866 ] 00:22:23.866 00:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:23.866 00:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:23.866 00:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:23.866 00:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:23.866 00:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:23.866 00:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:23.866 00:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:23.866 00:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:23.866 00:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:23.866 00:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:23.866 00:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:22:23.866 00:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:23.866 00:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:23.866 00:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.866 00:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:24.126 00:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:24.126 "name": "Existed_Raid", 00:22:24.126 "uuid": "0216f818-1f52-419a-ab3d-e80bce049eb0", 00:22:24.126 "strip_size_kb": 0, 00:22:24.126 "state": "configuring", 00:22:24.126 "raid_level": "raid1", 00:22:24.126 "superblock": true, 00:22:24.126 "num_base_bdevs": 3, 00:22:24.126 "num_base_bdevs_discovered": 2, 00:22:24.126 "num_base_bdevs_operational": 3, 00:22:24.126 "base_bdevs_list": [ 00:22:24.126 { 00:22:24.126 "name": "BaseBdev1", 00:22:24.126 "uuid": "62cafa84-7c2a-45f3-bfe6-f20b2c078fad", 00:22:24.126 "is_configured": true, 00:22:24.126 "data_offset": 2048, 00:22:24.126 "data_size": 63488 00:22:24.126 }, 00:22:24.126 { 00:22:24.126 "name": "BaseBdev2", 00:22:24.126 "uuid": "eaad7db5-9501-4dd2-9b0c-4d44124839cf", 00:22:24.126 "is_configured": true, 00:22:24.126 "data_offset": 2048, 00:22:24.126 "data_size": 63488 00:22:24.126 }, 00:22:24.126 { 00:22:24.126 "name": "BaseBdev3", 00:22:24.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.126 "is_configured": false, 00:22:24.126 "data_offset": 0, 00:22:24.126 "data_size": 0 00:22:24.126 } 00:22:24.126 ] 00:22:24.126 }' 00:22:24.126 00:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:24.126 00:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.765 00:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:25.044 [2024-07-25 00:48:47.521227] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:25.044 [2024-07-25 00:48:47.521641] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:22:25.044 [2024-07-25 00:48:47.521787] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:25.044 [2024-07-25 00:48:47.521940] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:22:25.044 [2024-07-25 00:48:47.522478] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:22:25.044 BaseBdev3 00:22:25.044 [2024-07-25 00:48:47.522590] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:22:25.044 [2024-07-25 00:48:47.522763] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:25.044 00:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:22:25.044 00:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:25.044 00:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:25.044 00:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 
00:22:25.044 00:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:25.044 00:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:25.044 00:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:25.303 00:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:25.562 [ 00:22:25.562 { 00:22:25.562 "name": "BaseBdev3", 00:22:25.562 "aliases": [ 00:22:25.562 "af2d60dd-224f-44e4-8038-9bce9e85103a" 00:22:25.562 ], 00:22:25.562 "product_name": "Malloc disk", 00:22:25.562 "block_size": 512, 00:22:25.562 "num_blocks": 65536, 00:22:25.562 "uuid": "af2d60dd-224f-44e4-8038-9bce9e85103a", 00:22:25.562 "assigned_rate_limits": { 00:22:25.562 "rw_ios_per_sec": 0, 00:22:25.562 "rw_mbytes_per_sec": 0, 00:22:25.562 "r_mbytes_per_sec": 0, 00:22:25.562 "w_mbytes_per_sec": 0 00:22:25.562 }, 00:22:25.562 "claimed": true, 00:22:25.562 "claim_type": "exclusive_write", 00:22:25.562 "zoned": false, 00:22:25.562 "supported_io_types": { 00:22:25.562 "read": true, 00:22:25.562 "write": true, 00:22:25.562 "unmap": true, 00:22:25.562 "flush": true, 00:22:25.563 "reset": true, 00:22:25.563 "nvme_admin": false, 00:22:25.563 "nvme_io": false, 00:22:25.563 "nvme_io_md": false, 00:22:25.563 "write_zeroes": true, 00:22:25.563 "zcopy": true, 00:22:25.563 "get_zone_info": false, 00:22:25.563 "zone_management": false, 00:22:25.563 "zone_append": false, 00:22:25.563 "compare": false, 00:22:25.563 "compare_and_write": false, 00:22:25.563 "abort": true, 00:22:25.563 "seek_hole": false, 00:22:25.563 "seek_data": false, 00:22:25.563 "copy": true, 00:22:25.563 "nvme_iov_md": false 00:22:25.563 }, 00:22:25.563 "memory_domains": [ 00:22:25.563 { 00:22:25.563 "dma_device_id": "system", 00:22:25.563 "dma_device_type": 1 00:22:25.563 }, 00:22:25.563 { 00:22:25.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:25.563 "dma_device_type": 2 00:22:25.563 } 00:22:25.563 ], 00:22:25.563 "driver_specific": {} 00:22:25.563 } 00:22:25.563 ] 00:22:25.563 00:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:25.563 00:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:25.563 00:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:25.563 00:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:25.563 00:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:25.563 00:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:25.563 00:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:25.563 00:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:25.563 00:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:25.563 00:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:25.563 00:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:22:25.563 00:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:25.563 00:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:25.563 00:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:25.563 00:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.563 00:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:25.563 "name": "Existed_Raid", 00:22:25.563 "uuid": "0216f818-1f52-419a-ab3d-e80bce049eb0", 00:22:25.563 "strip_size_kb": 0, 00:22:25.563 "state": "online", 00:22:25.563 "raid_level": "raid1", 00:22:25.563 "superblock": true, 00:22:25.563 "num_base_bdevs": 3, 00:22:25.563 "num_base_bdevs_discovered": 3, 00:22:25.563 "num_base_bdevs_operational": 3, 00:22:25.563 "base_bdevs_list": [ 00:22:25.563 { 00:22:25.563 "name": "BaseBdev1", 00:22:25.563 "uuid": "62cafa84-7c2a-45f3-bfe6-f20b2c078fad", 00:22:25.563 "is_configured": true, 00:22:25.563 "data_offset": 2048, 00:22:25.563 "data_size": 63488 00:22:25.563 }, 00:22:25.563 { 00:22:25.563 "name": "BaseBdev2", 00:22:25.563 "uuid": "eaad7db5-9501-4dd2-9b0c-4d44124839cf", 00:22:25.563 "is_configured": true, 00:22:25.563 "data_offset": 2048, 00:22:25.563 "data_size": 63488 00:22:25.563 }, 00:22:25.563 { 00:22:25.563 "name": "BaseBdev3", 00:22:25.563 "uuid": "af2d60dd-224f-44e4-8038-9bce9e85103a", 00:22:25.563 "is_configured": true, 00:22:25.563 "data_offset": 2048, 00:22:25.563 "data_size": 63488 00:22:25.563 } 00:22:25.563 ] 00:22:25.563 }' 00:22:25.563 00:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:25.563 00:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.176 00:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:22:26.176 00:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:26.176 00:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:26.176 00:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:26.176 00:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:26.176 00:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:22:26.176 00:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:26.176 00:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:26.435 [2024-07-25 00:48:48.972503] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:26.435 00:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:26.435 "name": "Existed_Raid", 00:22:26.435 "aliases": [ 00:22:26.435 "0216f818-1f52-419a-ab3d-e80bce049eb0" 00:22:26.435 ], 00:22:26.435 "product_name": "Raid Volume", 00:22:26.435 "block_size": 512, 00:22:26.435 "num_blocks": 63488, 00:22:26.435 "uuid": "0216f818-1f52-419a-ab3d-e80bce049eb0", 00:22:26.435 "assigned_rate_limits": { 00:22:26.435 
"rw_ios_per_sec": 0, 00:22:26.435 "rw_mbytes_per_sec": 0, 00:22:26.435 "r_mbytes_per_sec": 0, 00:22:26.435 "w_mbytes_per_sec": 0 00:22:26.435 }, 00:22:26.435 "claimed": false, 00:22:26.435 "zoned": false, 00:22:26.435 "supported_io_types": { 00:22:26.435 "read": true, 00:22:26.435 "write": true, 00:22:26.435 "unmap": false, 00:22:26.435 "flush": false, 00:22:26.435 "reset": true, 00:22:26.435 "nvme_admin": false, 00:22:26.435 "nvme_io": false, 00:22:26.435 "nvme_io_md": false, 00:22:26.435 "write_zeroes": true, 00:22:26.435 "zcopy": false, 00:22:26.435 "get_zone_info": false, 00:22:26.435 "zone_management": false, 00:22:26.435 "zone_append": false, 00:22:26.435 "compare": false, 00:22:26.435 "compare_and_write": false, 00:22:26.435 "abort": false, 00:22:26.435 "seek_hole": false, 00:22:26.435 "seek_data": false, 00:22:26.435 "copy": false, 00:22:26.435 "nvme_iov_md": false 00:22:26.435 }, 00:22:26.435 "memory_domains": [ 00:22:26.435 { 00:22:26.435 "dma_device_id": "system", 00:22:26.435 "dma_device_type": 1 00:22:26.435 }, 00:22:26.435 { 00:22:26.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.435 "dma_device_type": 2 00:22:26.435 }, 00:22:26.435 { 00:22:26.435 "dma_device_id": "system", 00:22:26.435 "dma_device_type": 1 00:22:26.435 }, 00:22:26.435 { 00:22:26.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.435 "dma_device_type": 2 00:22:26.435 }, 00:22:26.435 { 00:22:26.435 "dma_device_id": "system", 00:22:26.435 "dma_device_type": 1 00:22:26.435 }, 00:22:26.435 { 00:22:26.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.435 "dma_device_type": 2 00:22:26.435 } 00:22:26.435 ], 00:22:26.435 "driver_specific": { 00:22:26.435 "raid": { 00:22:26.435 "uuid": "0216f818-1f52-419a-ab3d-e80bce049eb0", 00:22:26.435 "strip_size_kb": 0, 00:22:26.435 "state": "online", 00:22:26.435 "raid_level": "raid1", 00:22:26.435 "superblock": true, 00:22:26.435 "num_base_bdevs": 3, 00:22:26.435 "num_base_bdevs_discovered": 3, 00:22:26.435 "num_base_bdevs_operational": 3, 00:22:26.435 "base_bdevs_list": [ 00:22:26.435 { 00:22:26.435 "name": "BaseBdev1", 00:22:26.435 "uuid": "62cafa84-7c2a-45f3-bfe6-f20b2c078fad", 00:22:26.435 "is_configured": true, 00:22:26.435 "data_offset": 2048, 00:22:26.435 "data_size": 63488 00:22:26.435 }, 00:22:26.435 { 00:22:26.435 "name": "BaseBdev2", 00:22:26.435 "uuid": "eaad7db5-9501-4dd2-9b0c-4d44124839cf", 00:22:26.435 "is_configured": true, 00:22:26.435 "data_offset": 2048, 00:22:26.435 "data_size": 63488 00:22:26.435 }, 00:22:26.435 { 00:22:26.435 "name": "BaseBdev3", 00:22:26.435 "uuid": "af2d60dd-224f-44e4-8038-9bce9e85103a", 00:22:26.435 "is_configured": true, 00:22:26.435 "data_offset": 2048, 00:22:26.435 "data_size": 63488 00:22:26.435 } 00:22:26.435 ] 00:22:26.435 } 00:22:26.435 } 00:22:26.435 }' 00:22:26.435 00:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:26.435 00:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:26.435 BaseBdev2 00:22:26.435 BaseBdev3' 00:22:26.435 00:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:26.435 00:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:26.435 00:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:26.695 00:48:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:26.695 "name": "BaseBdev1", 00:22:26.695 "aliases": [ 00:22:26.695 "62cafa84-7c2a-45f3-bfe6-f20b2c078fad" 00:22:26.695 ], 00:22:26.695 "product_name": "Malloc disk", 00:22:26.695 "block_size": 512, 00:22:26.695 "num_blocks": 65536, 00:22:26.695 "uuid": "62cafa84-7c2a-45f3-bfe6-f20b2c078fad", 00:22:26.695 "assigned_rate_limits": { 00:22:26.695 "rw_ios_per_sec": 0, 00:22:26.695 "rw_mbytes_per_sec": 0, 00:22:26.695 "r_mbytes_per_sec": 0, 00:22:26.695 "w_mbytes_per_sec": 0 00:22:26.695 }, 00:22:26.695 "claimed": true, 00:22:26.695 "claim_type": "exclusive_write", 00:22:26.695 "zoned": false, 00:22:26.695 "supported_io_types": { 00:22:26.695 "read": true, 00:22:26.695 "write": true, 00:22:26.695 "unmap": true, 00:22:26.695 "flush": true, 00:22:26.695 "reset": true, 00:22:26.695 "nvme_admin": false, 00:22:26.695 "nvme_io": false, 00:22:26.695 "nvme_io_md": false, 00:22:26.695 "write_zeroes": true, 00:22:26.695 "zcopy": true, 00:22:26.695 "get_zone_info": false, 00:22:26.695 "zone_management": false, 00:22:26.695 "zone_append": false, 00:22:26.695 "compare": false, 00:22:26.695 "compare_and_write": false, 00:22:26.695 "abort": true, 00:22:26.695 "seek_hole": false, 00:22:26.695 "seek_data": false, 00:22:26.695 "copy": true, 00:22:26.695 "nvme_iov_md": false 00:22:26.695 }, 00:22:26.695 "memory_domains": [ 00:22:26.695 { 00:22:26.695 "dma_device_id": "system", 00:22:26.695 "dma_device_type": 1 00:22:26.695 }, 00:22:26.695 { 00:22:26.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.695 "dma_device_type": 2 00:22:26.695 } 00:22:26.695 ], 00:22:26.695 "driver_specific": {} 00:22:26.695 }' 00:22:26.695 00:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:26.954 00:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:26.954 00:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:26.954 00:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:26.954 00:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:26.954 00:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:26.954 00:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:26.954 00:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:26.954 00:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:26.954 00:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:27.214 00:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:27.214 00:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:27.214 00:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:27.214 00:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:27.214 00:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:27.473 00:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:27.473 "name": "BaseBdev2", 00:22:27.473 "aliases": [ 
00:22:27.473 "eaad7db5-9501-4dd2-9b0c-4d44124839cf" 00:22:27.473 ], 00:22:27.473 "product_name": "Malloc disk", 00:22:27.473 "block_size": 512, 00:22:27.473 "num_blocks": 65536, 00:22:27.473 "uuid": "eaad7db5-9501-4dd2-9b0c-4d44124839cf", 00:22:27.473 "assigned_rate_limits": { 00:22:27.473 "rw_ios_per_sec": 0, 00:22:27.473 "rw_mbytes_per_sec": 0, 00:22:27.473 "r_mbytes_per_sec": 0, 00:22:27.473 "w_mbytes_per_sec": 0 00:22:27.473 }, 00:22:27.473 "claimed": true, 00:22:27.473 "claim_type": "exclusive_write", 00:22:27.473 "zoned": false, 00:22:27.473 "supported_io_types": { 00:22:27.473 "read": true, 00:22:27.473 "write": true, 00:22:27.473 "unmap": true, 00:22:27.473 "flush": true, 00:22:27.473 "reset": true, 00:22:27.473 "nvme_admin": false, 00:22:27.473 "nvme_io": false, 00:22:27.473 "nvme_io_md": false, 00:22:27.473 "write_zeroes": true, 00:22:27.473 "zcopy": true, 00:22:27.473 "get_zone_info": false, 00:22:27.473 "zone_management": false, 00:22:27.473 "zone_append": false, 00:22:27.473 "compare": false, 00:22:27.473 "compare_and_write": false, 00:22:27.473 "abort": true, 00:22:27.473 "seek_hole": false, 00:22:27.473 "seek_data": false, 00:22:27.473 "copy": true, 00:22:27.473 "nvme_iov_md": false 00:22:27.473 }, 00:22:27.473 "memory_domains": [ 00:22:27.473 { 00:22:27.473 "dma_device_id": "system", 00:22:27.473 "dma_device_type": 1 00:22:27.473 }, 00:22:27.473 { 00:22:27.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.473 "dma_device_type": 2 00:22:27.473 } 00:22:27.473 ], 00:22:27.473 "driver_specific": {} 00:22:27.473 }' 00:22:27.473 00:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:27.473 00:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:27.473 00:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:27.473 00:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:27.473 00:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:27.473 00:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:27.473 00:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:27.732 00:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:27.732 00:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:27.732 00:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:27.732 00:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:27.732 00:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:27.732 00:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:27.732 00:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:27.732 00:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:27.992 00:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:27.992 "name": "BaseBdev3", 00:22:27.992 "aliases": [ 00:22:27.992 "af2d60dd-224f-44e4-8038-9bce9e85103a" 00:22:27.992 ], 00:22:27.992 "product_name": "Malloc disk", 00:22:27.992 "block_size": 512, 
00:22:27.992 "num_blocks": 65536, 00:22:27.992 "uuid": "af2d60dd-224f-44e4-8038-9bce9e85103a", 00:22:27.992 "assigned_rate_limits": { 00:22:27.992 "rw_ios_per_sec": 0, 00:22:27.992 "rw_mbytes_per_sec": 0, 00:22:27.992 "r_mbytes_per_sec": 0, 00:22:27.992 "w_mbytes_per_sec": 0 00:22:27.992 }, 00:22:27.992 "claimed": true, 00:22:27.992 "claim_type": "exclusive_write", 00:22:27.992 "zoned": false, 00:22:27.992 "supported_io_types": { 00:22:27.992 "read": true, 00:22:27.992 "write": true, 00:22:27.992 "unmap": true, 00:22:27.992 "flush": true, 00:22:27.992 "reset": true, 00:22:27.992 "nvme_admin": false, 00:22:27.992 "nvme_io": false, 00:22:27.992 "nvme_io_md": false, 00:22:27.992 "write_zeroes": true, 00:22:27.992 "zcopy": true, 00:22:27.992 "get_zone_info": false, 00:22:27.992 "zone_management": false, 00:22:27.992 "zone_append": false, 00:22:27.992 "compare": false, 00:22:27.992 "compare_and_write": false, 00:22:27.992 "abort": true, 00:22:27.992 "seek_hole": false, 00:22:27.992 "seek_data": false, 00:22:27.992 "copy": true, 00:22:27.992 "nvme_iov_md": false 00:22:27.992 }, 00:22:27.992 "memory_domains": [ 00:22:27.992 { 00:22:27.992 "dma_device_id": "system", 00:22:27.992 "dma_device_type": 1 00:22:27.992 }, 00:22:27.992 { 00:22:27.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.992 "dma_device_type": 2 00:22:27.992 } 00:22:27.992 ], 00:22:27.992 "driver_specific": {} 00:22:27.992 }' 00:22:27.992 00:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:27.992 00:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:28.251 00:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:28.251 00:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:28.251 00:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:28.251 00:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:28.251 00:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:28.251 00:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:28.251 00:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:28.251 00:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:28.251 00:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:28.510 00:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:28.510 00:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:28.769 [2024-07-25 00:48:51.190970] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:28.769 00:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:22:28.769 00:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:22:28.769 00:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:28.769 00:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:22:28.769 00:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:22:28.769 00:48:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:28.769 00:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:28.769 00:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:28.769 00:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:28.769 00:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:28.770 00:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:28.770 00:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:28.770 00:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:28.770 00:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:28.770 00:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:28.770 00:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.770 00:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:29.029 00:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:29.029 "name": "Existed_Raid", 00:22:29.029 "uuid": "0216f818-1f52-419a-ab3d-e80bce049eb0", 00:22:29.029 "strip_size_kb": 0, 00:22:29.029 "state": "online", 00:22:29.029 "raid_level": "raid1", 00:22:29.029 "superblock": true, 00:22:29.029 "num_base_bdevs": 3, 00:22:29.029 "num_base_bdevs_discovered": 2, 00:22:29.029 "num_base_bdevs_operational": 2, 00:22:29.029 "base_bdevs_list": [ 00:22:29.029 { 00:22:29.029 "name": null, 00:22:29.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.029 "is_configured": false, 00:22:29.029 "data_offset": 2048, 00:22:29.029 "data_size": 63488 00:22:29.029 }, 00:22:29.029 { 00:22:29.029 "name": "BaseBdev2", 00:22:29.029 "uuid": "eaad7db5-9501-4dd2-9b0c-4d44124839cf", 00:22:29.029 "is_configured": true, 00:22:29.029 "data_offset": 2048, 00:22:29.029 "data_size": 63488 00:22:29.029 }, 00:22:29.029 { 00:22:29.029 "name": "BaseBdev3", 00:22:29.029 "uuid": "af2d60dd-224f-44e4-8038-9bce9e85103a", 00:22:29.029 "is_configured": true, 00:22:29.029 "data_offset": 2048, 00:22:29.029 "data_size": 63488 00:22:29.029 } 00:22:29.029 ] 00:22:29.029 }' 00:22:29.029 00:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:29.029 00:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.597 00:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:29.597 00:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:29.597 00:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.597 00:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:29.856 00:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:29.856 00:48:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:29.856 00:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:30.116 [2024-07-25 00:48:52.613611] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:30.116 00:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:30.116 00:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:30.116 00:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.116 00:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:30.375 00:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:30.375 00:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:30.375 00:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:30.634 [2024-07-25 00:48:53.143138] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:30.634 [2024-07-25 00:48:53.143421] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:30.634 [2024-07-25 00:48:53.252031] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:30.634 [2024-07-25 00:48:53.252267] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:30.634 [2024-07-25 00:48:53.252389] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:22:30.634 00:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:30.634 00:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:30.634 00:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.634 00:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:30.898 00:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:30.898 00:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:22:30.898 00:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:22:30.898 00:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:22:30.898 00:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:30.898 00:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:31.160 BaseBdev2 00:22:31.160 00:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:22:31.160 00:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:31.160 00:48:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:31.160 00:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:31.160 00:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:31.160 00:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:31.160 00:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:31.419 00:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:31.678 [ 00:22:31.678 { 00:22:31.678 "name": "BaseBdev2", 00:22:31.678 "aliases": [ 00:22:31.678 "e677c39e-c59c-42c3-a932-9d480762aa97" 00:22:31.678 ], 00:22:31.678 "product_name": "Malloc disk", 00:22:31.678 "block_size": 512, 00:22:31.678 "num_blocks": 65536, 00:22:31.678 "uuid": "e677c39e-c59c-42c3-a932-9d480762aa97", 00:22:31.678 "assigned_rate_limits": { 00:22:31.678 "rw_ios_per_sec": 0, 00:22:31.678 "rw_mbytes_per_sec": 0, 00:22:31.678 "r_mbytes_per_sec": 0, 00:22:31.678 "w_mbytes_per_sec": 0 00:22:31.678 }, 00:22:31.678 "claimed": false, 00:22:31.678 "zoned": false, 00:22:31.678 "supported_io_types": { 00:22:31.678 "read": true, 00:22:31.678 "write": true, 00:22:31.678 "unmap": true, 00:22:31.678 "flush": true, 00:22:31.678 "reset": true, 00:22:31.678 "nvme_admin": false, 00:22:31.678 "nvme_io": false, 00:22:31.678 "nvme_io_md": false, 00:22:31.678 "write_zeroes": true, 00:22:31.678 "zcopy": true, 00:22:31.678 "get_zone_info": false, 00:22:31.678 "zone_management": false, 00:22:31.678 "zone_append": false, 00:22:31.678 "compare": false, 00:22:31.678 "compare_and_write": false, 00:22:31.678 "abort": true, 00:22:31.678 "seek_hole": false, 00:22:31.678 "seek_data": false, 00:22:31.678 "copy": true, 00:22:31.678 "nvme_iov_md": false 00:22:31.678 }, 00:22:31.678 "memory_domains": [ 00:22:31.678 { 00:22:31.678 "dma_device_id": "system", 00:22:31.678 "dma_device_type": 1 00:22:31.678 }, 00:22:31.678 { 00:22:31.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:31.678 "dma_device_type": 2 00:22:31.678 } 00:22:31.678 ], 00:22:31.678 "driver_specific": {} 00:22:31.678 } 00:22:31.678 ] 00:22:31.678 00:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:31.678 00:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:31.678 00:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:31.678 00:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:31.938 BaseBdev3 00:22:31.938 00:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:22:31.938 00:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:31.938 00:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:31.938 00:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:31.938 00:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 
00:22:31.938 00:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:31.938 00:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:32.197 00:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:32.197 [ 00:22:32.197 { 00:22:32.197 "name": "BaseBdev3", 00:22:32.197 "aliases": [ 00:22:32.197 "69628e5a-aed3-4dac-b46b-1800a5535b7c" 00:22:32.197 ], 00:22:32.197 "product_name": "Malloc disk", 00:22:32.197 "block_size": 512, 00:22:32.197 "num_blocks": 65536, 00:22:32.197 "uuid": "69628e5a-aed3-4dac-b46b-1800a5535b7c", 00:22:32.197 "assigned_rate_limits": { 00:22:32.197 "rw_ios_per_sec": 0, 00:22:32.197 "rw_mbytes_per_sec": 0, 00:22:32.197 "r_mbytes_per_sec": 0, 00:22:32.197 "w_mbytes_per_sec": 0 00:22:32.197 }, 00:22:32.197 "claimed": false, 00:22:32.197 "zoned": false, 00:22:32.197 "supported_io_types": { 00:22:32.197 "read": true, 00:22:32.197 "write": true, 00:22:32.197 "unmap": true, 00:22:32.197 "flush": true, 00:22:32.197 "reset": true, 00:22:32.197 "nvme_admin": false, 00:22:32.197 "nvme_io": false, 00:22:32.197 "nvme_io_md": false, 00:22:32.197 "write_zeroes": true, 00:22:32.197 "zcopy": true, 00:22:32.197 "get_zone_info": false, 00:22:32.197 "zone_management": false, 00:22:32.197 "zone_append": false, 00:22:32.197 "compare": false, 00:22:32.197 "compare_and_write": false, 00:22:32.197 "abort": true, 00:22:32.197 "seek_hole": false, 00:22:32.197 "seek_data": false, 00:22:32.197 "copy": true, 00:22:32.197 "nvme_iov_md": false 00:22:32.197 }, 00:22:32.197 "memory_domains": [ 00:22:32.197 { 00:22:32.197 "dma_device_id": "system", 00:22:32.197 "dma_device_type": 1 00:22:32.197 }, 00:22:32.197 { 00:22:32.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:32.197 "dma_device_type": 2 00:22:32.197 } 00:22:32.197 ], 00:22:32.197 "driver_specific": {} 00:22:32.197 } 00:22:32.197 ] 00:22:32.457 00:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:32.457 00:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:32.457 00:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:32.457 00:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:32.457 [2024-07-25 00:48:55.012631] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:32.457 [2024-07-25 00:48:55.012931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:32.457 [2024-07-25 00:48:55.013035] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:32.457 [2024-07-25 00:48:55.015034] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:32.457 00:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:32.457 00:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:32.457 00:48:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:32.457 00:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:32.457 00:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:32.457 00:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:32.457 00:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:32.457 00:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:32.457 00:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:32.457 00:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:32.457 00:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.457 00:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:32.717 00:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:32.717 "name": "Existed_Raid", 00:22:32.717 "uuid": "976ab561-ce70-4076-87a8-b7816b30df82", 00:22:32.717 "strip_size_kb": 0, 00:22:32.717 "state": "configuring", 00:22:32.717 "raid_level": "raid1", 00:22:32.717 "superblock": true, 00:22:32.717 "num_base_bdevs": 3, 00:22:32.717 "num_base_bdevs_discovered": 2, 00:22:32.717 "num_base_bdevs_operational": 3, 00:22:32.717 "base_bdevs_list": [ 00:22:32.717 { 00:22:32.717 "name": "BaseBdev1", 00:22:32.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.717 "is_configured": false, 00:22:32.717 "data_offset": 0, 00:22:32.717 "data_size": 0 00:22:32.717 }, 00:22:32.717 { 00:22:32.717 "name": "BaseBdev2", 00:22:32.717 "uuid": "e677c39e-c59c-42c3-a932-9d480762aa97", 00:22:32.717 "is_configured": true, 00:22:32.717 "data_offset": 2048, 00:22:32.717 "data_size": 63488 00:22:32.717 }, 00:22:32.717 { 00:22:32.717 "name": "BaseBdev3", 00:22:32.717 "uuid": "69628e5a-aed3-4dac-b46b-1800a5535b7c", 00:22:32.717 "is_configured": true, 00:22:32.717 "data_offset": 2048, 00:22:32.717 "data_size": 63488 00:22:32.717 } 00:22:32.717 ] 00:22:32.717 }' 00:22:32.717 00:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:32.717 00:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.283 00:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:33.542 [2024-07-25 00:48:56.160824] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:33.542 00:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:33.542 00:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:33.542 00:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:33.542 00:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:33.542 00:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:33.542 00:48:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:33.542 00:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:33.542 00:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:33.542 00:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:33.542 00:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:33.542 00:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.542 00:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:33.802 00:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:33.802 "name": "Existed_Raid", 00:22:33.802 "uuid": "976ab561-ce70-4076-87a8-b7816b30df82", 00:22:33.802 "strip_size_kb": 0, 00:22:33.802 "state": "configuring", 00:22:33.802 "raid_level": "raid1", 00:22:33.802 "superblock": true, 00:22:33.802 "num_base_bdevs": 3, 00:22:33.802 "num_base_bdevs_discovered": 1, 00:22:33.802 "num_base_bdevs_operational": 3, 00:22:33.802 "base_bdevs_list": [ 00:22:33.802 { 00:22:33.802 "name": "BaseBdev1", 00:22:33.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.802 "is_configured": false, 00:22:33.802 "data_offset": 0, 00:22:33.802 "data_size": 0 00:22:33.802 }, 00:22:33.802 { 00:22:33.802 "name": null, 00:22:33.802 "uuid": "e677c39e-c59c-42c3-a932-9d480762aa97", 00:22:33.802 "is_configured": false, 00:22:33.802 "data_offset": 2048, 00:22:33.802 "data_size": 63488 00:22:33.802 }, 00:22:33.802 { 00:22:33.802 "name": "BaseBdev3", 00:22:33.802 "uuid": "69628e5a-aed3-4dac-b46b-1800a5535b7c", 00:22:33.802 "is_configured": true, 00:22:33.802 "data_offset": 2048, 00:22:33.802 "data_size": 63488 00:22:33.802 } 00:22:33.802 ] 00:22:33.802 }' 00:22:33.802 00:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:33.802 00:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.738 00:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.738 00:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:34.738 00:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:22:34.738 00:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:34.997 [2024-07-25 00:48:57.423934] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:34.997 BaseBdev1 00:22:34.997 00:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:22:34.997 00:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:34.997 00:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:34.997 00:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:34.997 00:48:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:34.997 00:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:34.997 00:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:34.997 00:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:35.256 [ 00:22:35.256 { 00:22:35.256 "name": "BaseBdev1", 00:22:35.256 "aliases": [ 00:22:35.256 "6f11211f-3f18-40af-85d7-5f2ba96e9539" 00:22:35.256 ], 00:22:35.256 "product_name": "Malloc disk", 00:22:35.256 "block_size": 512, 00:22:35.256 "num_blocks": 65536, 00:22:35.256 "uuid": "6f11211f-3f18-40af-85d7-5f2ba96e9539", 00:22:35.256 "assigned_rate_limits": { 00:22:35.256 "rw_ios_per_sec": 0, 00:22:35.256 "rw_mbytes_per_sec": 0, 00:22:35.256 "r_mbytes_per_sec": 0, 00:22:35.256 "w_mbytes_per_sec": 0 00:22:35.256 }, 00:22:35.256 "claimed": true, 00:22:35.256 "claim_type": "exclusive_write", 00:22:35.256 "zoned": false, 00:22:35.256 "supported_io_types": { 00:22:35.256 "read": true, 00:22:35.256 "write": true, 00:22:35.256 "unmap": true, 00:22:35.256 "flush": true, 00:22:35.256 "reset": true, 00:22:35.256 "nvme_admin": false, 00:22:35.256 "nvme_io": false, 00:22:35.256 "nvme_io_md": false, 00:22:35.256 "write_zeroes": true, 00:22:35.256 "zcopy": true, 00:22:35.256 "get_zone_info": false, 00:22:35.256 "zone_management": false, 00:22:35.256 "zone_append": false, 00:22:35.256 "compare": false, 00:22:35.256 "compare_and_write": false, 00:22:35.256 "abort": true, 00:22:35.256 "seek_hole": false, 00:22:35.256 "seek_data": false, 00:22:35.256 "copy": true, 00:22:35.256 "nvme_iov_md": false 00:22:35.256 }, 00:22:35.256 "memory_domains": [ 00:22:35.256 { 00:22:35.256 "dma_device_id": "system", 00:22:35.256 "dma_device_type": 1 00:22:35.256 }, 00:22:35.256 { 00:22:35.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.256 "dma_device_type": 2 00:22:35.256 } 00:22:35.256 ], 00:22:35.256 "driver_specific": {} 00:22:35.256 } 00:22:35.256 ] 00:22:35.256 00:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:35.256 00:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:35.256 00:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:35.256 00:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:35.256 00:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:35.256 00:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:35.256 00:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:35.256 00:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:35.256 00:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:35.256 00:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:35.256 00:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
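The verify_raid_bdev_state call being set up here asserts that Existed_Raid is still a configuring, 3-device raid1 with strip size 0. A rough sketch of that check, built on the same rpc.py/jq pipeline the trace runs next (the real helper in bdev_raid.sh compares additional fields, such as the discovered base bdev count):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r '.state' <<< "$info") == configuring ]] || exit 1
    [[ $(jq -r '.raid_level' <<< "$info") == raid1 ]] || exit 1
    [[ $(jq -r '.num_base_bdevs_operational' <<< "$info") == 3 ]] || exit 1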
00:22:35.256 00:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.256 00:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:35.515 00:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:35.515 "name": "Existed_Raid", 00:22:35.515 "uuid": "976ab561-ce70-4076-87a8-b7816b30df82", 00:22:35.515 "strip_size_kb": 0, 00:22:35.515 "state": "configuring", 00:22:35.515 "raid_level": "raid1", 00:22:35.515 "superblock": true, 00:22:35.515 "num_base_bdevs": 3, 00:22:35.515 "num_base_bdevs_discovered": 2, 00:22:35.515 "num_base_bdevs_operational": 3, 00:22:35.515 "base_bdevs_list": [ 00:22:35.515 { 00:22:35.515 "name": "BaseBdev1", 00:22:35.515 "uuid": "6f11211f-3f18-40af-85d7-5f2ba96e9539", 00:22:35.515 "is_configured": true, 00:22:35.515 "data_offset": 2048, 00:22:35.515 "data_size": 63488 00:22:35.515 }, 00:22:35.515 { 00:22:35.515 "name": null, 00:22:35.515 "uuid": "e677c39e-c59c-42c3-a932-9d480762aa97", 00:22:35.515 "is_configured": false, 00:22:35.515 "data_offset": 2048, 00:22:35.515 "data_size": 63488 00:22:35.515 }, 00:22:35.515 { 00:22:35.515 "name": "BaseBdev3", 00:22:35.515 "uuid": "69628e5a-aed3-4dac-b46b-1800a5535b7c", 00:22:35.515 "is_configured": true, 00:22:35.515 "data_offset": 2048, 00:22:35.515 "data_size": 63488 00:22:35.515 } 00:22:35.515 ] 00:22:35.515 }' 00:22:35.515 00:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:35.515 00:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.082 00:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.082 00:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:36.082 00:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:22:36.082 00:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:22:36.341 [2024-07-25 00:48:58.920206] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:36.341 00:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:36.341 00:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:36.342 00:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:36.342 00:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:36.342 00:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:36.342 00:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:36.342 00:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:36.342 00:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:36.342 00:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:22:36.342 00:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:36.342 00:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:36.342 00:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.600 00:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:36.600 "name": "Existed_Raid", 00:22:36.600 "uuid": "976ab561-ce70-4076-87a8-b7816b30df82", 00:22:36.600 "strip_size_kb": 0, 00:22:36.600 "state": "configuring", 00:22:36.600 "raid_level": "raid1", 00:22:36.600 "superblock": true, 00:22:36.600 "num_base_bdevs": 3, 00:22:36.600 "num_base_bdevs_discovered": 1, 00:22:36.600 "num_base_bdevs_operational": 3, 00:22:36.600 "base_bdevs_list": [ 00:22:36.600 { 00:22:36.600 "name": "BaseBdev1", 00:22:36.600 "uuid": "6f11211f-3f18-40af-85d7-5f2ba96e9539", 00:22:36.600 "is_configured": true, 00:22:36.600 "data_offset": 2048, 00:22:36.600 "data_size": 63488 00:22:36.600 }, 00:22:36.600 { 00:22:36.600 "name": null, 00:22:36.600 "uuid": "e677c39e-c59c-42c3-a932-9d480762aa97", 00:22:36.600 "is_configured": false, 00:22:36.600 "data_offset": 2048, 00:22:36.600 "data_size": 63488 00:22:36.600 }, 00:22:36.600 { 00:22:36.600 "name": null, 00:22:36.600 "uuid": "69628e5a-aed3-4dac-b46b-1800a5535b7c", 00:22:36.600 "is_configured": false, 00:22:36.600 "data_offset": 2048, 00:22:36.601 "data_size": 63488 00:22:36.601 } 00:22:36.601 ] 00:22:36.601 }' 00:22:36.601 00:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:36.601 00:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.168 00:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:37.168 00:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.427 00:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:22:37.427 00:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:37.686 [2024-07-25 00:49:00.204440] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:37.686 00:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:37.686 00:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:37.686 00:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:37.686 00:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:37.686 00:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:37.686 00:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:37.686 00:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:37.686 00:49:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:37.686 00:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:37.686 00:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:37.686 00:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.686 00:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:37.946 00:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:37.946 "name": "Existed_Raid", 00:22:37.946 "uuid": "976ab561-ce70-4076-87a8-b7816b30df82", 00:22:37.946 "strip_size_kb": 0, 00:22:37.946 "state": "configuring", 00:22:37.946 "raid_level": "raid1", 00:22:37.946 "superblock": true, 00:22:37.946 "num_base_bdevs": 3, 00:22:37.946 "num_base_bdevs_discovered": 2, 00:22:37.946 "num_base_bdevs_operational": 3, 00:22:37.946 "base_bdevs_list": [ 00:22:37.946 { 00:22:37.946 "name": "BaseBdev1", 00:22:37.946 "uuid": "6f11211f-3f18-40af-85d7-5f2ba96e9539", 00:22:37.946 "is_configured": true, 00:22:37.946 "data_offset": 2048, 00:22:37.946 "data_size": 63488 00:22:37.946 }, 00:22:37.946 { 00:22:37.946 "name": null, 00:22:37.946 "uuid": "e677c39e-c59c-42c3-a932-9d480762aa97", 00:22:37.946 "is_configured": false, 00:22:37.946 "data_offset": 2048, 00:22:37.946 "data_size": 63488 00:22:37.946 }, 00:22:37.946 { 00:22:37.946 "name": "BaseBdev3", 00:22:37.946 "uuid": "69628e5a-aed3-4dac-b46b-1800a5535b7c", 00:22:37.946 "is_configured": true, 00:22:37.946 "data_offset": 2048, 00:22:37.946 "data_size": 63488 00:22:37.946 } 00:22:37.946 ] 00:22:37.946 }' 00:22:37.946 00:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:37.946 00:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:38.514 00:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.514 00:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:38.773 00:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:22:38.773 00:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:38.773 [2024-07-25 00:49:01.424700] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:39.031 00:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:39.031 00:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:39.031 00:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:39.031 00:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:39.031 00:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:39.031 00:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:39.031 00:49:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:39.031 00:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:39.031 00:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:39.031 00:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:39.031 00:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.031 00:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:39.290 00:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:39.290 "name": "Existed_Raid", 00:22:39.290 "uuid": "976ab561-ce70-4076-87a8-b7816b30df82", 00:22:39.290 "strip_size_kb": 0, 00:22:39.290 "state": "configuring", 00:22:39.290 "raid_level": "raid1", 00:22:39.290 "superblock": true, 00:22:39.290 "num_base_bdevs": 3, 00:22:39.290 "num_base_bdevs_discovered": 1, 00:22:39.290 "num_base_bdevs_operational": 3, 00:22:39.290 "base_bdevs_list": [ 00:22:39.290 { 00:22:39.290 "name": null, 00:22:39.290 "uuid": "6f11211f-3f18-40af-85d7-5f2ba96e9539", 00:22:39.290 "is_configured": false, 00:22:39.290 "data_offset": 2048, 00:22:39.290 "data_size": 63488 00:22:39.290 }, 00:22:39.290 { 00:22:39.290 "name": null, 00:22:39.290 "uuid": "e677c39e-c59c-42c3-a932-9d480762aa97", 00:22:39.290 "is_configured": false, 00:22:39.290 "data_offset": 2048, 00:22:39.290 "data_size": 63488 00:22:39.290 }, 00:22:39.290 { 00:22:39.290 "name": "BaseBdev3", 00:22:39.290 "uuid": "69628e5a-aed3-4dac-b46b-1800a5535b7c", 00:22:39.290 "is_configured": true, 00:22:39.290 "data_offset": 2048, 00:22:39.290 "data_size": 63488 00:22:39.290 } 00:22:39.290 ] 00:22:39.290 }' 00:22:39.290 00:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:39.290 00:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:39.857 00:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:39.857 00:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.116 00:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:22:40.116 00:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:40.414 [2024-07-25 00:49:02.771249] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:40.414 00:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:40.414 00:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:40.414 00:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:40.414 00:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:40.414 00:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:40.414 00:49:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:40.414 00:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:40.414 00:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:40.414 00:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:40.414 00:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:40.414 00:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.414 00:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:40.414 00:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:40.414 "name": "Existed_Raid", 00:22:40.414 "uuid": "976ab561-ce70-4076-87a8-b7816b30df82", 00:22:40.414 "strip_size_kb": 0, 00:22:40.414 "state": "configuring", 00:22:40.414 "raid_level": "raid1", 00:22:40.414 "superblock": true, 00:22:40.414 "num_base_bdevs": 3, 00:22:40.414 "num_base_bdevs_discovered": 2, 00:22:40.414 "num_base_bdevs_operational": 3, 00:22:40.414 "base_bdevs_list": [ 00:22:40.414 { 00:22:40.414 "name": null, 00:22:40.414 "uuid": "6f11211f-3f18-40af-85d7-5f2ba96e9539", 00:22:40.414 "is_configured": false, 00:22:40.414 "data_offset": 2048, 00:22:40.414 "data_size": 63488 00:22:40.414 }, 00:22:40.414 { 00:22:40.414 "name": "BaseBdev2", 00:22:40.414 "uuid": "e677c39e-c59c-42c3-a932-9d480762aa97", 00:22:40.414 "is_configured": true, 00:22:40.414 "data_offset": 2048, 00:22:40.414 "data_size": 63488 00:22:40.414 }, 00:22:40.414 { 00:22:40.414 "name": "BaseBdev3", 00:22:40.414 "uuid": "69628e5a-aed3-4dac-b46b-1800a5535b7c", 00:22:40.414 "is_configured": true, 00:22:40.414 "data_offset": 2048, 00:22:40.414 "data_size": 63488 00:22:40.414 } 00:22:40.414 ] 00:22:40.414 }' 00:22:40.414 00:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:40.414 00:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:40.982 00:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.982 00:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:41.240 00:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:41.240 00:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.240 00:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:41.499 00:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 6f11211f-3f18-40af-85d7-5f2ba96e9539 00:22:41.759 [2024-07-25 00:49:04.209865] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:41.759 [2024-07-25 00:49:04.210312] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:22:41.759 [2024-07-25 
00:49:04.210445] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:41.759 [2024-07-25 00:49:04.210611] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:41.759 [2024-07-25 00:49:04.211118] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:22:41.759 [2024-07-25 00:49:04.211240] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008a80 00:22:41.759 [2024-07-25 00:49:04.211499] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:41.759 NewBaseBdev 00:22:41.759 00:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:22:41.759 00:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:22:41.759 00:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:41.759 00:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:41.759 00:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:41.759 00:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:41.759 00:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:42.018 00:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:42.278 [ 00:22:42.278 { 00:22:42.278 "name": "NewBaseBdev", 00:22:42.278 "aliases": [ 00:22:42.278 "6f11211f-3f18-40af-85d7-5f2ba96e9539" 00:22:42.278 ], 00:22:42.278 "product_name": "Malloc disk", 00:22:42.278 "block_size": 512, 00:22:42.278 "num_blocks": 65536, 00:22:42.278 "uuid": "6f11211f-3f18-40af-85d7-5f2ba96e9539", 00:22:42.278 "assigned_rate_limits": { 00:22:42.278 "rw_ios_per_sec": 0, 00:22:42.278 "rw_mbytes_per_sec": 0, 00:22:42.278 "r_mbytes_per_sec": 0, 00:22:42.278 "w_mbytes_per_sec": 0 00:22:42.278 }, 00:22:42.278 "claimed": true, 00:22:42.278 "claim_type": "exclusive_write", 00:22:42.278 "zoned": false, 00:22:42.278 "supported_io_types": { 00:22:42.278 "read": true, 00:22:42.278 "write": true, 00:22:42.278 "unmap": true, 00:22:42.278 "flush": true, 00:22:42.278 "reset": true, 00:22:42.278 "nvme_admin": false, 00:22:42.278 "nvme_io": false, 00:22:42.278 "nvme_io_md": false, 00:22:42.278 "write_zeroes": true, 00:22:42.278 "zcopy": true, 00:22:42.278 "get_zone_info": false, 00:22:42.278 "zone_management": false, 00:22:42.278 "zone_append": false, 00:22:42.278 "compare": false, 00:22:42.278 "compare_and_write": false, 00:22:42.278 "abort": true, 00:22:42.278 "seek_hole": false, 00:22:42.278 "seek_data": false, 00:22:42.278 "copy": true, 00:22:42.278 "nvme_iov_md": false 00:22:42.278 }, 00:22:42.278 "memory_domains": [ 00:22:42.278 { 00:22:42.278 "dma_device_id": "system", 00:22:42.278 "dma_device_type": 1 00:22:42.278 }, 00:22:42.278 { 00:22:42.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:42.278 "dma_device_type": 2 00:22:42.278 } 00:22:42.278 ], 00:22:42.278 "driver_specific": {} 00:22:42.278 } 00:22:42.278 ] 00:22:42.278 00:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:42.278 00:49:04 
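The steps traced over the last several dumps form the base-device replacement path: BaseBdev1's backing malloc bdev is deleted, the raid keeps the slot's UUID in its metadata, and a malloc bdev is recreated under a new name with that same UUID so the array can claim it and bring itself online. A condensed sketch using the commands shown in this log (6f11211f-3f18-40af-85d7-5f2ba96e9539 is the UUID in this run):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_delete BaseBdev1
    uuid=$($rpc bdev_raid_get_bdevs all | jq -r '.[0].base_bdevs_list[0].uuid')
    $rpc bdev_malloc_create 32 512 -b NewBaseBdev -u "$uuid"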
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:42.278 00:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:42.278 00:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:42.278 00:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:42.278 00:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:42.278 00:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:42.278 00:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:42.278 00:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:42.278 00:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:42.278 00:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:42.278 00:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:42.278 00:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.537 00:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:42.538 "name": "Existed_Raid", 00:22:42.538 "uuid": "976ab561-ce70-4076-87a8-b7816b30df82", 00:22:42.538 "strip_size_kb": 0, 00:22:42.538 "state": "online", 00:22:42.538 "raid_level": "raid1", 00:22:42.538 "superblock": true, 00:22:42.538 "num_base_bdevs": 3, 00:22:42.538 "num_base_bdevs_discovered": 3, 00:22:42.538 "num_base_bdevs_operational": 3, 00:22:42.538 "base_bdevs_list": [ 00:22:42.538 { 00:22:42.538 "name": "NewBaseBdev", 00:22:42.538 "uuid": "6f11211f-3f18-40af-85d7-5f2ba96e9539", 00:22:42.538 "is_configured": true, 00:22:42.538 "data_offset": 2048, 00:22:42.538 "data_size": 63488 00:22:42.538 }, 00:22:42.538 { 00:22:42.538 "name": "BaseBdev2", 00:22:42.538 "uuid": "e677c39e-c59c-42c3-a932-9d480762aa97", 00:22:42.538 "is_configured": true, 00:22:42.538 "data_offset": 2048, 00:22:42.538 "data_size": 63488 00:22:42.538 }, 00:22:42.538 { 00:22:42.538 "name": "BaseBdev3", 00:22:42.538 "uuid": "69628e5a-aed3-4dac-b46b-1800a5535b7c", 00:22:42.538 "is_configured": true, 00:22:42.538 "data_offset": 2048, 00:22:42.538 "data_size": 63488 00:22:42.538 } 00:22:42.538 ] 00:22:42.538 }' 00:22:42.538 00:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:42.538 00:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:43.107 00:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:22:43.107 00:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:43.107 00:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:43.107 00:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:43.107 00:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:43.107 00:49:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@198 -- # local name 00:22:43.107 00:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:43.107 00:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:43.107 [2024-07-25 00:49:05.686400] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:43.107 00:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:43.107 "name": "Existed_Raid", 00:22:43.107 "aliases": [ 00:22:43.107 "976ab561-ce70-4076-87a8-b7816b30df82" 00:22:43.107 ], 00:22:43.107 "product_name": "Raid Volume", 00:22:43.107 "block_size": 512, 00:22:43.107 "num_blocks": 63488, 00:22:43.107 "uuid": "976ab561-ce70-4076-87a8-b7816b30df82", 00:22:43.107 "assigned_rate_limits": { 00:22:43.107 "rw_ios_per_sec": 0, 00:22:43.107 "rw_mbytes_per_sec": 0, 00:22:43.107 "r_mbytes_per_sec": 0, 00:22:43.107 "w_mbytes_per_sec": 0 00:22:43.107 }, 00:22:43.107 "claimed": false, 00:22:43.107 "zoned": false, 00:22:43.107 "supported_io_types": { 00:22:43.107 "read": true, 00:22:43.107 "write": true, 00:22:43.107 "unmap": false, 00:22:43.107 "flush": false, 00:22:43.107 "reset": true, 00:22:43.107 "nvme_admin": false, 00:22:43.107 "nvme_io": false, 00:22:43.107 "nvme_io_md": false, 00:22:43.107 "write_zeroes": true, 00:22:43.107 "zcopy": false, 00:22:43.107 "get_zone_info": false, 00:22:43.107 "zone_management": false, 00:22:43.107 "zone_append": false, 00:22:43.107 "compare": false, 00:22:43.107 "compare_and_write": false, 00:22:43.107 "abort": false, 00:22:43.107 "seek_hole": false, 00:22:43.107 "seek_data": false, 00:22:43.107 "copy": false, 00:22:43.107 "nvme_iov_md": false 00:22:43.107 }, 00:22:43.107 "memory_domains": [ 00:22:43.107 { 00:22:43.107 "dma_device_id": "system", 00:22:43.107 "dma_device_type": 1 00:22:43.107 }, 00:22:43.107 { 00:22:43.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:43.107 "dma_device_type": 2 00:22:43.107 }, 00:22:43.107 { 00:22:43.107 "dma_device_id": "system", 00:22:43.107 "dma_device_type": 1 00:22:43.107 }, 00:22:43.107 { 00:22:43.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:43.107 "dma_device_type": 2 00:22:43.107 }, 00:22:43.107 { 00:22:43.107 "dma_device_id": "system", 00:22:43.107 "dma_device_type": 1 00:22:43.107 }, 00:22:43.107 { 00:22:43.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:43.107 "dma_device_type": 2 00:22:43.107 } 00:22:43.107 ], 00:22:43.107 "driver_specific": { 00:22:43.107 "raid": { 00:22:43.107 "uuid": "976ab561-ce70-4076-87a8-b7816b30df82", 00:22:43.107 "strip_size_kb": 0, 00:22:43.107 "state": "online", 00:22:43.107 "raid_level": "raid1", 00:22:43.107 "superblock": true, 00:22:43.107 "num_base_bdevs": 3, 00:22:43.107 "num_base_bdevs_discovered": 3, 00:22:43.107 "num_base_bdevs_operational": 3, 00:22:43.107 "base_bdevs_list": [ 00:22:43.107 { 00:22:43.107 "name": "NewBaseBdev", 00:22:43.107 "uuid": "6f11211f-3f18-40af-85d7-5f2ba96e9539", 00:22:43.107 "is_configured": true, 00:22:43.107 "data_offset": 2048, 00:22:43.108 "data_size": 63488 00:22:43.108 }, 00:22:43.108 { 00:22:43.108 "name": "BaseBdev2", 00:22:43.108 "uuid": "e677c39e-c59c-42c3-a932-9d480762aa97", 00:22:43.108 "is_configured": true, 00:22:43.108 "data_offset": 2048, 00:22:43.108 "data_size": 63488 00:22:43.108 }, 00:22:43.108 { 00:22:43.108 "name": "BaseBdev3", 00:22:43.108 "uuid": "69628e5a-aed3-4dac-b46b-1800a5535b7c", 00:22:43.108 "is_configured": true, 
00:22:43.108 "data_offset": 2048, 00:22:43.108 "data_size": 63488 00:22:43.108 } 00:22:43.108 ] 00:22:43.108 } 00:22:43.108 } 00:22:43.108 }' 00:22:43.108 00:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:43.108 00:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:43.108 BaseBdev2 00:22:43.108 BaseBdev3' 00:22:43.108 00:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:43.108 00:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:43.108 00:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:43.367 00:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:43.367 "name": "NewBaseBdev", 00:22:43.367 "aliases": [ 00:22:43.367 "6f11211f-3f18-40af-85d7-5f2ba96e9539" 00:22:43.367 ], 00:22:43.367 "product_name": "Malloc disk", 00:22:43.367 "block_size": 512, 00:22:43.367 "num_blocks": 65536, 00:22:43.367 "uuid": "6f11211f-3f18-40af-85d7-5f2ba96e9539", 00:22:43.367 "assigned_rate_limits": { 00:22:43.367 "rw_ios_per_sec": 0, 00:22:43.367 "rw_mbytes_per_sec": 0, 00:22:43.367 "r_mbytes_per_sec": 0, 00:22:43.367 "w_mbytes_per_sec": 0 00:22:43.367 }, 00:22:43.367 "claimed": true, 00:22:43.367 "claim_type": "exclusive_write", 00:22:43.367 "zoned": false, 00:22:43.367 "supported_io_types": { 00:22:43.367 "read": true, 00:22:43.367 "write": true, 00:22:43.367 "unmap": true, 00:22:43.367 "flush": true, 00:22:43.367 "reset": true, 00:22:43.367 "nvme_admin": false, 00:22:43.367 "nvme_io": false, 00:22:43.367 "nvme_io_md": false, 00:22:43.367 "write_zeroes": true, 00:22:43.367 "zcopy": true, 00:22:43.367 "get_zone_info": false, 00:22:43.367 "zone_management": false, 00:22:43.367 "zone_append": false, 00:22:43.367 "compare": false, 00:22:43.367 "compare_and_write": false, 00:22:43.367 "abort": true, 00:22:43.367 "seek_hole": false, 00:22:43.367 "seek_data": false, 00:22:43.367 "copy": true, 00:22:43.367 "nvme_iov_md": false 00:22:43.367 }, 00:22:43.367 "memory_domains": [ 00:22:43.367 { 00:22:43.367 "dma_device_id": "system", 00:22:43.367 "dma_device_type": 1 00:22:43.367 }, 00:22:43.367 { 00:22:43.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:43.367 "dma_device_type": 2 00:22:43.367 } 00:22:43.367 ], 00:22:43.367 "driver_specific": {} 00:22:43.367 }' 00:22:43.367 00:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:43.661 00:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:43.661 00:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:43.661 00:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:43.661 00:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:43.661 00:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:43.661 00:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:43.661 00:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:43.661 00:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- 
# [[ null == null ]] 00:22:43.661 00:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:43.943 00:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:43.943 00:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:43.943 00:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:43.943 00:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:43.943 00:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:44.202 00:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:44.202 "name": "BaseBdev2", 00:22:44.202 "aliases": [ 00:22:44.202 "e677c39e-c59c-42c3-a932-9d480762aa97" 00:22:44.202 ], 00:22:44.202 "product_name": "Malloc disk", 00:22:44.202 "block_size": 512, 00:22:44.202 "num_blocks": 65536, 00:22:44.203 "uuid": "e677c39e-c59c-42c3-a932-9d480762aa97", 00:22:44.203 "assigned_rate_limits": { 00:22:44.203 "rw_ios_per_sec": 0, 00:22:44.203 "rw_mbytes_per_sec": 0, 00:22:44.203 "r_mbytes_per_sec": 0, 00:22:44.203 "w_mbytes_per_sec": 0 00:22:44.203 }, 00:22:44.203 "claimed": true, 00:22:44.203 "claim_type": "exclusive_write", 00:22:44.203 "zoned": false, 00:22:44.203 "supported_io_types": { 00:22:44.203 "read": true, 00:22:44.203 "write": true, 00:22:44.203 "unmap": true, 00:22:44.203 "flush": true, 00:22:44.203 "reset": true, 00:22:44.203 "nvme_admin": false, 00:22:44.203 "nvme_io": false, 00:22:44.203 "nvme_io_md": false, 00:22:44.203 "write_zeroes": true, 00:22:44.203 "zcopy": true, 00:22:44.203 "get_zone_info": false, 00:22:44.203 "zone_management": false, 00:22:44.203 "zone_append": false, 00:22:44.203 "compare": false, 00:22:44.203 "compare_and_write": false, 00:22:44.203 "abort": true, 00:22:44.203 "seek_hole": false, 00:22:44.203 "seek_data": false, 00:22:44.203 "copy": true, 00:22:44.203 "nvme_iov_md": false 00:22:44.203 }, 00:22:44.203 "memory_domains": [ 00:22:44.203 { 00:22:44.203 "dma_device_id": "system", 00:22:44.203 "dma_device_type": 1 00:22:44.203 }, 00:22:44.203 { 00:22:44.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:44.203 "dma_device_type": 2 00:22:44.203 } 00:22:44.203 ], 00:22:44.203 "driver_specific": {} 00:22:44.203 }' 00:22:44.203 00:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:44.203 00:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:44.203 00:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:44.203 00:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:44.203 00:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:44.203 00:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:44.203 00:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:44.462 00:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:44.462 00:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:44.462 00:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:44.462 00:49:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:44.462 00:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:44.462 00:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:44.462 00:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:44.462 00:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:44.722 00:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:44.722 "name": "BaseBdev3", 00:22:44.722 "aliases": [ 00:22:44.722 "69628e5a-aed3-4dac-b46b-1800a5535b7c" 00:22:44.722 ], 00:22:44.722 "product_name": "Malloc disk", 00:22:44.722 "block_size": 512, 00:22:44.722 "num_blocks": 65536, 00:22:44.722 "uuid": "69628e5a-aed3-4dac-b46b-1800a5535b7c", 00:22:44.722 "assigned_rate_limits": { 00:22:44.722 "rw_ios_per_sec": 0, 00:22:44.722 "rw_mbytes_per_sec": 0, 00:22:44.722 "r_mbytes_per_sec": 0, 00:22:44.722 "w_mbytes_per_sec": 0 00:22:44.722 }, 00:22:44.722 "claimed": true, 00:22:44.722 "claim_type": "exclusive_write", 00:22:44.722 "zoned": false, 00:22:44.722 "supported_io_types": { 00:22:44.722 "read": true, 00:22:44.722 "write": true, 00:22:44.722 "unmap": true, 00:22:44.722 "flush": true, 00:22:44.722 "reset": true, 00:22:44.722 "nvme_admin": false, 00:22:44.722 "nvme_io": false, 00:22:44.722 "nvme_io_md": false, 00:22:44.722 "write_zeroes": true, 00:22:44.722 "zcopy": true, 00:22:44.722 "get_zone_info": false, 00:22:44.722 "zone_management": false, 00:22:44.722 "zone_append": false, 00:22:44.722 "compare": false, 00:22:44.722 "compare_and_write": false, 00:22:44.722 "abort": true, 00:22:44.722 "seek_hole": false, 00:22:44.722 "seek_data": false, 00:22:44.722 "copy": true, 00:22:44.722 "nvme_iov_md": false 00:22:44.722 }, 00:22:44.722 "memory_domains": [ 00:22:44.722 { 00:22:44.722 "dma_device_id": "system", 00:22:44.722 "dma_device_type": 1 00:22:44.722 }, 00:22:44.722 { 00:22:44.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:44.722 "dma_device_type": 2 00:22:44.722 } 00:22:44.722 ], 00:22:44.722 "driver_specific": {} 00:22:44.722 }' 00:22:44.722 00:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:44.722 00:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:44.981 00:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:44.981 00:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:44.982 00:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:44.982 00:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:44.982 00:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:44.982 00:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:44.982 00:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:44.982 00:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:44.982 00:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:45.241 00:49:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:45.241 00:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:45.241 [2024-07-25 00:49:07.892536] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:45.241 [2024-07-25 00:49:07.892722] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:45.241 [2024-07-25 00:49:07.892941] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:45.241 [2024-07-25 00:49:07.893325] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:45.500 [2024-07-25 00:49:07.893447] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name Existed_Raid, state offline 00:22:45.500 00:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 132842 00:22:45.500 00:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 132842 ']' 00:22:45.500 00:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 132842 00:22:45.500 00:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:22:45.500 00:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:45.500 00:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 132842 00:22:45.500 00:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:45.500 00:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:45.500 00:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 132842' 00:22:45.500 killing process with pid 132842 00:22:45.500 00:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 132842 00:22:45.500 [2024-07-25 00:49:07.944931] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:45.500 00:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 132842 00:22:45.759 [2024-07-25 00:49:08.269767] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:47.136 00:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:22:47.136 00:22:47.136 real 0m28.649s 00:22:47.136 user 0m51.202s 00:22:47.136 sys 0m4.470s 00:22:47.136 ************************************ 00:22:47.136 END TEST raid_state_function_test_sb 00:22:47.136 ************************************ 00:22:47.136 00:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:47.136 00:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.136 00:49:09 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:22:47.136 00:49:09 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:47.136 00:49:09 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:47.136 00:49:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:47.396 ************************************ 00:22:47.396 START TEST raid_superblock_test 00:22:47.396 ************************************ 00:22:47.396 
00:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 3 00:22:47.396 00:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:22:47.396 00:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:22:47.396 00:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:22:47.396 00:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:22:47.396 00:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:22:47.396 00:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:22:47.396 00:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:22:47.396 00:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:22:47.396 00:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:22:47.396 00:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:22:47.396 00:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:22:47.396 00:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:22:47.396 00:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:22:47.396 00:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:22:47.396 00:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:22:47.396 00:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=133808 00:22:47.396 00:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:47.396 00:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 133808 /var/tmp/spdk-raid.sock 00:22:47.396 00:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 133808 ']' 00:22:47.396 00:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:47.396 00:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:47.396 00:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:47.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:47.396 00:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:47.396 00:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.396 [2024-07-25 00:49:09.897797] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
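Before any raid RPCs can be issued, the test launches the bdev_svc stub application on a private RPC socket with raid debug logging enabled and blocks until that socket accepts connections (waitforlisten is the autotest helper doing the polling; pid 133808 in this run). A condensed version of the launch traced above:

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock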
00:22:47.396 [2024-07-25 00:49:09.898336] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133808 ] 00:22:47.654 [2024-07-25 00:49:10.089040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.913 [2024-07-25 00:49:10.338840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.913 [2024-07-25 00:49:10.540064] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:48.481 00:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:48.481 00:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:22:48.481 00:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:22:48.481 00:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:48.481 00:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:22:48.481 00:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:22:48.481 00:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:48.481 00:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:48.481 00:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:48.481 00:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:48.481 00:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:22:48.481 malloc1 00:22:48.481 00:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:48.740 [2024-07-25 00:49:11.282622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:48.740 [2024-07-25 00:49:11.282895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:48.740 [2024-07-25 00:49:11.282970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:22:48.740 [2024-07-25 00:49:11.283066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:48.740 [2024-07-25 00:49:11.285383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:48.740 [2024-07-25 00:49:11.285540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:48.740 pt1 00:22:48.740 00:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:48.740 00:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:48.740 00:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:22:48.740 00:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:22:48.740 00:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:48.740 00:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:22:48.740 00:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:48.740 00:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:48.740 00:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:22:48.999 malloc2 00:22:48.999 00:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:49.259 [2024-07-25 00:49:11.691941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:49.259 [2024-07-25 00:49:11.692200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:49.259 [2024-07-25 00:49:11.692270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:22:49.259 [2024-07-25 00:49:11.692375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:49.259 [2024-07-25 00:49:11.694902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:49.259 [2024-07-25 00:49:11.695073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:49.259 pt2 00:22:49.259 00:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:49.259 00:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:49.259 00:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:22:49.259 00:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:22:49.259 00:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:49.259 00:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:49.259 00:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:49.259 00:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:49.259 00:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:22:49.259 malloc3 00:22:49.259 00:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:49.826 [2024-07-25 00:49:12.171003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:49.826 [2024-07-25 00:49:12.171255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:49.826 [2024-07-25 00:49:12.171320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:49.826 [2024-07-25 00:49:12.171422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:49.826 [2024-07-25 00:49:12.173634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:49.826 [2024-07-25 00:49:12.173798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:49.826 pt3 00:22:49.826 
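(For reference, the per-base-bdev loop being exercised above condenses to the shell sketch below; the rpc.py path, socket, sizes, UUIDs, and bdev names are the ones shown in this log, and the three-iteration loop assumes the three-base-bdev case of this run rather than a general recipe.)

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
# one 32 MiB malloc bdev (512-byte blocks, hence 65536 blocks) per base device,
# each wrapped by a passthru bdev with a fixed UUID, as in the log above
for i in 1 2 3; do
    rpc bdev_malloc_create 32 512 -b "malloc$i"
    rpc bdev_passthru_create -b "malloc$i" -p "pt$i" -u "00000000-0000-0000-0000-00000000000$i"
done
# assemble the raid1 volume over the passthru bdevs; -s writes an on-disk superblock
rpc bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s

The -s superblock is what the later steps in this log rely on: it is what lets the raid be re-assembled after the passthru bdevs are deleted and re-created, and what produces the "File exists" error when bdev_raid_create is retried directly on the malloc bdevs.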
00:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:49.826 00:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:49.826 00:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:22:49.826 [2024-07-25 00:49:12.351067] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:49.826 [2024-07-25 00:49:12.353092] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:49.826 [2024-07-25 00:49:12.353290] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:49.826 [2024-07-25 00:49:12.353499] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:22:49.826 [2024-07-25 00:49:12.353656] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:49.826 [2024-07-25 00:49:12.353809] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:22:49.826 [2024-07-25 00:49:12.354247] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:22:49.826 [2024-07-25 00:49:12.354373] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:22:49.826 [2024-07-25 00:49:12.354583] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:49.826 00:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:49.826 00:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:49.826 00:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:49.826 00:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:49.826 00:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:49.826 00:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:49.826 00:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:49.826 00:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:49.826 00:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:49.826 00:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:49.826 00:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.826 00:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.085 00:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:50.085 "name": "raid_bdev1", 00:22:50.085 "uuid": "7e0f5089-0098-49eb-8785-1ad594dc0552", 00:22:50.085 "strip_size_kb": 0, 00:22:50.085 "state": "online", 00:22:50.085 "raid_level": "raid1", 00:22:50.085 "superblock": true, 00:22:50.085 "num_base_bdevs": 3, 00:22:50.085 "num_base_bdevs_discovered": 3, 00:22:50.085 "num_base_bdevs_operational": 3, 00:22:50.085 "base_bdevs_list": [ 00:22:50.085 { 00:22:50.085 "name": "pt1", 00:22:50.085 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:50.085 
"is_configured": true, 00:22:50.085 "data_offset": 2048, 00:22:50.085 "data_size": 63488 00:22:50.085 }, 00:22:50.085 { 00:22:50.085 "name": "pt2", 00:22:50.085 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:50.085 "is_configured": true, 00:22:50.085 "data_offset": 2048, 00:22:50.085 "data_size": 63488 00:22:50.085 }, 00:22:50.085 { 00:22:50.085 "name": "pt3", 00:22:50.085 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:50.085 "is_configured": true, 00:22:50.085 "data_offset": 2048, 00:22:50.085 "data_size": 63488 00:22:50.085 } 00:22:50.085 ] 00:22:50.085 }' 00:22:50.085 00:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:50.085 00:49:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:50.653 00:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:22:50.653 00:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:50.653 00:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:50.653 00:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:50.653 00:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:50.653 00:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:50.653 00:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:50.653 00:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:50.918 [2024-07-25 00:49:13.311434] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:50.918 00:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:50.918 "name": "raid_bdev1", 00:22:50.918 "aliases": [ 00:22:50.918 "7e0f5089-0098-49eb-8785-1ad594dc0552" 00:22:50.918 ], 00:22:50.918 "product_name": "Raid Volume", 00:22:50.918 "block_size": 512, 00:22:50.918 "num_blocks": 63488, 00:22:50.918 "uuid": "7e0f5089-0098-49eb-8785-1ad594dc0552", 00:22:50.918 "assigned_rate_limits": { 00:22:50.918 "rw_ios_per_sec": 0, 00:22:50.918 "rw_mbytes_per_sec": 0, 00:22:50.918 "r_mbytes_per_sec": 0, 00:22:50.918 "w_mbytes_per_sec": 0 00:22:50.918 }, 00:22:50.918 "claimed": false, 00:22:50.918 "zoned": false, 00:22:50.918 "supported_io_types": { 00:22:50.918 "read": true, 00:22:50.918 "write": true, 00:22:50.918 "unmap": false, 00:22:50.918 "flush": false, 00:22:50.918 "reset": true, 00:22:50.918 "nvme_admin": false, 00:22:50.918 "nvme_io": false, 00:22:50.918 "nvme_io_md": false, 00:22:50.918 "write_zeroes": true, 00:22:50.918 "zcopy": false, 00:22:50.918 "get_zone_info": false, 00:22:50.918 "zone_management": false, 00:22:50.918 "zone_append": false, 00:22:50.918 "compare": false, 00:22:50.918 "compare_and_write": false, 00:22:50.918 "abort": false, 00:22:50.918 "seek_hole": false, 00:22:50.918 "seek_data": false, 00:22:50.918 "copy": false, 00:22:50.918 "nvme_iov_md": false 00:22:50.918 }, 00:22:50.918 "memory_domains": [ 00:22:50.918 { 00:22:50.918 "dma_device_id": "system", 00:22:50.918 "dma_device_type": 1 00:22:50.918 }, 00:22:50.918 { 00:22:50.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:50.918 "dma_device_type": 2 00:22:50.918 }, 00:22:50.918 { 00:22:50.918 "dma_device_id": "system", 00:22:50.918 "dma_device_type": 1 00:22:50.918 }, 00:22:50.918 { 
00:22:50.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:50.918 "dma_device_type": 2 00:22:50.918 }, 00:22:50.918 { 00:22:50.918 "dma_device_id": "system", 00:22:50.918 "dma_device_type": 1 00:22:50.918 }, 00:22:50.918 { 00:22:50.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:50.918 "dma_device_type": 2 00:22:50.918 } 00:22:50.918 ], 00:22:50.918 "driver_specific": { 00:22:50.918 "raid": { 00:22:50.918 "uuid": "7e0f5089-0098-49eb-8785-1ad594dc0552", 00:22:50.918 "strip_size_kb": 0, 00:22:50.918 "state": "online", 00:22:50.918 "raid_level": "raid1", 00:22:50.918 "superblock": true, 00:22:50.918 "num_base_bdevs": 3, 00:22:50.918 "num_base_bdevs_discovered": 3, 00:22:50.919 "num_base_bdevs_operational": 3, 00:22:50.919 "base_bdevs_list": [ 00:22:50.919 { 00:22:50.919 "name": "pt1", 00:22:50.919 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:50.919 "is_configured": true, 00:22:50.919 "data_offset": 2048, 00:22:50.919 "data_size": 63488 00:22:50.919 }, 00:22:50.919 { 00:22:50.919 "name": "pt2", 00:22:50.919 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:50.919 "is_configured": true, 00:22:50.919 "data_offset": 2048, 00:22:50.919 "data_size": 63488 00:22:50.919 }, 00:22:50.919 { 00:22:50.919 "name": "pt3", 00:22:50.919 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:50.919 "is_configured": true, 00:22:50.919 "data_offset": 2048, 00:22:50.919 "data_size": 63488 00:22:50.919 } 00:22:50.919 ] 00:22:50.919 } 00:22:50.919 } 00:22:50.919 }' 00:22:50.919 00:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:50.919 00:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:50.919 pt2 00:22:50.919 pt3' 00:22:50.919 00:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:50.919 00:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:50.919 00:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:51.178 00:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:51.178 "name": "pt1", 00:22:51.178 "aliases": [ 00:22:51.178 "00000000-0000-0000-0000-000000000001" 00:22:51.178 ], 00:22:51.178 "product_name": "passthru", 00:22:51.178 "block_size": 512, 00:22:51.178 "num_blocks": 65536, 00:22:51.178 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:51.178 "assigned_rate_limits": { 00:22:51.178 "rw_ios_per_sec": 0, 00:22:51.178 "rw_mbytes_per_sec": 0, 00:22:51.178 "r_mbytes_per_sec": 0, 00:22:51.178 "w_mbytes_per_sec": 0 00:22:51.178 }, 00:22:51.178 "claimed": true, 00:22:51.178 "claim_type": "exclusive_write", 00:22:51.178 "zoned": false, 00:22:51.178 "supported_io_types": { 00:22:51.178 "read": true, 00:22:51.178 "write": true, 00:22:51.178 "unmap": true, 00:22:51.178 "flush": true, 00:22:51.178 "reset": true, 00:22:51.178 "nvme_admin": false, 00:22:51.178 "nvme_io": false, 00:22:51.178 "nvme_io_md": false, 00:22:51.178 "write_zeroes": true, 00:22:51.178 "zcopy": true, 00:22:51.178 "get_zone_info": false, 00:22:51.178 "zone_management": false, 00:22:51.178 "zone_append": false, 00:22:51.178 "compare": false, 00:22:51.178 "compare_and_write": false, 00:22:51.178 "abort": true, 00:22:51.178 "seek_hole": false, 00:22:51.178 "seek_data": false, 00:22:51.178 "copy": true, 00:22:51.178 "nvme_iov_md": false 00:22:51.178 }, 
00:22:51.178 "memory_domains": [ 00:22:51.178 { 00:22:51.178 "dma_device_id": "system", 00:22:51.178 "dma_device_type": 1 00:22:51.178 }, 00:22:51.178 { 00:22:51.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:51.178 "dma_device_type": 2 00:22:51.178 } 00:22:51.178 ], 00:22:51.178 "driver_specific": { 00:22:51.178 "passthru": { 00:22:51.178 "name": "pt1", 00:22:51.178 "base_bdev_name": "malloc1" 00:22:51.178 } 00:22:51.178 } 00:22:51.178 }' 00:22:51.178 00:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:51.178 00:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:51.178 00:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:51.178 00:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:51.178 00:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:51.178 00:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:51.178 00:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:51.178 00:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:51.436 00:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:51.436 00:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:51.436 00:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:51.436 00:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:51.436 00:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:51.436 00:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:51.436 00:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:51.694 00:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:51.694 "name": "pt2", 00:22:51.694 "aliases": [ 00:22:51.694 "00000000-0000-0000-0000-000000000002" 00:22:51.694 ], 00:22:51.694 "product_name": "passthru", 00:22:51.694 "block_size": 512, 00:22:51.694 "num_blocks": 65536, 00:22:51.694 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:51.694 "assigned_rate_limits": { 00:22:51.694 "rw_ios_per_sec": 0, 00:22:51.694 "rw_mbytes_per_sec": 0, 00:22:51.694 "r_mbytes_per_sec": 0, 00:22:51.694 "w_mbytes_per_sec": 0 00:22:51.694 }, 00:22:51.694 "claimed": true, 00:22:51.694 "claim_type": "exclusive_write", 00:22:51.694 "zoned": false, 00:22:51.694 "supported_io_types": { 00:22:51.694 "read": true, 00:22:51.694 "write": true, 00:22:51.694 "unmap": true, 00:22:51.694 "flush": true, 00:22:51.694 "reset": true, 00:22:51.694 "nvme_admin": false, 00:22:51.694 "nvme_io": false, 00:22:51.694 "nvme_io_md": false, 00:22:51.694 "write_zeroes": true, 00:22:51.694 "zcopy": true, 00:22:51.694 "get_zone_info": false, 00:22:51.694 "zone_management": false, 00:22:51.694 "zone_append": false, 00:22:51.694 "compare": false, 00:22:51.694 "compare_and_write": false, 00:22:51.694 "abort": true, 00:22:51.694 "seek_hole": false, 00:22:51.694 "seek_data": false, 00:22:51.694 "copy": true, 00:22:51.694 "nvme_iov_md": false 00:22:51.694 }, 00:22:51.694 "memory_domains": [ 00:22:51.694 { 00:22:51.694 "dma_device_id": "system", 00:22:51.694 "dma_device_type": 1 00:22:51.694 }, 00:22:51.694 { 
00:22:51.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:51.694 "dma_device_type": 2 00:22:51.694 } 00:22:51.694 ], 00:22:51.694 "driver_specific": { 00:22:51.694 "passthru": { 00:22:51.694 "name": "pt2", 00:22:51.694 "base_bdev_name": "malloc2" 00:22:51.694 } 00:22:51.694 } 00:22:51.694 }' 00:22:51.695 00:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:51.695 00:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:51.695 00:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:51.695 00:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:51.695 00:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:51.695 00:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:51.695 00:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:51.695 00:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:51.695 00:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:51.695 00:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:51.953 00:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:51.953 00:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:51.953 00:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:51.953 00:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:51.953 00:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:51.953 00:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:51.953 "name": "pt3", 00:22:51.953 "aliases": [ 00:22:51.953 "00000000-0000-0000-0000-000000000003" 00:22:51.953 ], 00:22:51.953 "product_name": "passthru", 00:22:51.953 "block_size": 512, 00:22:51.953 "num_blocks": 65536, 00:22:51.953 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:51.953 "assigned_rate_limits": { 00:22:51.953 "rw_ios_per_sec": 0, 00:22:51.953 "rw_mbytes_per_sec": 0, 00:22:51.953 "r_mbytes_per_sec": 0, 00:22:51.953 "w_mbytes_per_sec": 0 00:22:51.953 }, 00:22:51.953 "claimed": true, 00:22:51.953 "claim_type": "exclusive_write", 00:22:51.953 "zoned": false, 00:22:51.953 "supported_io_types": { 00:22:51.953 "read": true, 00:22:51.953 "write": true, 00:22:51.953 "unmap": true, 00:22:51.953 "flush": true, 00:22:51.953 "reset": true, 00:22:51.953 "nvme_admin": false, 00:22:51.953 "nvme_io": false, 00:22:51.953 "nvme_io_md": false, 00:22:51.953 "write_zeroes": true, 00:22:51.953 "zcopy": true, 00:22:51.953 "get_zone_info": false, 00:22:51.953 "zone_management": false, 00:22:51.953 "zone_append": false, 00:22:51.953 "compare": false, 00:22:51.953 "compare_and_write": false, 00:22:51.953 "abort": true, 00:22:51.953 "seek_hole": false, 00:22:51.953 "seek_data": false, 00:22:51.953 "copy": true, 00:22:51.953 "nvme_iov_md": false 00:22:51.953 }, 00:22:51.953 "memory_domains": [ 00:22:51.953 { 00:22:51.953 "dma_device_id": "system", 00:22:51.953 "dma_device_type": 1 00:22:51.953 }, 00:22:51.953 { 00:22:51.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:51.953 "dma_device_type": 2 00:22:51.953 } 00:22:51.953 ], 00:22:51.953 "driver_specific": { 
00:22:51.953 "passthru": { 00:22:51.953 "name": "pt3", 00:22:51.953 "base_bdev_name": "malloc3" 00:22:51.953 } 00:22:51.953 } 00:22:51.953 }' 00:22:51.953 00:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:52.212 00:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:52.212 00:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:52.212 00:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:52.212 00:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:52.212 00:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:52.212 00:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:52.212 00:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:52.212 00:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:52.212 00:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:52.471 00:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:52.471 00:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:52.471 00:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:22:52.471 00:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:52.729 [2024-07-25 00:49:15.173351] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:52.729 00:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=7e0f5089-0098-49eb-8785-1ad594dc0552 00:22:52.729 00:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 7e0f5089-0098-49eb-8785-1ad594dc0552 ']' 00:22:52.729 00:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:52.729 [2024-07-25 00:49:15.353107] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:52.729 [2024-07-25 00:49:15.353246] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:52.729 [2024-07-25 00:49:15.353462] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:52.729 [2024-07-25 00:49:15.353639] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:52.729 [2024-07-25 00:49:15.353718] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:22:52.729 00:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.729 00:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:22:52.987 00:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:22:52.987 00:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:22:52.987 00:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:52.987 00:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:53.245 00:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:53.245 00:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:53.503 00:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:53.503 00:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:53.762 00:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:53.762 00:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:54.021 00:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:22:54.021 00:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:54.021 00:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:22:54.021 00:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:54.021 00:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:54.021 00:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:54.021 00:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:54.021 00:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:54.021 00:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:54.021 00:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:54.021 00:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:54.021 00:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:54.021 00:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:54.281 [2024-07-25 00:49:16.685318] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:54.281 [2024-07-25 00:49:16.687306] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:54.281 [2024-07-25 00:49:16.687482] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:54.281 [2024-07-25 00:49:16.687563] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:54.281 [2024-07-25 00:49:16.687725] 
bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:54.281 [2024-07-25 00:49:16.687867] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:22:54.281 [2024-07-25 00:49:16.687923] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:54.281 [2024-07-25 00:49:16.688057] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:22:54.281 request: 00:22:54.281 { 00:22:54.281 "name": "raid_bdev1", 00:22:54.281 "raid_level": "raid1", 00:22:54.281 "base_bdevs": [ 00:22:54.281 "malloc1", 00:22:54.281 "malloc2", 00:22:54.281 "malloc3" 00:22:54.281 ], 00:22:54.281 "superblock": false, 00:22:54.281 "method": "bdev_raid_create", 00:22:54.281 "req_id": 1 00:22:54.281 } 00:22:54.281 Got JSON-RPC error response 00:22:54.281 response: 00:22:54.281 { 00:22:54.281 "code": -17, 00:22:54.281 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:54.281 } 00:22:54.281 00:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:22:54.281 00:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:54.281 00:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:54.281 00:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:54.281 00:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:22:54.281 00:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.281 00:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:22:54.281 00:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:22:54.281 00:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:54.540 [2024-07-25 00:49:17.041298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:54.540 [2024-07-25 00:49:17.041537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:54.540 [2024-07-25 00:49:17.041603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:22:54.540 [2024-07-25 00:49:17.041681] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:54.540 [2024-07-25 00:49:17.043980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:54.540 [2024-07-25 00:49:17.044133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:54.540 [2024-07-25 00:49:17.044325] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:54.540 [2024-07-25 00:49:17.044455] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:54.540 pt1 00:22:54.540 00:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:54.540 00:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:54.540 00:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:22:54.540 00:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:54.540 00:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:54.540 00:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:54.540 00:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:54.540 00:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:54.540 00:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:54.540 00:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:54.540 00:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.540 00:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.800 00:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:54.800 "name": "raid_bdev1", 00:22:54.800 "uuid": "7e0f5089-0098-49eb-8785-1ad594dc0552", 00:22:54.800 "strip_size_kb": 0, 00:22:54.800 "state": "configuring", 00:22:54.800 "raid_level": "raid1", 00:22:54.800 "superblock": true, 00:22:54.800 "num_base_bdevs": 3, 00:22:54.800 "num_base_bdevs_discovered": 1, 00:22:54.800 "num_base_bdevs_operational": 3, 00:22:54.800 "base_bdevs_list": [ 00:22:54.800 { 00:22:54.800 "name": "pt1", 00:22:54.800 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:54.800 "is_configured": true, 00:22:54.800 "data_offset": 2048, 00:22:54.800 "data_size": 63488 00:22:54.800 }, 00:22:54.800 { 00:22:54.800 "name": null, 00:22:54.800 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:54.800 "is_configured": false, 00:22:54.800 "data_offset": 2048, 00:22:54.800 "data_size": 63488 00:22:54.800 }, 00:22:54.800 { 00:22:54.800 "name": null, 00:22:54.800 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:54.800 "is_configured": false, 00:22:54.800 "data_offset": 2048, 00:22:54.800 "data_size": 63488 00:22:54.800 } 00:22:54.800 ] 00:22:54.800 }' 00:22:54.800 00:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:54.800 00:49:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.368 00:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:22:55.368 00:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:55.368 [2024-07-25 00:49:17.973522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:55.368 [2024-07-25 00:49:17.973769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:55.368 [2024-07-25 00:49:17.973839] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:22:55.368 [2024-07-25 00:49:17.973933] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:55.368 [2024-07-25 00:49:17.974465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:55.368 [2024-07-25 00:49:17.974607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:55.368 [2024-07-25 
00:49:17.974799] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:55.368 [2024-07-25 00:49:17.974903] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:55.368 pt2 00:22:55.368 00:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:55.627 [2024-07-25 00:49:18.173566] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:55.627 00:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:55.627 00:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:55.627 00:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:55.627 00:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:55.627 00:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:55.627 00:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:55.627 00:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:55.627 00:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:55.627 00:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:55.627 00:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:55.627 00:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.627 00:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.886 00:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:55.886 "name": "raid_bdev1", 00:22:55.886 "uuid": "7e0f5089-0098-49eb-8785-1ad594dc0552", 00:22:55.886 "strip_size_kb": 0, 00:22:55.886 "state": "configuring", 00:22:55.886 "raid_level": "raid1", 00:22:55.886 "superblock": true, 00:22:55.886 "num_base_bdevs": 3, 00:22:55.886 "num_base_bdevs_discovered": 1, 00:22:55.886 "num_base_bdevs_operational": 3, 00:22:55.886 "base_bdevs_list": [ 00:22:55.886 { 00:22:55.886 "name": "pt1", 00:22:55.886 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:55.886 "is_configured": true, 00:22:55.886 "data_offset": 2048, 00:22:55.886 "data_size": 63488 00:22:55.886 }, 00:22:55.886 { 00:22:55.886 "name": null, 00:22:55.886 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:55.886 "is_configured": false, 00:22:55.886 "data_offset": 2048, 00:22:55.886 "data_size": 63488 00:22:55.886 }, 00:22:55.886 { 00:22:55.886 "name": null, 00:22:55.886 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:55.886 "is_configured": false, 00:22:55.886 "data_offset": 2048, 00:22:55.886 "data_size": 63488 00:22:55.886 } 00:22:55.886 ] 00:22:55.886 }' 00:22:55.886 00:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:55.886 00:49:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.453 00:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:22:56.453 00:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:56.453 00:49:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:56.453 [2024-07-25 00:49:19.057701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:56.453 [2024-07-25 00:49:19.057913] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:56.453 [2024-07-25 00:49:19.057973] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:56.453 [2024-07-25 00:49:19.058065] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:56.453 [2024-07-25 00:49:19.058564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:56.453 [2024-07-25 00:49:19.058719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:56.453 [2024-07-25 00:49:19.058914] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:56.453 [2024-07-25 00:49:19.059015] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:56.453 pt2 00:22:56.453 00:49:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:22:56.453 00:49:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:56.453 00:49:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:56.713 [2024-07-25 00:49:19.289723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:56.713 [2024-07-25 00:49:19.289889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:56.713 [2024-07-25 00:49:19.289943] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:22:56.713 [2024-07-25 00:49:19.290037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:56.713 [2024-07-25 00:49:19.290488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:56.713 [2024-07-25 00:49:19.290647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:56.713 [2024-07-25 00:49:19.290819] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:56.713 [2024-07-25 00:49:19.290911] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:56.713 [2024-07-25 00:49:19.291053] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:22:56.713 [2024-07-25 00:49:19.291198] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:56.713 [2024-07-25 00:49:19.291335] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:56.713 [2024-07-25 00:49:19.291761] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:22:56.713 [2024-07-25 00:49:19.291866] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:22:56.713 [2024-07-25 00:49:19.292067] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:56.713 pt3 00:22:56.713 00:49:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:22:56.713 00:49:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:56.713 00:49:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:56.713 00:49:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:56.713 00:49:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:56.713 00:49:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:56.713 00:49:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:56.713 00:49:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:56.713 00:49:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:56.713 00:49:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:56.713 00:49:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:56.713 00:49:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:56.713 00:49:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.713 00:49:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.972 00:49:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:56.972 "name": "raid_bdev1", 00:22:56.972 "uuid": "7e0f5089-0098-49eb-8785-1ad594dc0552", 00:22:56.972 "strip_size_kb": 0, 00:22:56.972 "state": "online", 00:22:56.972 "raid_level": "raid1", 00:22:56.972 "superblock": true, 00:22:56.972 "num_base_bdevs": 3, 00:22:56.972 "num_base_bdevs_discovered": 3, 00:22:56.972 "num_base_bdevs_operational": 3, 00:22:56.972 "base_bdevs_list": [ 00:22:56.972 { 00:22:56.972 "name": "pt1", 00:22:56.972 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:56.972 "is_configured": true, 00:22:56.972 "data_offset": 2048, 00:22:56.972 "data_size": 63488 00:22:56.972 }, 00:22:56.972 { 00:22:56.972 "name": "pt2", 00:22:56.972 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:56.972 "is_configured": true, 00:22:56.972 "data_offset": 2048, 00:22:56.972 "data_size": 63488 00:22:56.972 }, 00:22:56.972 { 00:22:56.972 "name": "pt3", 00:22:56.972 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:56.972 "is_configured": true, 00:22:56.972 "data_offset": 2048, 00:22:56.972 "data_size": 63488 00:22:56.972 } 00:22:56.972 ] 00:22:56.972 }' 00:22:56.972 00:49:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:56.972 00:49:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.541 00:49:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:22:57.541 00:49:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:57.541 00:49:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:57.541 00:49:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:57.541 00:49:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:57.541 00:49:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:57.541 00:49:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:57.541 00:49:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:57.801 [2024-07-25 00:49:20.306168] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:57.801 00:49:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:57.801 "name": "raid_bdev1", 00:22:57.801 "aliases": [ 00:22:57.801 "7e0f5089-0098-49eb-8785-1ad594dc0552" 00:22:57.801 ], 00:22:57.801 "product_name": "Raid Volume", 00:22:57.801 "block_size": 512, 00:22:57.801 "num_blocks": 63488, 00:22:57.801 "uuid": "7e0f5089-0098-49eb-8785-1ad594dc0552", 00:22:57.801 "assigned_rate_limits": { 00:22:57.801 "rw_ios_per_sec": 0, 00:22:57.801 "rw_mbytes_per_sec": 0, 00:22:57.801 "r_mbytes_per_sec": 0, 00:22:57.801 "w_mbytes_per_sec": 0 00:22:57.801 }, 00:22:57.801 "claimed": false, 00:22:57.801 "zoned": false, 00:22:57.801 "supported_io_types": { 00:22:57.801 "read": true, 00:22:57.801 "write": true, 00:22:57.801 "unmap": false, 00:22:57.801 "flush": false, 00:22:57.801 "reset": true, 00:22:57.801 "nvme_admin": false, 00:22:57.801 "nvme_io": false, 00:22:57.801 "nvme_io_md": false, 00:22:57.801 "write_zeroes": true, 00:22:57.801 "zcopy": false, 00:22:57.801 "get_zone_info": false, 00:22:57.801 "zone_management": false, 00:22:57.801 "zone_append": false, 00:22:57.801 "compare": false, 00:22:57.801 "compare_and_write": false, 00:22:57.801 "abort": false, 00:22:57.801 "seek_hole": false, 00:22:57.801 "seek_data": false, 00:22:57.801 "copy": false, 00:22:57.801 "nvme_iov_md": false 00:22:57.801 }, 00:22:57.801 "memory_domains": [ 00:22:57.801 { 00:22:57.801 "dma_device_id": "system", 00:22:57.801 "dma_device_type": 1 00:22:57.801 }, 00:22:57.801 { 00:22:57.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.801 "dma_device_type": 2 00:22:57.801 }, 00:22:57.801 { 00:22:57.801 "dma_device_id": "system", 00:22:57.801 "dma_device_type": 1 00:22:57.801 }, 00:22:57.801 { 00:22:57.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.801 "dma_device_type": 2 00:22:57.801 }, 00:22:57.801 { 00:22:57.801 "dma_device_id": "system", 00:22:57.801 "dma_device_type": 1 00:22:57.801 }, 00:22:57.801 { 00:22:57.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.801 "dma_device_type": 2 00:22:57.801 } 00:22:57.801 ], 00:22:57.801 "driver_specific": { 00:22:57.801 "raid": { 00:22:57.801 "uuid": "7e0f5089-0098-49eb-8785-1ad594dc0552", 00:22:57.801 "strip_size_kb": 0, 00:22:57.801 "state": "online", 00:22:57.801 "raid_level": "raid1", 00:22:57.801 "superblock": true, 00:22:57.801 "num_base_bdevs": 3, 00:22:57.801 "num_base_bdevs_discovered": 3, 00:22:57.801 "num_base_bdevs_operational": 3, 00:22:57.801 "base_bdevs_list": [ 00:22:57.801 { 00:22:57.801 "name": "pt1", 00:22:57.801 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:57.801 "is_configured": true, 00:22:57.801 "data_offset": 2048, 00:22:57.801 "data_size": 63488 00:22:57.801 }, 00:22:57.801 { 00:22:57.801 "name": "pt2", 00:22:57.801 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:57.801 "is_configured": true, 00:22:57.801 "data_offset": 2048, 00:22:57.801 "data_size": 63488 00:22:57.801 }, 00:22:57.801 { 00:22:57.801 "name": "pt3", 00:22:57.801 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:57.801 "is_configured": true, 00:22:57.801 "data_offset": 2048, 00:22:57.801 "data_size": 63488 00:22:57.801 } 00:22:57.801 ] 00:22:57.801 } 00:22:57.801 } 00:22:57.801 }' 
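(The verification pass that follows re-reads the same JSON with jq; a minimal sketch of those checks is given below, using only the RPCs and filters visible in this log. The expected values, e.g. 512-byte blocks and data_offset 2048 for the 63488-block raid1 volume, come from the dumps above.)

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
# raid-level state: expect "online", raid_level "raid1", 3 of 3 base bdevs discovered
rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
# per-base-bdev properties: each configured passthru bdev should report
# block_size 512 and no metadata (md_size, md_interleave, dif_type all null)
for name in $(rpc bdev_get_bdevs -b raid_bdev1 \
        | jq -r '.[].driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'); do
    rpc bdev_get_bdevs -b "$name" | jq '.[] | {block_size, md_size, md_interleave, dif_type}'
done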
00:22:57.801 00:49:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:57.801 00:49:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:57.801 pt2 00:22:57.801 pt3' 00:22:57.801 00:49:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:57.801 00:49:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:57.801 00:49:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:58.060 00:49:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:58.060 "name": "pt1", 00:22:58.060 "aliases": [ 00:22:58.060 "00000000-0000-0000-0000-000000000001" 00:22:58.060 ], 00:22:58.060 "product_name": "passthru", 00:22:58.060 "block_size": 512, 00:22:58.060 "num_blocks": 65536, 00:22:58.060 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:58.060 "assigned_rate_limits": { 00:22:58.060 "rw_ios_per_sec": 0, 00:22:58.060 "rw_mbytes_per_sec": 0, 00:22:58.060 "r_mbytes_per_sec": 0, 00:22:58.060 "w_mbytes_per_sec": 0 00:22:58.060 }, 00:22:58.060 "claimed": true, 00:22:58.060 "claim_type": "exclusive_write", 00:22:58.060 "zoned": false, 00:22:58.060 "supported_io_types": { 00:22:58.060 "read": true, 00:22:58.060 "write": true, 00:22:58.060 "unmap": true, 00:22:58.060 "flush": true, 00:22:58.060 "reset": true, 00:22:58.060 "nvme_admin": false, 00:22:58.060 "nvme_io": false, 00:22:58.060 "nvme_io_md": false, 00:22:58.060 "write_zeroes": true, 00:22:58.060 "zcopy": true, 00:22:58.060 "get_zone_info": false, 00:22:58.060 "zone_management": false, 00:22:58.060 "zone_append": false, 00:22:58.060 "compare": false, 00:22:58.060 "compare_and_write": false, 00:22:58.060 "abort": true, 00:22:58.060 "seek_hole": false, 00:22:58.060 "seek_data": false, 00:22:58.060 "copy": true, 00:22:58.060 "nvme_iov_md": false 00:22:58.060 }, 00:22:58.060 "memory_domains": [ 00:22:58.060 { 00:22:58.060 "dma_device_id": "system", 00:22:58.060 "dma_device_type": 1 00:22:58.060 }, 00:22:58.060 { 00:22:58.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:58.060 "dma_device_type": 2 00:22:58.060 } 00:22:58.060 ], 00:22:58.060 "driver_specific": { 00:22:58.060 "passthru": { 00:22:58.060 "name": "pt1", 00:22:58.060 "base_bdev_name": "malloc1" 00:22:58.060 } 00:22:58.060 } 00:22:58.060 }' 00:22:58.060 00:49:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:58.060 00:49:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:58.060 00:49:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:58.060 00:49:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:58.320 00:49:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:58.320 00:49:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:58.320 00:49:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:58.320 00:49:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:58.320 00:49:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:58.320 00:49:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:58.320 00:49:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:58.320 00:49:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:58.320 00:49:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:58.320 00:49:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:58.320 00:49:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:58.579 00:49:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:58.579 "name": "pt2", 00:22:58.579 "aliases": [ 00:22:58.579 "00000000-0000-0000-0000-000000000002" 00:22:58.579 ], 00:22:58.579 "product_name": "passthru", 00:22:58.579 "block_size": 512, 00:22:58.579 "num_blocks": 65536, 00:22:58.579 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:58.579 "assigned_rate_limits": { 00:22:58.579 "rw_ios_per_sec": 0, 00:22:58.579 "rw_mbytes_per_sec": 0, 00:22:58.579 "r_mbytes_per_sec": 0, 00:22:58.579 "w_mbytes_per_sec": 0 00:22:58.579 }, 00:22:58.579 "claimed": true, 00:22:58.579 "claim_type": "exclusive_write", 00:22:58.579 "zoned": false, 00:22:58.579 "supported_io_types": { 00:22:58.579 "read": true, 00:22:58.579 "write": true, 00:22:58.579 "unmap": true, 00:22:58.579 "flush": true, 00:22:58.579 "reset": true, 00:22:58.579 "nvme_admin": false, 00:22:58.579 "nvme_io": false, 00:22:58.579 "nvme_io_md": false, 00:22:58.579 "write_zeroes": true, 00:22:58.579 "zcopy": true, 00:22:58.579 "get_zone_info": false, 00:22:58.579 "zone_management": false, 00:22:58.579 "zone_append": false, 00:22:58.579 "compare": false, 00:22:58.579 "compare_and_write": false, 00:22:58.579 "abort": true, 00:22:58.579 "seek_hole": false, 00:22:58.579 "seek_data": false, 00:22:58.579 "copy": true, 00:22:58.579 "nvme_iov_md": false 00:22:58.579 }, 00:22:58.579 "memory_domains": [ 00:22:58.579 { 00:22:58.579 "dma_device_id": "system", 00:22:58.579 "dma_device_type": 1 00:22:58.579 }, 00:22:58.579 { 00:22:58.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:58.579 "dma_device_type": 2 00:22:58.579 } 00:22:58.579 ], 00:22:58.579 "driver_specific": { 00:22:58.579 "passthru": { 00:22:58.579 "name": "pt2", 00:22:58.579 "base_bdev_name": "malloc2" 00:22:58.579 } 00:22:58.579 } 00:22:58.579 }' 00:22:58.579 00:49:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:58.837 00:49:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:58.837 00:49:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:58.837 00:49:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:58.837 00:49:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:58.837 00:49:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:58.837 00:49:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:58.838 00:49:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:59.096 00:49:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:59.096 00:49:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:59.096 00:49:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:59.096 00:49:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- 
# [[ null == null ]] 00:22:59.096 00:49:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:59.096 00:49:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:59.096 00:49:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:59.355 00:49:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:59.355 "name": "pt3", 00:22:59.355 "aliases": [ 00:22:59.355 "00000000-0000-0000-0000-000000000003" 00:22:59.355 ], 00:22:59.355 "product_name": "passthru", 00:22:59.355 "block_size": 512, 00:22:59.355 "num_blocks": 65536, 00:22:59.355 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:59.355 "assigned_rate_limits": { 00:22:59.355 "rw_ios_per_sec": 0, 00:22:59.355 "rw_mbytes_per_sec": 0, 00:22:59.355 "r_mbytes_per_sec": 0, 00:22:59.355 "w_mbytes_per_sec": 0 00:22:59.355 }, 00:22:59.355 "claimed": true, 00:22:59.355 "claim_type": "exclusive_write", 00:22:59.355 "zoned": false, 00:22:59.355 "supported_io_types": { 00:22:59.355 "read": true, 00:22:59.355 "write": true, 00:22:59.355 "unmap": true, 00:22:59.355 "flush": true, 00:22:59.355 "reset": true, 00:22:59.355 "nvme_admin": false, 00:22:59.355 "nvme_io": false, 00:22:59.355 "nvme_io_md": false, 00:22:59.355 "write_zeroes": true, 00:22:59.355 "zcopy": true, 00:22:59.355 "get_zone_info": false, 00:22:59.355 "zone_management": false, 00:22:59.355 "zone_append": false, 00:22:59.355 "compare": false, 00:22:59.355 "compare_and_write": false, 00:22:59.355 "abort": true, 00:22:59.355 "seek_hole": false, 00:22:59.355 "seek_data": false, 00:22:59.355 "copy": true, 00:22:59.355 "nvme_iov_md": false 00:22:59.355 }, 00:22:59.355 "memory_domains": [ 00:22:59.355 { 00:22:59.355 "dma_device_id": "system", 00:22:59.355 "dma_device_type": 1 00:22:59.355 }, 00:22:59.355 { 00:22:59.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:59.355 "dma_device_type": 2 00:22:59.355 } 00:22:59.355 ], 00:22:59.355 "driver_specific": { 00:22:59.355 "passthru": { 00:22:59.355 "name": "pt3", 00:22:59.355 "base_bdev_name": "malloc3" 00:22:59.355 } 00:22:59.355 } 00:22:59.355 }' 00:22:59.355 00:49:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:59.355 00:49:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:59.355 00:49:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:59.355 00:49:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:59.355 00:49:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:59.613 00:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:59.613 00:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:59.613 00:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:59.613 00:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:59.613 00:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:59.613 00:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:59.613 00:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:59.613 00:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:59.613 00:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:22:59.871 [2024-07-25 00:49:22.384496] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:59.871 00:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 7e0f5089-0098-49eb-8785-1ad594dc0552 '!=' 7e0f5089-0098-49eb-8785-1ad594dc0552 ']' 00:22:59.871 00:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:22:59.871 00:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:59.872 00:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:22:59.872 00:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:00.130 [2024-07-25 00:49:22.652391] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:00.130 00:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:00.130 00:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:00.130 00:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:00.130 00:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:00.130 00:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:00.130 00:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:00.130 00:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:00.130 00:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:00.130 00:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:00.130 00:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:00.130 00:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.130 00:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.389 00:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:00.389 "name": "raid_bdev1", 00:23:00.389 "uuid": "7e0f5089-0098-49eb-8785-1ad594dc0552", 00:23:00.389 "strip_size_kb": 0, 00:23:00.389 "state": "online", 00:23:00.389 "raid_level": "raid1", 00:23:00.389 "superblock": true, 00:23:00.389 "num_base_bdevs": 3, 00:23:00.389 "num_base_bdevs_discovered": 2, 00:23:00.389 "num_base_bdevs_operational": 2, 00:23:00.389 "base_bdevs_list": [ 00:23:00.389 { 00:23:00.389 "name": null, 00:23:00.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.389 "is_configured": false, 00:23:00.389 "data_offset": 2048, 00:23:00.389 "data_size": 63488 00:23:00.389 }, 00:23:00.389 { 00:23:00.389 "name": "pt2", 00:23:00.389 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:00.389 "is_configured": true, 00:23:00.389 "data_offset": 2048, 00:23:00.389 "data_size": 63488 00:23:00.389 }, 00:23:00.389 { 00:23:00.389 "name": "pt3", 00:23:00.389 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:00.389 "is_configured": true, 00:23:00.389 "data_offset": 2048, 00:23:00.389 
"data_size": 63488 00:23:00.389 } 00:23:00.389 ] 00:23:00.389 }' 00:23:00.389 00:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:00.389 00:49:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.956 00:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:01.215 [2024-07-25 00:49:23.712535] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:01.215 [2024-07-25 00:49:23.712565] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:01.215 [2024-07-25 00:49:23.712646] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:01.215 [2024-07-25 00:49:23.712704] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:01.215 [2024-07-25 00:49:23.712713] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:23:01.215 00:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.215 00:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:23:01.473 00:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:23:01.473 00:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:23:01.473 00:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:23:01.473 00:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:23:01.473 00:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:01.473 00:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:23:01.473 00:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:23:01.473 00:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:01.732 00:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:23:01.732 00:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:23:01.732 00:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:23:01.732 00:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:23:01.732 00:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:01.990 [2024-07-25 00:49:24.408617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:01.991 [2024-07-25 00:49:24.408687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:01.991 [2024-07-25 00:49:24.408737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:01.991 [2024-07-25 00:49:24.408762] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:01.991 [2024-07-25 00:49:24.411033] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:23:01.991 [2024-07-25 00:49:24.411096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:01.991 [2024-07-25 00:49:24.411207] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:01.991 [2024-07-25 00:49:24.411250] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:01.991 pt2 00:23:01.991 00:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:23:01.991 00:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:01.991 00:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:01.991 00:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:01.991 00:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:01.991 00:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:01.991 00:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:01.991 00:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:01.991 00:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:01.991 00:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:01.991 00:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.991 00:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.250 00:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:02.250 "name": "raid_bdev1", 00:23:02.250 "uuid": "7e0f5089-0098-49eb-8785-1ad594dc0552", 00:23:02.250 "strip_size_kb": 0, 00:23:02.250 "state": "configuring", 00:23:02.250 "raid_level": "raid1", 00:23:02.250 "superblock": true, 00:23:02.250 "num_base_bdevs": 3, 00:23:02.250 "num_base_bdevs_discovered": 1, 00:23:02.250 "num_base_bdevs_operational": 2, 00:23:02.250 "base_bdevs_list": [ 00:23:02.250 { 00:23:02.250 "name": null, 00:23:02.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.250 "is_configured": false, 00:23:02.250 "data_offset": 2048, 00:23:02.250 "data_size": 63488 00:23:02.250 }, 00:23:02.250 { 00:23:02.250 "name": "pt2", 00:23:02.250 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:02.250 "is_configured": true, 00:23:02.250 "data_offset": 2048, 00:23:02.250 "data_size": 63488 00:23:02.250 }, 00:23:02.250 { 00:23:02.250 "name": null, 00:23:02.250 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:02.250 "is_configured": false, 00:23:02.250 "data_offset": 2048, 00:23:02.250 "data_size": 63488 00:23:02.250 } 00:23:02.250 ] 00:23:02.250 }' 00:23:02.250 00:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:02.250 00:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.850 00:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:23:02.850 00:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:23:02.850 00:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:23:02.850 00:49:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:02.850 [2024-07-25 00:49:25.468829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:02.850 [2024-07-25 00:49:25.468904] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:02.850 [2024-07-25 00:49:25.468943] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:02.850 [2024-07-25 00:49:25.468964] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:02.850 [2024-07-25 00:49:25.469399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:02.850 [2024-07-25 00:49:25.469433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:02.850 [2024-07-25 00:49:25.469542] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:02.850 [2024-07-25 00:49:25.469564] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:02.850 [2024-07-25 00:49:25.469662] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:23:02.850 [2024-07-25 00:49:25.469670] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:02.850 [2024-07-25 00:49:25.469773] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:23:02.850 [2024-07-25 00:49:25.470045] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:23:02.850 [2024-07-25 00:49:25.470063] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:23:02.850 [2024-07-25 00:49:25.470187] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:02.850 pt3 00:23:02.850 00:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:02.850 00:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:02.850 00:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:02.850 00:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:02.850 00:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:02.850 00:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:02.850 00:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:02.850 00:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:02.850 00:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:02.850 00:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:02.850 00:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.850 00:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.110 00:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:03.110 "name": "raid_bdev1", 00:23:03.110 "uuid": 
"7e0f5089-0098-49eb-8785-1ad594dc0552", 00:23:03.110 "strip_size_kb": 0, 00:23:03.110 "state": "online", 00:23:03.110 "raid_level": "raid1", 00:23:03.110 "superblock": true, 00:23:03.110 "num_base_bdevs": 3, 00:23:03.110 "num_base_bdevs_discovered": 2, 00:23:03.110 "num_base_bdevs_operational": 2, 00:23:03.110 "base_bdevs_list": [ 00:23:03.110 { 00:23:03.110 "name": null, 00:23:03.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.110 "is_configured": false, 00:23:03.110 "data_offset": 2048, 00:23:03.110 "data_size": 63488 00:23:03.110 }, 00:23:03.110 { 00:23:03.110 "name": "pt2", 00:23:03.110 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:03.110 "is_configured": true, 00:23:03.110 "data_offset": 2048, 00:23:03.110 "data_size": 63488 00:23:03.110 }, 00:23:03.110 { 00:23:03.110 "name": "pt3", 00:23:03.110 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:03.110 "is_configured": true, 00:23:03.110 "data_offset": 2048, 00:23:03.110 "data_size": 63488 00:23:03.110 } 00:23:03.110 ] 00:23:03.110 }' 00:23:03.110 00:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:03.110 00:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.678 00:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:03.937 [2024-07-25 00:49:26.552997] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:03.937 [2024-07-25 00:49:26.553026] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:03.937 [2024-07-25 00:49:26.553083] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:03.937 [2024-07-25 00:49:26.553137] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:03.937 [2024-07-25 00:49:26.553145] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:23:03.937 00:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.937 00:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:23:04.197 00:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:23:04.197 00:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:23:04.197 00:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:23:04.197 00:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:23:04.197 00:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:04.456 00:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:04.715 [2024-07-25 00:49:27.141089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:04.715 [2024-07-25 00:49:27.141155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:04.715 [2024-07-25 00:49:27.141191] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:23:04.715 [2024-07-25 00:49:27.141219] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:04.715 [2024-07-25 00:49:27.143503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:04.715 [2024-07-25 00:49:27.143556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:04.715 [2024-07-25 00:49:27.143645] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:04.715 [2024-07-25 00:49:27.143682] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:04.715 [2024-07-25 00:49:27.143810] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:04.715 [2024-07-25 00:49:27.143819] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:04.715 [2024-07-25 00:49:27.143838] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:23:04.715 [2024-07-25 00:49:27.143948] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:04.715 pt1 00:23:04.715 00:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:23:04.715 00:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:23:04.715 00:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:04.715 00:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:04.715 00:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:04.715 00:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:04.715 00:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:04.715 00:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:04.715 00:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:04.715 00:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:04.715 00:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:04.715 00:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.715 00:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.975 00:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:04.975 "name": "raid_bdev1", 00:23:04.975 "uuid": "7e0f5089-0098-49eb-8785-1ad594dc0552", 00:23:04.975 "strip_size_kb": 0, 00:23:04.975 "state": "configuring", 00:23:04.975 "raid_level": "raid1", 00:23:04.975 "superblock": true, 00:23:04.975 "num_base_bdevs": 3, 00:23:04.975 "num_base_bdevs_discovered": 1, 00:23:04.975 "num_base_bdevs_operational": 2, 00:23:04.975 "base_bdevs_list": [ 00:23:04.975 { 00:23:04.975 "name": null, 00:23:04.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.975 "is_configured": false, 00:23:04.975 "data_offset": 2048, 00:23:04.975 "data_size": 63488 00:23:04.975 }, 00:23:04.975 { 00:23:04.975 "name": "pt2", 00:23:04.975 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:04.975 "is_configured": true, 00:23:04.975 "data_offset": 2048, 
00:23:04.975 "data_size": 63488 00:23:04.975 }, 00:23:04.975 { 00:23:04.975 "name": null, 00:23:04.975 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:04.975 "is_configured": false, 00:23:04.975 "data_offset": 2048, 00:23:04.975 "data_size": 63488 00:23:04.975 } 00:23:04.975 ] 00:23:04.975 }' 00:23:04.975 00:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:04.975 00:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:05.544 00:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:23:05.544 00:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:05.544 00:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:23:05.544 00:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:05.804 [2024-07-25 00:49:28.405296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:05.804 [2024-07-25 00:49:28.405399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:05.804 [2024-07-25 00:49:28.405433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:23:05.804 [2024-07-25 00:49:28.405459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:05.804 [2024-07-25 00:49:28.405911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:05.804 [2024-07-25 00:49:28.405953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:05.804 [2024-07-25 00:49:28.406061] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:05.804 [2024-07-25 00:49:28.406082] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:05.804 [2024-07-25 00:49:28.406188] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:23:05.804 [2024-07-25 00:49:28.406197] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:05.804 [2024-07-25 00:49:28.406317] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:23:05.804 [2024-07-25 00:49:28.406600] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:23:05.804 [2024-07-25 00:49:28.406619] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:23:05.804 [2024-07-25 00:49:28.406749] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:05.804 pt3 00:23:05.804 00:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:05.804 00:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:05.804 00:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:05.804 00:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:05.804 00:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:05.804 00:49:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:05.804 00:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:05.804 00:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:05.804 00:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:05.804 00:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:05.804 00:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.804 00:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.064 00:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:06.064 "name": "raid_bdev1", 00:23:06.064 "uuid": "7e0f5089-0098-49eb-8785-1ad594dc0552", 00:23:06.064 "strip_size_kb": 0, 00:23:06.064 "state": "online", 00:23:06.064 "raid_level": "raid1", 00:23:06.064 "superblock": true, 00:23:06.064 "num_base_bdevs": 3, 00:23:06.064 "num_base_bdevs_discovered": 2, 00:23:06.064 "num_base_bdevs_operational": 2, 00:23:06.064 "base_bdevs_list": [ 00:23:06.064 { 00:23:06.064 "name": null, 00:23:06.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:06.064 "is_configured": false, 00:23:06.064 "data_offset": 2048, 00:23:06.064 "data_size": 63488 00:23:06.064 }, 00:23:06.064 { 00:23:06.064 "name": "pt2", 00:23:06.064 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:06.064 "is_configured": true, 00:23:06.064 "data_offset": 2048, 00:23:06.064 "data_size": 63488 00:23:06.064 }, 00:23:06.064 { 00:23:06.064 "name": "pt3", 00:23:06.064 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:06.064 "is_configured": true, 00:23:06.064 "data_offset": 2048, 00:23:06.064 "data_size": 63488 00:23:06.064 } 00:23:06.064 ] 00:23:06.064 }' 00:23:06.064 00:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:06.064 00:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.633 00:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:23:06.633 00:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:06.893 00:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:23:06.893 00:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:06.893 00:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:23:07.153 [2024-07-25 00:49:29.709718] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:07.153 00:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 7e0f5089-0098-49eb-8785-1ad594dc0552 '!=' 7e0f5089-0098-49eb-8785-1ad594dc0552 ']' 00:23:07.153 00:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 133808 00:23:07.153 00:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 133808 ']' 00:23:07.153 00:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 133808 00:23:07.153 00:49:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@953 -- # uname 00:23:07.153 00:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:07.153 00:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 133808 00:23:07.153 00:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:07.153 00:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:07.153 killing process with pid 133808 00:23:07.153 00:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 133808' 00:23:07.153 00:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 133808 00:23:07.153 [2024-07-25 00:49:29.755068] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:07.153 [2024-07-25 00:49:29.755131] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:07.153 00:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 133808 00:23:07.153 [2024-07-25 00:49:29.755185] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:07.153 [2024-07-25 00:49:29.755194] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:23:07.412 [2024-07-25 00:49:30.056953] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:08.791 00:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:23:08.791 00:23:08.791 real 0m21.564s 00:23:08.791 user 0m38.400s 00:23:08.791 sys 0m3.398s 00:23:08.791 00:49:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:08.791 ************************************ 00:23:08.791 END TEST raid_superblock_test 00:23:08.791 ************************************ 00:23:08.791 00:49:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.791 00:49:31 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:23:08.791 00:49:31 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:23:08.791 00:49:31 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:08.791 00:49:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:09.051 ************************************ 00:23:09.051 START TEST raid_read_error_test 00:23:09.051 ************************************ 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 read 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:09.051 00:49:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.6N18dy66ZW 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=134538 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 134538 /var/tmp/spdk-raid.sock 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 134538 ']' 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:09.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:09.051 00:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.051 [2024-07-25 00:49:31.528383] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
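The trace above shows the read variant of raid_io_error_test getting set up: the helper builds the BaseBdev1..BaseBdev3 name list, allocates a bdevperf log file under /raidtest, and starts bdevperf idle against /var/tmp/spdk-raid.sock before any bdevs exist. A minimal sketch of that launch sequence, assuming bdevperf output is captured into the log file (the redirect itself is not visible in the trace), is:

    # Recap of the setup traced above; paths and flags are copied from the trace,
    # the redirect into $bdevperf_log is an assumption.
    base_bdevs=()
    for ((i = 1; i <= 3; i++)); do base_bdevs+=("BaseBdev$i"); done

    bdevperf_log=$(mktemp -p /raidtest)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid \
        > "$bdevperf_log" &
    raid_pid=$!
    # -z keeps bdevperf idle until perform_tests is sent over RPC, so the RAID
    # stack can be assembled first; -f lets it keep running across injected failures.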
00:23:09.051 [2024-07-25 00:49:31.528531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134538 ] 00:23:09.051 [2024-07-25 00:49:31.684844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.310 [2024-07-25 00:49:31.861919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.569 [2024-07-25 00:49:32.052008] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:09.828 00:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:09.828 00:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:23:09.828 00:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:09.828 00:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:10.086 BaseBdev1_malloc 00:23:10.086 00:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:23:10.346 true 00:23:10.346 00:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:10.605 [2024-07-25 00:49:33.088261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:10.605 [2024-07-25 00:49:33.088365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:10.605 [2024-07-25 00:49:33.088404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:23:10.605 [2024-07-25 00:49:33.088425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:10.605 [2024-07-25 00:49:33.090730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:10.605 [2024-07-25 00:49:33.090799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:10.605 BaseBdev1 00:23:10.605 00:49:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:10.605 00:49:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:10.869 BaseBdev2_malloc 00:23:10.869 00:49:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:23:11.129 true 00:23:11.129 00:49:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:11.129 [2024-07-25 00:49:33.719374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:23:11.129 [2024-07-25 00:49:33.719485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:11.129 [2024-07-25 00:49:33.719520] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:11.129 [2024-07-25 00:49:33.719539] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:11.129 [2024-07-25 00:49:33.721710] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:11.129 [2024-07-25 00:49:33.721774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:11.129 BaseBdev2 00:23:11.129 00:49:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:11.129 00:49:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:11.388 BaseBdev3_malloc 00:23:11.388 00:49:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:23:11.646 true 00:23:11.646 00:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:11.646 [2024-07-25 00:49:34.263819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:11.646 [2024-07-25 00:49:34.263900] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:11.646 [2024-07-25 00:49:34.263932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:23:11.646 [2024-07-25 00:49:34.263956] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:11.646 [2024-07-25 00:49:34.266171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:11.646 [2024-07-25 00:49:34.266222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:11.646 BaseBdev3 00:23:11.646 00:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:23:11.905 [2024-07-25 00:49:34.511908] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:11.905 [2024-07-25 00:49:34.513827] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:11.905 [2024-07-25 00:49:34.513900] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:11.905 [2024-07-25 00:49:34.514116] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:23:11.905 [2024-07-25 00:49:34.514134] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:11.905 [2024-07-25 00:49:34.514256] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:23:11.905 [2024-07-25 00:49:34.514576] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:23:11.905 [2024-07-25 00:49:34.514593] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:23:11.905 [2024-07-25 00:49:34.514723] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:11.905 00:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:11.905 00:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:11.905 00:49:34 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:11.905 00:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:11.905 00:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:11.905 00:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:11.905 00:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:11.905 00:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:11.905 00:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:11.905 00:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:11.905 00:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.905 00:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:12.164 00:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:12.164 "name": "raid_bdev1", 00:23:12.164 "uuid": "1e63ec3e-f978-4e36-8782-a4355f340da7", 00:23:12.164 "strip_size_kb": 0, 00:23:12.164 "state": "online", 00:23:12.164 "raid_level": "raid1", 00:23:12.164 "superblock": true, 00:23:12.164 "num_base_bdevs": 3, 00:23:12.164 "num_base_bdevs_discovered": 3, 00:23:12.164 "num_base_bdevs_operational": 3, 00:23:12.164 "base_bdevs_list": [ 00:23:12.164 { 00:23:12.164 "name": "BaseBdev1", 00:23:12.164 "uuid": "4e0fb37f-20e0-50fd-bee7-1c61fe35d69e", 00:23:12.164 "is_configured": true, 00:23:12.164 "data_offset": 2048, 00:23:12.164 "data_size": 63488 00:23:12.164 }, 00:23:12.164 { 00:23:12.164 "name": "BaseBdev2", 00:23:12.164 "uuid": "2473a8e1-46d6-585f-8489-1e1a27a27d7a", 00:23:12.164 "is_configured": true, 00:23:12.164 "data_offset": 2048, 00:23:12.164 "data_size": 63488 00:23:12.164 }, 00:23:12.164 { 00:23:12.164 "name": "BaseBdev3", 00:23:12.164 "uuid": "fe8235b2-8cc3-5dde-a429-43081e6582ef", 00:23:12.164 "is_configured": true, 00:23:12.164 "data_offset": 2048, 00:23:12.164 "data_size": 63488 00:23:12.164 } 00:23:12.164 ] 00:23:12.164 }' 00:23:12.164 00:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:12.164 00:49:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.731 00:49:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:12.731 00:49:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:23:12.731 [2024-07-25 00:49:35.353179] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:13.666 00:49:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:23:13.925 00:49:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:23:13.925 00:49:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:23:13.925 00:49:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:23:13.925 00:49:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # 
expected_num_base_bdevs=3 00:23:13.925 00:49:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:13.925 00:49:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:13.925 00:49:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:13.925 00:49:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:13.925 00:49:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:13.925 00:49:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:13.925 00:49:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:13.925 00:49:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:13.925 00:49:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:13.925 00:49:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:13.925 00:49:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.925 00:49:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.183 00:49:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:14.183 "name": "raid_bdev1", 00:23:14.183 "uuid": "1e63ec3e-f978-4e36-8782-a4355f340da7", 00:23:14.183 "strip_size_kb": 0, 00:23:14.183 "state": "online", 00:23:14.183 "raid_level": "raid1", 00:23:14.183 "superblock": true, 00:23:14.183 "num_base_bdevs": 3, 00:23:14.183 "num_base_bdevs_discovered": 3, 00:23:14.183 "num_base_bdevs_operational": 3, 00:23:14.183 "base_bdevs_list": [ 00:23:14.183 { 00:23:14.183 "name": "BaseBdev1", 00:23:14.183 "uuid": "4e0fb37f-20e0-50fd-bee7-1c61fe35d69e", 00:23:14.183 "is_configured": true, 00:23:14.183 "data_offset": 2048, 00:23:14.183 "data_size": 63488 00:23:14.183 }, 00:23:14.183 { 00:23:14.183 "name": "BaseBdev2", 00:23:14.183 "uuid": "2473a8e1-46d6-585f-8489-1e1a27a27d7a", 00:23:14.183 "is_configured": true, 00:23:14.183 "data_offset": 2048, 00:23:14.183 "data_size": 63488 00:23:14.183 }, 00:23:14.183 { 00:23:14.183 "name": "BaseBdev3", 00:23:14.183 "uuid": "fe8235b2-8cc3-5dde-a429-43081e6582ef", 00:23:14.183 "is_configured": true, 00:23:14.183 "data_offset": 2048, 00:23:14.183 "data_size": 63488 00:23:14.183 } 00:23:14.183 ] 00:23:14.183 }' 00:23:14.183 00:49:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:14.183 00:49:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:14.750 00:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:15.009 [2024-07-25 00:49:37.505211] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:15.009 [2024-07-25 00:49:37.505261] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:15.009 [2024-07-25 00:49:37.507759] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:15.009 [2024-07-25 00:49:37.507808] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:15.009 [2024-07-25 00:49:37.507890] bdev_raid.c: 
463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:15.009 [2024-07-25 00:49:37.507898] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:23:15.009 0 00:23:15.009 00:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 134538 00:23:15.009 00:49:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 134538 ']' 00:23:15.009 00:49:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 134538 00:23:15.009 00:49:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:23:15.009 00:49:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:15.009 00:49:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 134538 00:23:15.009 00:49:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:15.009 00:49:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:15.009 00:49:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 134538' 00:23:15.009 killing process with pid 134538 00:23:15.009 00:49:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 134538 00:23:15.009 [2024-07-25 00:49:37.552558] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:15.009 00:49:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 134538 00:23:15.268 [2024-07-25 00:49:37.757824] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:16.648 00:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.6N18dy66ZW 00:23:16.648 00:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:23:16.648 00:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:23:16.648 ************************************ 00:23:16.648 END TEST raid_read_error_test 00:23:16.648 00:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:23:16.648 00:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:23:16.648 00:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:16.648 00:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:23:16.648 00:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:23:16.648 00:23:16.648 real 0m7.557s 00:23:16.648 user 0m11.153s 00:23:16.648 sys 0m0.995s 00:23:16.648 00:49:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:16.648 00:49:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:16.648 ************************************ 00:23:16.648 00:49:39 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:23:16.648 00:49:39 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:23:16.648 00:49:39 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:16.648 00:49:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:16.648 ************************************ 00:23:16.648 START TEST raid_write_error_test 00:23:16.648 ************************************ 00:23:16.648 00:49:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 write 00:23:16.648 00:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:23:16.648 00:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:23:16.648 00:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:23:16.649 00:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:23:16.649 00:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:16.649 00:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:23:16.649 00:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:16.649 00:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:16.649 00:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:23:16.649 00:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:16.649 00:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:16.649 00:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:23:16.649 00:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:16.649 00:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:16.649 00:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:16.649 00:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:23:16.649 00:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:23:16.649 00:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:23:16.649 00:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:23:16.649 00:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:23:16.649 00:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:23:16.649 00:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:23:16.649 00:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:23:16.649 00:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:23:16.649 00:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.j2bbn3blH5 00:23:16.649 00:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=134740 00:23:16.649 00:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:16.649 00:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 134740 /var/tmp/spdk-raid.sock 00:23:16.649 00:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 134740 ']' 00:23:16.649 00:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:16.649 00:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 
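The waitforlisten call traced here blocks until the freshly started bdevperf process answers on the RPC socket. The real helper in autotest_common.sh does more bookkeeping; a simplified stand-in that conveys the idea, assuming rpc_get_methods as the liveness probe, is:

    # Simplified stand-in for waitforlisten: poll the RPC socket until the process
    # responds, give up after max_retries attempts or if the process dies.
    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk-raid.sock}
        local max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1    # process already gone
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
                &> /dev/null && return 0
            sleep 0.1
        done
        return 1
    }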
00:23:16.649 00:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:16.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:16.649 00:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:16.649 00:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:16.649 [2024-07-25 00:49:39.177419] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:23:16.649 [2024-07-25 00:49:39.177641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134740 ] 00:23:16.908 [2024-07-25 00:49:39.359435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.908 [2024-07-25 00:49:39.530698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.167 [2024-07-25 00:49:39.714053] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:17.736 00:49:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:17.736 00:49:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:23:17.736 00:49:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:17.736 00:49:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:17.736 BaseBdev1_malloc 00:23:17.736 00:49:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:23:17.995 true 00:23:17.995 00:49:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:18.255 [2024-07-25 00:49:40.713471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:18.255 [2024-07-25 00:49:40.713554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:18.255 [2024-07-25 00:49:40.713589] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:23:18.255 [2024-07-25 00:49:40.713607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:18.255 [2024-07-25 00:49:40.715845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:18.255 [2024-07-25 00:49:40.715895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:18.255 BaseBdev1 00:23:18.255 00:49:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:18.255 00:49:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:18.514 BaseBdev2_malloc 00:23:18.514 00:49:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:23:18.514 true 
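Each base device in these error tests is a three-layer stack: a malloc bdev, wrapped in an error bdev (exposed as EE_<malloc name>), wrapped in a passthru bdev that the RAID consumes, which is what lets read or write failures be injected underneath the array later. A compact recap of the RPC sequence being traced for the three bases, ending with the raid1 assembly (every command appears in the trace; only the loop form is added here), is:

    # Per-base stack: malloc -> error (EE_*) -> passthru, then assemble raid1 on top.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for name in BaseBdev1 BaseBdev2 BaseBdev3; do
        $RPC bdev_malloc_create 32 512 -b "${name}_malloc"          # 32 MB, 512-byte blocks
        $RPC bdev_error_create "${name}_malloc"                     # exposes EE_${name}_malloc
        $RPC bdev_passthru_create -b "EE_${name}_malloc" -p "$name"
    done
    $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s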
00:23:18.514 00:49:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:18.773 [2024-07-25 00:49:41.381343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:23:18.773 [2024-07-25 00:49:41.381451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:18.773 [2024-07-25 00:49:41.381488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:18.773 [2024-07-25 00:49:41.381508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:18.773 [2024-07-25 00:49:41.383686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:18.773 [2024-07-25 00:49:41.383969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:18.773 BaseBdev2 00:23:18.773 00:49:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:18.773 00:49:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:19.032 BaseBdev3_malloc 00:23:19.032 00:49:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:23:19.291 true 00:23:19.291 00:49:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:19.551 [2024-07-25 00:49:42.047328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:19.551 [2024-07-25 00:49:42.047407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:19.551 [2024-07-25 00:49:42.047438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:23:19.551 [2024-07-25 00:49:42.047460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:19.551 [2024-07-25 00:49:42.049587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:19.551 [2024-07-25 00:49:42.049638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:19.551 BaseBdev3 00:23:19.551 00:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:23:19.810 [2024-07-25 00:49:42.219420] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:19.810 [2024-07-25 00:49:42.221231] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:19.810 [2024-07-25 00:49:42.221305] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:19.810 [2024-07-25 00:49:42.221505] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:23:19.810 [2024-07-25 00:49:42.221515] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:19.810 [2024-07-25 00:49:42.221628] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:23:19.810 [2024-07-25 00:49:42.221937] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:23:19.810 [2024-07-25 00:49:42.221956] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:23:19.810 [2024-07-25 00:49:42.222091] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:19.810 00:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:19.810 00:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:19.810 00:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:19.810 00:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:19.810 00:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:19.810 00:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:19.810 00:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:19.810 00:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:19.810 00:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:19.810 00:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:19.810 00:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.810 00:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.810 00:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:19.810 "name": "raid_bdev1", 00:23:19.810 "uuid": "46499d56-52e5-48ad-af5a-df6c7ffb654d", 00:23:19.810 "strip_size_kb": 0, 00:23:19.810 "state": "online", 00:23:19.810 "raid_level": "raid1", 00:23:19.810 "superblock": true, 00:23:19.810 "num_base_bdevs": 3, 00:23:19.810 "num_base_bdevs_discovered": 3, 00:23:19.810 "num_base_bdevs_operational": 3, 00:23:19.810 "base_bdevs_list": [ 00:23:19.810 { 00:23:19.810 "name": "BaseBdev1", 00:23:19.811 "uuid": "c039dd81-a593-55a2-84f7-ea33191e0a28", 00:23:19.811 "is_configured": true, 00:23:19.811 "data_offset": 2048, 00:23:19.811 "data_size": 63488 00:23:19.811 }, 00:23:19.811 { 00:23:19.811 "name": "BaseBdev2", 00:23:19.811 "uuid": "e981ab15-7b6f-5f04-b30c-fe29cc3418e8", 00:23:19.811 "is_configured": true, 00:23:19.811 "data_offset": 2048, 00:23:19.811 "data_size": 63488 00:23:19.811 }, 00:23:19.811 { 00:23:19.811 "name": "BaseBdev3", 00:23:19.811 "uuid": "f71ba556-10d5-5f70-be88-40f436e02381", 00:23:19.811 "is_configured": true, 00:23:19.811 "data_offset": 2048, 00:23:19.811 "data_size": 63488 00:23:19.811 } 00:23:19.811 ] 00:23:19.811 }' 00:23:19.811 00:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:19.811 00:49:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.379 00:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:23:20.379 00:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:20.379 [2024-07-25 00:49:43.012733] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:23:21.314 00:49:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:23:21.572 [2024-07-25 00:49:44.118756] bdev_raid.c:2247:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:23:21.572 [2024-07-25 00:49:44.118857] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:21.572 [2024-07-25 00:49:44.119065] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:23:21.572 00:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:23:21.572 00:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:23:21.572 00:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:23:21.572 00:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=2 00:23:21.572 00:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:21.573 00:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:21.573 00:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:21.573 00:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:21.573 00:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:21.573 00:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:21.573 00:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:21.573 00:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:21.573 00:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:21.573 00:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:21.573 00:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.573 00:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.831 00:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:21.831 "name": "raid_bdev1", 00:23:21.831 "uuid": "46499d56-52e5-48ad-af5a-df6c7ffb654d", 00:23:21.831 "strip_size_kb": 0, 00:23:21.831 "state": "online", 00:23:21.831 "raid_level": "raid1", 00:23:21.831 "superblock": true, 00:23:21.831 "num_base_bdevs": 3, 00:23:21.831 "num_base_bdevs_discovered": 2, 00:23:21.831 "num_base_bdevs_operational": 2, 00:23:21.831 "base_bdevs_list": [ 00:23:21.831 { 00:23:21.831 "name": null, 00:23:21.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.831 "is_configured": false, 00:23:21.831 "data_offset": 2048, 00:23:21.831 "data_size": 63488 00:23:21.831 }, 00:23:21.831 { 00:23:21.831 "name": "BaseBdev2", 00:23:21.831 "uuid": "e981ab15-7b6f-5f04-b30c-fe29cc3418e8", 00:23:21.831 "is_configured": true, 00:23:21.831 "data_offset": 2048, 00:23:21.831 "data_size": 63488 00:23:21.831 }, 00:23:21.831 { 00:23:21.831 "name": "BaseBdev3", 00:23:21.831 "uuid": 
"f71ba556-10d5-5f70-be88-40f436e02381", 00:23:21.831 "is_configured": true, 00:23:21.831 "data_offset": 2048, 00:23:21.831 "data_size": 63488 00:23:21.831 } 00:23:21.831 ] 00:23:21.831 }' 00:23:21.831 00:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:21.831 00:49:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.438 00:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:22.720 [2024-07-25 00:49:45.221823] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:22.720 [2024-07-25 00:49:45.221862] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:22.720 [2024-07-25 00:49:45.224308] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:22.720 [2024-07-25 00:49:45.224349] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:22.720 [2024-07-25 00:49:45.224412] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:22.720 [2024-07-25 00:49:45.224422] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:23:22.720 0 00:23:22.720 00:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 134740 00:23:22.720 00:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 134740 ']' 00:23:22.720 00:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 134740 00:23:22.720 00:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:23:22.720 00:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:22.720 00:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 134740 00:23:22.720 killing process with pid 134740 00:23:22.720 00:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:22.720 00:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:22.720 00:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 134740' 00:23:22.720 00:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 134740 00:23:22.720 [2024-07-25 00:49:45.271914] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:22.720 00:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 134740 00:23:22.979 [2024-07-25 00:49:45.474638] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:24.358 00:49:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.j2bbn3blH5 00:23:24.358 00:49:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:23:24.358 00:49:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:23:24.358 ************************************ 00:23:24.358 END TEST raid_write_error_test 00:23:24.358 ************************************ 00:23:24.358 00:49:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:23:24.358 00:49:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:23:24.358 00:49:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:24.358 00:49:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:23:24.358 00:49:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:23:24.358 00:23:24.358 real 0m7.644s 00:23:24.358 user 0m11.233s 00:23:24.358 sys 0m1.067s 00:23:24.358 00:49:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:24.358 00:49:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:24.358 00:49:46 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:23:24.358 00:49:46 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:23:24.358 00:49:46 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:23:24.358 00:49:46 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:23:24.358 00:49:46 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:24.358 00:49:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:24.358 ************************************ 00:23:24.358 START TEST raid_state_function_test 00:23:24.358 ************************************ 00:23:24.358 00:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 false 00:23:24.358 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:23:24.358 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:23:24.358 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:23:24.358 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:23:24.358 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:23:24.358 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:24.358 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:23:24.358 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:24.358 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:24.358 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:23:24.358 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:24.358 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:24.358 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:23:24.358 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:24.359 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:24.359 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:23:24.359 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:24.359 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:24.359 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:24.359 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local 
base_bdevs 00:23:24.359 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:23:24.359 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:23:24.359 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:23:24.359 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:23:24.359 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:23:24.359 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:23:24.359 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:23:24.359 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:23:24.359 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:23:24.359 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=134939 00:23:24.359 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 134939' 00:23:24.359 Process raid pid: 134939 00:23:24.359 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 134939 /var/tmp/spdk-raid.sock 00:23:24.359 00:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:24.359 00:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 134939 ']' 00:23:24.359 00:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:24.359 00:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:24.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:24.359 00:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:24.359 00:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:24.359 00:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:24.359 [2024-07-25 00:49:46.896270] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
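(Editor's note, not part of the captured log.) raid_state_function_test exercises RPC-driven state transitions rather than I/O, so it drives the lightweight bdev_svc app instead of bdevperf. A sketch of the launch traced above, assuming the same repository layout:

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock
    # The test then creates Existed_Raid (raid0, 64 KiB strip size, no superblock)
    # on top of four base bdevs that do not exist yet, and expects
    # bdev_raid_get_bdevs to report the array in the "configuring" state until
    # BaseBdev1..BaseBdev4 have all been registered.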
00:23:24.359 [2024-07-25 00:49:46.896512] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.618 [2024-07-25 00:49:47.065229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.618 [2024-07-25 00:49:47.251491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.876 [2024-07-25 00:49:47.439967] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:25.443 00:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:25.443 00:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:23:25.443 00:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:25.443 [2024-07-25 00:49:48.051087] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:25.443 [2024-07-25 00:49:48.051172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:25.443 [2024-07-25 00:49:48.051182] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:25.444 [2024-07-25 00:49:48.051205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:25.444 [2024-07-25 00:49:48.051212] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:25.444 [2024-07-25 00:49:48.051227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:25.444 [2024-07-25 00:49:48.051234] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:25.444 [2024-07-25 00:49:48.051254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:25.444 00:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:25.444 00:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:25.444 00:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:25.444 00:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:25.444 00:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:25.444 00:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:25.444 00:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:25.444 00:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:25.444 00:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:25.444 00:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:25.444 00:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.444 00:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:25.702 00:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:25.702 "name": "Existed_Raid", 00:23:25.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.702 "strip_size_kb": 64, 00:23:25.702 "state": "configuring", 00:23:25.702 "raid_level": "raid0", 00:23:25.702 "superblock": false, 00:23:25.702 "num_base_bdevs": 4, 00:23:25.702 "num_base_bdevs_discovered": 0, 00:23:25.702 "num_base_bdevs_operational": 4, 00:23:25.702 "base_bdevs_list": [ 00:23:25.702 { 00:23:25.702 "name": "BaseBdev1", 00:23:25.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.702 "is_configured": false, 00:23:25.702 "data_offset": 0, 00:23:25.702 "data_size": 0 00:23:25.702 }, 00:23:25.702 { 00:23:25.702 "name": "BaseBdev2", 00:23:25.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.702 "is_configured": false, 00:23:25.702 "data_offset": 0, 00:23:25.702 "data_size": 0 00:23:25.702 }, 00:23:25.702 { 00:23:25.702 "name": "BaseBdev3", 00:23:25.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.702 "is_configured": false, 00:23:25.702 "data_offset": 0, 00:23:25.702 "data_size": 0 00:23:25.702 }, 00:23:25.702 { 00:23:25.702 "name": "BaseBdev4", 00:23:25.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.702 "is_configured": false, 00:23:25.702 "data_offset": 0, 00:23:25.702 "data_size": 0 00:23:25.702 } 00:23:25.702 ] 00:23:25.702 }' 00:23:25.702 00:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:25.702 00:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.274 00:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:26.532 [2024-07-25 00:49:49.035147] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:26.532 [2024-07-25 00:49:49.035183] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:23:26.532 00:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:26.791 [2024-07-25 00:49:49.199156] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:26.791 [2024-07-25 00:49:49.199206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:26.791 [2024-07-25 00:49:49.199214] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:26.791 [2024-07-25 00:49:49.199251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:26.791 [2024-07-25 00:49:49.199258] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:26.791 [2024-07-25 00:49:49.199288] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:26.791 [2024-07-25 00:49:49.199295] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:26.791 [2024-07-25 00:49:49.199315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:26.791 00:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:26.791 [2024-07-25 00:49:49.397976] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:26.791 BaseBdev1 00:23:26.791 00:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:23:26.791 00:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:26.791 00:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:26.791 00:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:26.791 00:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:26.791 00:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:26.791 00:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:27.050 00:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:27.309 [ 00:23:27.309 { 00:23:27.309 "name": "BaseBdev1", 00:23:27.309 "aliases": [ 00:23:27.309 "e3094f8a-d017-4f94-a005-da09053aa254" 00:23:27.309 ], 00:23:27.309 "product_name": "Malloc disk", 00:23:27.309 "block_size": 512, 00:23:27.309 "num_blocks": 65536, 00:23:27.309 "uuid": "e3094f8a-d017-4f94-a005-da09053aa254", 00:23:27.309 "assigned_rate_limits": { 00:23:27.309 "rw_ios_per_sec": 0, 00:23:27.309 "rw_mbytes_per_sec": 0, 00:23:27.309 "r_mbytes_per_sec": 0, 00:23:27.309 "w_mbytes_per_sec": 0 00:23:27.309 }, 00:23:27.309 "claimed": true, 00:23:27.309 "claim_type": "exclusive_write", 00:23:27.309 "zoned": false, 00:23:27.309 "supported_io_types": { 00:23:27.309 "read": true, 00:23:27.309 "write": true, 00:23:27.309 "unmap": true, 00:23:27.309 "flush": true, 00:23:27.309 "reset": true, 00:23:27.309 "nvme_admin": false, 00:23:27.309 "nvme_io": false, 00:23:27.309 "nvme_io_md": false, 00:23:27.309 "write_zeroes": true, 00:23:27.309 "zcopy": true, 00:23:27.309 "get_zone_info": false, 00:23:27.309 "zone_management": false, 00:23:27.309 "zone_append": false, 00:23:27.309 "compare": false, 00:23:27.309 "compare_and_write": false, 00:23:27.309 "abort": true, 00:23:27.309 "seek_hole": false, 00:23:27.309 "seek_data": false, 00:23:27.309 "copy": true, 00:23:27.309 "nvme_iov_md": false 00:23:27.309 }, 00:23:27.309 "memory_domains": [ 00:23:27.309 { 00:23:27.309 "dma_device_id": "system", 00:23:27.309 "dma_device_type": 1 00:23:27.309 }, 00:23:27.309 { 00:23:27.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:27.309 "dma_device_type": 2 00:23:27.309 } 00:23:27.309 ], 00:23:27.309 "driver_specific": {} 00:23:27.309 } 00:23:27.309 ] 00:23:27.309 00:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:27.309 00:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:27.309 00:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:27.309 00:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:27.309 00:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid0 00:23:27.309 00:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:27.309 00:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:27.309 00:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:27.309 00:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:27.309 00:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:27.309 00:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:27.309 00:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.309 00:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:27.568 00:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:27.568 "name": "Existed_Raid", 00:23:27.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.568 "strip_size_kb": 64, 00:23:27.568 "state": "configuring", 00:23:27.568 "raid_level": "raid0", 00:23:27.568 "superblock": false, 00:23:27.568 "num_base_bdevs": 4, 00:23:27.568 "num_base_bdevs_discovered": 1, 00:23:27.568 "num_base_bdevs_operational": 4, 00:23:27.568 "base_bdevs_list": [ 00:23:27.568 { 00:23:27.568 "name": "BaseBdev1", 00:23:27.568 "uuid": "e3094f8a-d017-4f94-a005-da09053aa254", 00:23:27.568 "is_configured": true, 00:23:27.568 "data_offset": 0, 00:23:27.568 "data_size": 65536 00:23:27.568 }, 00:23:27.568 { 00:23:27.568 "name": "BaseBdev2", 00:23:27.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.568 "is_configured": false, 00:23:27.568 "data_offset": 0, 00:23:27.568 "data_size": 0 00:23:27.568 }, 00:23:27.568 { 00:23:27.568 "name": "BaseBdev3", 00:23:27.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.568 "is_configured": false, 00:23:27.568 "data_offset": 0, 00:23:27.568 "data_size": 0 00:23:27.568 }, 00:23:27.568 { 00:23:27.568 "name": "BaseBdev4", 00:23:27.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.568 "is_configured": false, 00:23:27.568 "data_offset": 0, 00:23:27.568 "data_size": 0 00:23:27.568 } 00:23:27.568 ] 00:23:27.568 }' 00:23:27.568 00:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:27.568 00:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.134 00:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:28.134 [2024-07-25 00:49:50.766219] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:28.134 [2024-07-25 00:49:50.766269] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:23:28.134 00:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:28.392 [2024-07-25 00:49:51.038294] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:28.392 [2024-07-25 00:49:51.040246] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev2 00:23:28.392 [2024-07-25 00:49:51.040307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:28.392 [2024-07-25 00:49:51.040316] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:28.392 [2024-07-25 00:49:51.040342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:28.392 [2024-07-25 00:49:51.040349] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:28.392 [2024-07-25 00:49:51.040364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:28.651 00:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:23:28.651 00:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:28.651 00:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:28.651 00:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:28.651 00:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:28.651 00:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:28.651 00:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:28.651 00:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:28.651 00:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:28.651 00:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:28.651 00:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:28.651 00:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:28.651 00:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.651 00:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:28.651 00:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:28.651 "name": "Existed_Raid", 00:23:28.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:28.651 "strip_size_kb": 64, 00:23:28.651 "state": "configuring", 00:23:28.651 "raid_level": "raid0", 00:23:28.651 "superblock": false, 00:23:28.651 "num_base_bdevs": 4, 00:23:28.651 "num_base_bdevs_discovered": 1, 00:23:28.651 "num_base_bdevs_operational": 4, 00:23:28.651 "base_bdevs_list": [ 00:23:28.651 { 00:23:28.651 "name": "BaseBdev1", 00:23:28.651 "uuid": "e3094f8a-d017-4f94-a005-da09053aa254", 00:23:28.651 "is_configured": true, 00:23:28.651 "data_offset": 0, 00:23:28.651 "data_size": 65536 00:23:28.651 }, 00:23:28.651 { 00:23:28.651 "name": "BaseBdev2", 00:23:28.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:28.651 "is_configured": false, 00:23:28.651 "data_offset": 0, 00:23:28.651 "data_size": 0 00:23:28.651 }, 00:23:28.651 { 00:23:28.651 "name": "BaseBdev3", 00:23:28.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:28.651 "is_configured": false, 00:23:28.651 "data_offset": 0, 00:23:28.651 "data_size": 0 00:23:28.651 }, 
00:23:28.651 { 00:23:28.651 "name": "BaseBdev4", 00:23:28.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:28.651 "is_configured": false, 00:23:28.651 "data_offset": 0, 00:23:28.651 "data_size": 0 00:23:28.651 } 00:23:28.651 ] 00:23:28.651 }' 00:23:28.651 00:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:28.651 00:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.218 00:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:29.476 [2024-07-25 00:49:51.970611] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:29.476 BaseBdev2 00:23:29.476 00:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:23:29.476 00:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:23:29.476 00:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:29.476 00:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:29.476 00:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:29.476 00:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:29.476 00:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:29.735 00:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:29.994 [ 00:23:29.994 { 00:23:29.994 "name": "BaseBdev2", 00:23:29.994 "aliases": [ 00:23:29.994 "0d45580e-30d7-48de-a04a-dce3b0a25ba1" 00:23:29.994 ], 00:23:29.994 "product_name": "Malloc disk", 00:23:29.994 "block_size": 512, 00:23:29.994 "num_blocks": 65536, 00:23:29.994 "uuid": "0d45580e-30d7-48de-a04a-dce3b0a25ba1", 00:23:29.994 "assigned_rate_limits": { 00:23:29.994 "rw_ios_per_sec": 0, 00:23:29.994 "rw_mbytes_per_sec": 0, 00:23:29.994 "r_mbytes_per_sec": 0, 00:23:29.994 "w_mbytes_per_sec": 0 00:23:29.994 }, 00:23:29.994 "claimed": true, 00:23:29.994 "claim_type": "exclusive_write", 00:23:29.994 "zoned": false, 00:23:29.994 "supported_io_types": { 00:23:29.994 "read": true, 00:23:29.994 "write": true, 00:23:29.994 "unmap": true, 00:23:29.994 "flush": true, 00:23:29.994 "reset": true, 00:23:29.994 "nvme_admin": false, 00:23:29.994 "nvme_io": false, 00:23:29.994 "nvme_io_md": false, 00:23:29.994 "write_zeroes": true, 00:23:29.994 "zcopy": true, 00:23:29.994 "get_zone_info": false, 00:23:29.994 "zone_management": false, 00:23:29.994 "zone_append": false, 00:23:29.994 "compare": false, 00:23:29.994 "compare_and_write": false, 00:23:29.994 "abort": true, 00:23:29.994 "seek_hole": false, 00:23:29.994 "seek_data": false, 00:23:29.994 "copy": true, 00:23:29.994 "nvme_iov_md": false 00:23:29.994 }, 00:23:29.994 "memory_domains": [ 00:23:29.994 { 00:23:29.994 "dma_device_id": "system", 00:23:29.994 "dma_device_type": 1 00:23:29.994 }, 00:23:29.994 { 00:23:29.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:29.994 "dma_device_type": 2 00:23:29.994 } 00:23:29.994 ], 00:23:29.994 "driver_specific": {} 00:23:29.994 } 00:23:29.994 ] 00:23:29.994 00:49:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:29.994 00:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:29.994 00:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:29.994 00:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:29.994 00:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:29.994 00:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:29.994 00:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:29.994 00:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:29.994 00:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:29.994 00:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:29.994 00:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:29.994 00:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:29.994 00:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:29.994 00:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.994 00:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:30.253 00:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:30.253 "name": "Existed_Raid", 00:23:30.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.253 "strip_size_kb": 64, 00:23:30.253 "state": "configuring", 00:23:30.254 "raid_level": "raid0", 00:23:30.254 "superblock": false, 00:23:30.254 "num_base_bdevs": 4, 00:23:30.254 "num_base_bdevs_discovered": 2, 00:23:30.254 "num_base_bdevs_operational": 4, 00:23:30.254 "base_bdevs_list": [ 00:23:30.254 { 00:23:30.254 "name": "BaseBdev1", 00:23:30.254 "uuid": "e3094f8a-d017-4f94-a005-da09053aa254", 00:23:30.254 "is_configured": true, 00:23:30.254 "data_offset": 0, 00:23:30.254 "data_size": 65536 00:23:30.254 }, 00:23:30.254 { 00:23:30.254 "name": "BaseBdev2", 00:23:30.254 "uuid": "0d45580e-30d7-48de-a04a-dce3b0a25ba1", 00:23:30.254 "is_configured": true, 00:23:30.254 "data_offset": 0, 00:23:30.254 "data_size": 65536 00:23:30.254 }, 00:23:30.254 { 00:23:30.254 "name": "BaseBdev3", 00:23:30.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.254 "is_configured": false, 00:23:30.254 "data_offset": 0, 00:23:30.254 "data_size": 0 00:23:30.254 }, 00:23:30.254 { 00:23:30.254 "name": "BaseBdev4", 00:23:30.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.254 "is_configured": false, 00:23:30.254 "data_offset": 0, 00:23:30.254 "data_size": 0 00:23:30.254 } 00:23:30.254 ] 00:23:30.254 }' 00:23:30.254 00:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:30.254 00:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.821 00:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev3 00:23:31.078 [2024-07-25 00:49:53.552790] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:31.078 BaseBdev3 00:23:31.078 00:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:23:31.078 00:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:23:31.078 00:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:31.078 00:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:31.078 00:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:31.078 00:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:31.078 00:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:31.336 00:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:31.336 [ 00:23:31.336 { 00:23:31.336 "name": "BaseBdev3", 00:23:31.336 "aliases": [ 00:23:31.336 "248029b1-8513-4b0c-b62d-5ae293b1489c" 00:23:31.336 ], 00:23:31.336 "product_name": "Malloc disk", 00:23:31.336 "block_size": 512, 00:23:31.336 "num_blocks": 65536, 00:23:31.336 "uuid": "248029b1-8513-4b0c-b62d-5ae293b1489c", 00:23:31.336 "assigned_rate_limits": { 00:23:31.336 "rw_ios_per_sec": 0, 00:23:31.336 "rw_mbytes_per_sec": 0, 00:23:31.336 "r_mbytes_per_sec": 0, 00:23:31.336 "w_mbytes_per_sec": 0 00:23:31.336 }, 00:23:31.336 "claimed": true, 00:23:31.336 "claim_type": "exclusive_write", 00:23:31.336 "zoned": false, 00:23:31.336 "supported_io_types": { 00:23:31.336 "read": true, 00:23:31.336 "write": true, 00:23:31.336 "unmap": true, 00:23:31.336 "flush": true, 00:23:31.336 "reset": true, 00:23:31.336 "nvme_admin": false, 00:23:31.336 "nvme_io": false, 00:23:31.336 "nvme_io_md": false, 00:23:31.336 "write_zeroes": true, 00:23:31.336 "zcopy": true, 00:23:31.336 "get_zone_info": false, 00:23:31.336 "zone_management": false, 00:23:31.336 "zone_append": false, 00:23:31.336 "compare": false, 00:23:31.336 "compare_and_write": false, 00:23:31.336 "abort": true, 00:23:31.336 "seek_hole": false, 00:23:31.336 "seek_data": false, 00:23:31.336 "copy": true, 00:23:31.336 "nvme_iov_md": false 00:23:31.336 }, 00:23:31.336 "memory_domains": [ 00:23:31.336 { 00:23:31.336 "dma_device_id": "system", 00:23:31.336 "dma_device_type": 1 00:23:31.336 }, 00:23:31.336 { 00:23:31.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:31.336 "dma_device_type": 2 00:23:31.336 } 00:23:31.336 ], 00:23:31.336 "driver_specific": {} 00:23:31.336 } 00:23:31.336 ] 00:23:31.336 00:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:31.336 00:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:31.336 00:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:31.336 00:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:31.336 00:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:31.336 00:49:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:31.336 00:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:31.336 00:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:31.336 00:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:31.336 00:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:31.336 00:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:31.336 00:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:31.336 00:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:31.336 00:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.336 00:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:31.594 00:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:31.594 "name": "Existed_Raid", 00:23:31.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.594 "strip_size_kb": 64, 00:23:31.594 "state": "configuring", 00:23:31.594 "raid_level": "raid0", 00:23:31.594 "superblock": false, 00:23:31.594 "num_base_bdevs": 4, 00:23:31.594 "num_base_bdevs_discovered": 3, 00:23:31.594 "num_base_bdevs_operational": 4, 00:23:31.594 "base_bdevs_list": [ 00:23:31.594 { 00:23:31.594 "name": "BaseBdev1", 00:23:31.594 "uuid": "e3094f8a-d017-4f94-a005-da09053aa254", 00:23:31.594 "is_configured": true, 00:23:31.594 "data_offset": 0, 00:23:31.594 "data_size": 65536 00:23:31.594 }, 00:23:31.594 { 00:23:31.594 "name": "BaseBdev2", 00:23:31.594 "uuid": "0d45580e-30d7-48de-a04a-dce3b0a25ba1", 00:23:31.594 "is_configured": true, 00:23:31.594 "data_offset": 0, 00:23:31.594 "data_size": 65536 00:23:31.594 }, 00:23:31.594 { 00:23:31.594 "name": "BaseBdev3", 00:23:31.594 "uuid": "248029b1-8513-4b0c-b62d-5ae293b1489c", 00:23:31.594 "is_configured": true, 00:23:31.594 "data_offset": 0, 00:23:31.594 "data_size": 65536 00:23:31.594 }, 00:23:31.594 { 00:23:31.594 "name": "BaseBdev4", 00:23:31.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.594 "is_configured": false, 00:23:31.594 "data_offset": 0, 00:23:31.594 "data_size": 0 00:23:31.594 } 00:23:31.594 ] 00:23:31.594 }' 00:23:31.594 00:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:31.594 00:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.160 00:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:32.418 [2024-07-25 00:49:55.060262] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:32.418 [2024-07-25 00:49:55.060305] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:23:32.418 [2024-07-25 00:49:55.060328] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:23:32.418 [2024-07-25 00:49:55.060446] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:23:32.418 [2024-07-25 00:49:55.060765] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:23:32.418 [2024-07-25 00:49:55.060786] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:23:32.418 [2024-07-25 00:49:55.061000] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:32.418 BaseBdev4 00:23:32.676 00:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:23:32.676 00:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:23:32.676 00:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:32.676 00:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:32.676 00:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:32.676 00:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:32.676 00:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:32.934 00:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:32.934 [ 00:23:32.934 { 00:23:32.934 "name": "BaseBdev4", 00:23:32.934 "aliases": [ 00:23:32.934 "8b07454a-253e-437d-b55b-2c2532b3e06b" 00:23:32.934 ], 00:23:32.934 "product_name": "Malloc disk", 00:23:32.934 "block_size": 512, 00:23:32.934 "num_blocks": 65536, 00:23:32.934 "uuid": "8b07454a-253e-437d-b55b-2c2532b3e06b", 00:23:32.934 "assigned_rate_limits": { 00:23:32.934 "rw_ios_per_sec": 0, 00:23:32.935 "rw_mbytes_per_sec": 0, 00:23:32.935 "r_mbytes_per_sec": 0, 00:23:32.935 "w_mbytes_per_sec": 0 00:23:32.935 }, 00:23:32.935 "claimed": true, 00:23:32.935 "claim_type": "exclusive_write", 00:23:32.935 "zoned": false, 00:23:32.935 "supported_io_types": { 00:23:32.935 "read": true, 00:23:32.935 "write": true, 00:23:32.935 "unmap": true, 00:23:32.935 "flush": true, 00:23:32.935 "reset": true, 00:23:32.935 "nvme_admin": false, 00:23:32.935 "nvme_io": false, 00:23:32.935 "nvme_io_md": false, 00:23:32.935 "write_zeroes": true, 00:23:32.935 "zcopy": true, 00:23:32.935 "get_zone_info": false, 00:23:32.935 "zone_management": false, 00:23:32.935 "zone_append": false, 00:23:32.935 "compare": false, 00:23:32.935 "compare_and_write": false, 00:23:32.935 "abort": true, 00:23:32.935 "seek_hole": false, 00:23:32.935 "seek_data": false, 00:23:32.935 "copy": true, 00:23:32.935 "nvme_iov_md": false 00:23:32.935 }, 00:23:32.935 "memory_domains": [ 00:23:32.935 { 00:23:32.935 "dma_device_id": "system", 00:23:32.935 "dma_device_type": 1 00:23:32.935 }, 00:23:32.935 { 00:23:32.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:32.935 "dma_device_type": 2 00:23:32.935 } 00:23:32.935 ], 00:23:32.935 "driver_specific": {} 00:23:32.935 } 00:23:32.935 ] 00:23:33.193 00:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:33.194 00:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:33.194 00:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:33.194 00:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 
4 00:23:33.194 00:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:33.194 00:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:33.194 00:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:33.194 00:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:33.194 00:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:33.194 00:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:33.194 00:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:33.194 00:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:33.194 00:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:33.194 00:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.194 00:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:33.194 00:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:33.194 "name": "Existed_Raid", 00:23:33.194 "uuid": "21db692f-8f9d-4401-bb04-8249d0267330", 00:23:33.194 "strip_size_kb": 64, 00:23:33.194 "state": "online", 00:23:33.194 "raid_level": "raid0", 00:23:33.194 "superblock": false, 00:23:33.194 "num_base_bdevs": 4, 00:23:33.194 "num_base_bdevs_discovered": 4, 00:23:33.194 "num_base_bdevs_operational": 4, 00:23:33.194 "base_bdevs_list": [ 00:23:33.194 { 00:23:33.194 "name": "BaseBdev1", 00:23:33.194 "uuid": "e3094f8a-d017-4f94-a005-da09053aa254", 00:23:33.194 "is_configured": true, 00:23:33.194 "data_offset": 0, 00:23:33.194 "data_size": 65536 00:23:33.194 }, 00:23:33.194 { 00:23:33.194 "name": "BaseBdev2", 00:23:33.194 "uuid": "0d45580e-30d7-48de-a04a-dce3b0a25ba1", 00:23:33.194 "is_configured": true, 00:23:33.194 "data_offset": 0, 00:23:33.194 "data_size": 65536 00:23:33.194 }, 00:23:33.194 { 00:23:33.194 "name": "BaseBdev3", 00:23:33.194 "uuid": "248029b1-8513-4b0c-b62d-5ae293b1489c", 00:23:33.194 "is_configured": true, 00:23:33.194 "data_offset": 0, 00:23:33.194 "data_size": 65536 00:23:33.194 }, 00:23:33.194 { 00:23:33.194 "name": "BaseBdev4", 00:23:33.194 "uuid": "8b07454a-253e-437d-b55b-2c2532b3e06b", 00:23:33.194 "is_configured": true, 00:23:33.194 "data_offset": 0, 00:23:33.194 "data_size": 65536 00:23:33.194 } 00:23:33.194 ] 00:23:33.194 }' 00:23:33.194 00:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:33.194 00:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.131 00:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:23:34.131 00:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:34.131 00:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:34.131 00:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:34.131 00:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:34.131 00:49:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:34.131 00:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:34.131 00:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:34.131 [2024-07-25 00:49:56.640818] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:34.131 00:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:34.131 "name": "Existed_Raid", 00:23:34.131 "aliases": [ 00:23:34.131 "21db692f-8f9d-4401-bb04-8249d0267330" 00:23:34.131 ], 00:23:34.131 "product_name": "Raid Volume", 00:23:34.131 "block_size": 512, 00:23:34.131 "num_blocks": 262144, 00:23:34.131 "uuid": "21db692f-8f9d-4401-bb04-8249d0267330", 00:23:34.131 "assigned_rate_limits": { 00:23:34.131 "rw_ios_per_sec": 0, 00:23:34.131 "rw_mbytes_per_sec": 0, 00:23:34.131 "r_mbytes_per_sec": 0, 00:23:34.131 "w_mbytes_per_sec": 0 00:23:34.131 }, 00:23:34.131 "claimed": false, 00:23:34.131 "zoned": false, 00:23:34.131 "supported_io_types": { 00:23:34.131 "read": true, 00:23:34.131 "write": true, 00:23:34.131 "unmap": true, 00:23:34.131 "flush": true, 00:23:34.131 "reset": true, 00:23:34.131 "nvme_admin": false, 00:23:34.131 "nvme_io": false, 00:23:34.131 "nvme_io_md": false, 00:23:34.131 "write_zeroes": true, 00:23:34.131 "zcopy": false, 00:23:34.131 "get_zone_info": false, 00:23:34.131 "zone_management": false, 00:23:34.131 "zone_append": false, 00:23:34.131 "compare": false, 00:23:34.131 "compare_and_write": false, 00:23:34.131 "abort": false, 00:23:34.131 "seek_hole": false, 00:23:34.131 "seek_data": false, 00:23:34.131 "copy": false, 00:23:34.131 "nvme_iov_md": false 00:23:34.131 }, 00:23:34.131 "memory_domains": [ 00:23:34.131 { 00:23:34.131 "dma_device_id": "system", 00:23:34.131 "dma_device_type": 1 00:23:34.131 }, 00:23:34.131 { 00:23:34.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:34.131 "dma_device_type": 2 00:23:34.131 }, 00:23:34.131 { 00:23:34.131 "dma_device_id": "system", 00:23:34.131 "dma_device_type": 1 00:23:34.131 }, 00:23:34.131 { 00:23:34.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:34.131 "dma_device_type": 2 00:23:34.131 }, 00:23:34.131 { 00:23:34.131 "dma_device_id": "system", 00:23:34.131 "dma_device_type": 1 00:23:34.131 }, 00:23:34.131 { 00:23:34.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:34.131 "dma_device_type": 2 00:23:34.131 }, 00:23:34.131 { 00:23:34.131 "dma_device_id": "system", 00:23:34.131 "dma_device_type": 1 00:23:34.131 }, 00:23:34.131 { 00:23:34.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:34.131 "dma_device_type": 2 00:23:34.131 } 00:23:34.131 ], 00:23:34.131 "driver_specific": { 00:23:34.131 "raid": { 00:23:34.131 "uuid": "21db692f-8f9d-4401-bb04-8249d0267330", 00:23:34.131 "strip_size_kb": 64, 00:23:34.131 "state": "online", 00:23:34.131 "raid_level": "raid0", 00:23:34.131 "superblock": false, 00:23:34.131 "num_base_bdevs": 4, 00:23:34.131 "num_base_bdevs_discovered": 4, 00:23:34.131 "num_base_bdevs_operational": 4, 00:23:34.131 "base_bdevs_list": [ 00:23:34.131 { 00:23:34.132 "name": "BaseBdev1", 00:23:34.132 "uuid": "e3094f8a-d017-4f94-a005-da09053aa254", 00:23:34.132 "is_configured": true, 00:23:34.132 "data_offset": 0, 00:23:34.132 "data_size": 65536 00:23:34.132 }, 00:23:34.132 { 00:23:34.132 "name": "BaseBdev2", 00:23:34.132 "uuid": "0d45580e-30d7-48de-a04a-dce3b0a25ba1", 00:23:34.132 
"is_configured": true, 00:23:34.132 "data_offset": 0, 00:23:34.132 "data_size": 65536 00:23:34.132 }, 00:23:34.132 { 00:23:34.132 "name": "BaseBdev3", 00:23:34.132 "uuid": "248029b1-8513-4b0c-b62d-5ae293b1489c", 00:23:34.132 "is_configured": true, 00:23:34.132 "data_offset": 0, 00:23:34.132 "data_size": 65536 00:23:34.132 }, 00:23:34.132 { 00:23:34.132 "name": "BaseBdev4", 00:23:34.132 "uuid": "8b07454a-253e-437d-b55b-2c2532b3e06b", 00:23:34.132 "is_configured": true, 00:23:34.132 "data_offset": 0, 00:23:34.132 "data_size": 65536 00:23:34.132 } 00:23:34.132 ] 00:23:34.132 } 00:23:34.132 } 00:23:34.132 }' 00:23:34.132 00:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:34.132 00:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:23:34.132 BaseBdev2 00:23:34.132 BaseBdev3 00:23:34.132 BaseBdev4' 00:23:34.132 00:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:34.132 00:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:23:34.132 00:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:34.391 00:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:34.391 "name": "BaseBdev1", 00:23:34.391 "aliases": [ 00:23:34.391 "e3094f8a-d017-4f94-a005-da09053aa254" 00:23:34.391 ], 00:23:34.391 "product_name": "Malloc disk", 00:23:34.391 "block_size": 512, 00:23:34.391 "num_blocks": 65536, 00:23:34.391 "uuid": "e3094f8a-d017-4f94-a005-da09053aa254", 00:23:34.391 "assigned_rate_limits": { 00:23:34.391 "rw_ios_per_sec": 0, 00:23:34.391 "rw_mbytes_per_sec": 0, 00:23:34.391 "r_mbytes_per_sec": 0, 00:23:34.391 "w_mbytes_per_sec": 0 00:23:34.391 }, 00:23:34.391 "claimed": true, 00:23:34.391 "claim_type": "exclusive_write", 00:23:34.391 "zoned": false, 00:23:34.391 "supported_io_types": { 00:23:34.391 "read": true, 00:23:34.391 "write": true, 00:23:34.391 "unmap": true, 00:23:34.391 "flush": true, 00:23:34.391 "reset": true, 00:23:34.391 "nvme_admin": false, 00:23:34.391 "nvme_io": false, 00:23:34.391 "nvme_io_md": false, 00:23:34.391 "write_zeroes": true, 00:23:34.391 "zcopy": true, 00:23:34.391 "get_zone_info": false, 00:23:34.391 "zone_management": false, 00:23:34.391 "zone_append": false, 00:23:34.391 "compare": false, 00:23:34.391 "compare_and_write": false, 00:23:34.391 "abort": true, 00:23:34.391 "seek_hole": false, 00:23:34.391 "seek_data": false, 00:23:34.391 "copy": true, 00:23:34.391 "nvme_iov_md": false 00:23:34.391 }, 00:23:34.391 "memory_domains": [ 00:23:34.391 { 00:23:34.391 "dma_device_id": "system", 00:23:34.391 "dma_device_type": 1 00:23:34.391 }, 00:23:34.391 { 00:23:34.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:34.391 "dma_device_type": 2 00:23:34.391 } 00:23:34.391 ], 00:23:34.391 "driver_specific": {} 00:23:34.391 }' 00:23:34.391 00:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:34.391 00:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:34.391 00:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:34.391 00:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:34.391 00:49:57 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:34.650 00:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:34.650 00:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:34.650 00:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:34.650 00:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:34.650 00:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:34.650 00:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:34.650 00:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:34.650 00:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:34.650 00:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:34.650 00:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:34.909 00:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:34.909 "name": "BaseBdev2", 00:23:34.909 "aliases": [ 00:23:34.909 "0d45580e-30d7-48de-a04a-dce3b0a25ba1" 00:23:34.909 ], 00:23:34.909 "product_name": "Malloc disk", 00:23:34.909 "block_size": 512, 00:23:34.909 "num_blocks": 65536, 00:23:34.909 "uuid": "0d45580e-30d7-48de-a04a-dce3b0a25ba1", 00:23:34.909 "assigned_rate_limits": { 00:23:34.909 "rw_ios_per_sec": 0, 00:23:34.909 "rw_mbytes_per_sec": 0, 00:23:34.909 "r_mbytes_per_sec": 0, 00:23:34.909 "w_mbytes_per_sec": 0 00:23:34.909 }, 00:23:34.909 "claimed": true, 00:23:34.909 "claim_type": "exclusive_write", 00:23:34.909 "zoned": false, 00:23:34.909 "supported_io_types": { 00:23:34.909 "read": true, 00:23:34.909 "write": true, 00:23:34.909 "unmap": true, 00:23:34.909 "flush": true, 00:23:34.909 "reset": true, 00:23:34.909 "nvme_admin": false, 00:23:34.909 "nvme_io": false, 00:23:34.909 "nvme_io_md": false, 00:23:34.909 "write_zeroes": true, 00:23:34.909 "zcopy": true, 00:23:34.909 "get_zone_info": false, 00:23:34.909 "zone_management": false, 00:23:34.909 "zone_append": false, 00:23:34.909 "compare": false, 00:23:34.909 "compare_and_write": false, 00:23:34.909 "abort": true, 00:23:34.909 "seek_hole": false, 00:23:34.909 "seek_data": false, 00:23:34.909 "copy": true, 00:23:34.909 "nvme_iov_md": false 00:23:34.909 }, 00:23:34.909 "memory_domains": [ 00:23:34.909 { 00:23:34.909 "dma_device_id": "system", 00:23:34.909 "dma_device_type": 1 00:23:34.909 }, 00:23:34.909 { 00:23:34.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:34.909 "dma_device_type": 2 00:23:34.909 } 00:23:34.909 ], 00:23:34.909 "driver_specific": {} 00:23:34.909 }' 00:23:34.909 00:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:35.169 00:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:35.169 00:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:35.169 00:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:35.169 00:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:35.169 00:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:35.169 00:49:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:35.169 00:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:35.169 00:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:35.169 00:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:35.428 00:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:35.428 00:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:35.428 00:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:35.428 00:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:35.428 00:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:35.687 00:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:35.687 "name": "BaseBdev3", 00:23:35.687 "aliases": [ 00:23:35.687 "248029b1-8513-4b0c-b62d-5ae293b1489c" 00:23:35.687 ], 00:23:35.687 "product_name": "Malloc disk", 00:23:35.687 "block_size": 512, 00:23:35.687 "num_blocks": 65536, 00:23:35.687 "uuid": "248029b1-8513-4b0c-b62d-5ae293b1489c", 00:23:35.687 "assigned_rate_limits": { 00:23:35.687 "rw_ios_per_sec": 0, 00:23:35.687 "rw_mbytes_per_sec": 0, 00:23:35.687 "r_mbytes_per_sec": 0, 00:23:35.687 "w_mbytes_per_sec": 0 00:23:35.687 }, 00:23:35.687 "claimed": true, 00:23:35.687 "claim_type": "exclusive_write", 00:23:35.687 "zoned": false, 00:23:35.687 "supported_io_types": { 00:23:35.687 "read": true, 00:23:35.687 "write": true, 00:23:35.687 "unmap": true, 00:23:35.687 "flush": true, 00:23:35.687 "reset": true, 00:23:35.687 "nvme_admin": false, 00:23:35.687 "nvme_io": false, 00:23:35.687 "nvme_io_md": false, 00:23:35.687 "write_zeroes": true, 00:23:35.687 "zcopy": true, 00:23:35.687 "get_zone_info": false, 00:23:35.687 "zone_management": false, 00:23:35.687 "zone_append": false, 00:23:35.687 "compare": false, 00:23:35.687 "compare_and_write": false, 00:23:35.687 "abort": true, 00:23:35.687 "seek_hole": false, 00:23:35.687 "seek_data": false, 00:23:35.687 "copy": true, 00:23:35.687 "nvme_iov_md": false 00:23:35.687 }, 00:23:35.687 "memory_domains": [ 00:23:35.687 { 00:23:35.687 "dma_device_id": "system", 00:23:35.687 "dma_device_type": 1 00:23:35.687 }, 00:23:35.687 { 00:23:35.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:35.688 "dma_device_type": 2 00:23:35.688 } 00:23:35.688 ], 00:23:35.688 "driver_specific": {} 00:23:35.688 }' 00:23:35.688 00:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:35.688 00:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:35.688 00:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:35.688 00:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:35.688 00:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:35.688 00:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:35.688 00:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:35.947 00:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:35.947 
00:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:35.947 00:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:35.947 00:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:35.947 00:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:35.947 00:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:35.947 00:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:35.947 00:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:36.206 00:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:36.206 "name": "BaseBdev4", 00:23:36.206 "aliases": [ 00:23:36.206 "8b07454a-253e-437d-b55b-2c2532b3e06b" 00:23:36.206 ], 00:23:36.206 "product_name": "Malloc disk", 00:23:36.206 "block_size": 512, 00:23:36.206 "num_blocks": 65536, 00:23:36.206 "uuid": "8b07454a-253e-437d-b55b-2c2532b3e06b", 00:23:36.206 "assigned_rate_limits": { 00:23:36.206 "rw_ios_per_sec": 0, 00:23:36.206 "rw_mbytes_per_sec": 0, 00:23:36.206 "r_mbytes_per_sec": 0, 00:23:36.206 "w_mbytes_per_sec": 0 00:23:36.206 }, 00:23:36.206 "claimed": true, 00:23:36.206 "claim_type": "exclusive_write", 00:23:36.206 "zoned": false, 00:23:36.206 "supported_io_types": { 00:23:36.206 "read": true, 00:23:36.206 "write": true, 00:23:36.206 "unmap": true, 00:23:36.206 "flush": true, 00:23:36.206 "reset": true, 00:23:36.206 "nvme_admin": false, 00:23:36.206 "nvme_io": false, 00:23:36.206 "nvme_io_md": false, 00:23:36.206 "write_zeroes": true, 00:23:36.206 "zcopy": true, 00:23:36.206 "get_zone_info": false, 00:23:36.206 "zone_management": false, 00:23:36.206 "zone_append": false, 00:23:36.206 "compare": false, 00:23:36.206 "compare_and_write": false, 00:23:36.206 "abort": true, 00:23:36.206 "seek_hole": false, 00:23:36.206 "seek_data": false, 00:23:36.206 "copy": true, 00:23:36.206 "nvme_iov_md": false 00:23:36.206 }, 00:23:36.206 "memory_domains": [ 00:23:36.206 { 00:23:36.206 "dma_device_id": "system", 00:23:36.206 "dma_device_type": 1 00:23:36.206 }, 00:23:36.206 { 00:23:36.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:36.206 "dma_device_type": 2 00:23:36.206 } 00:23:36.206 ], 00:23:36.206 "driver_specific": {} 00:23:36.206 }' 00:23:36.206 00:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:36.206 00:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:36.466 00:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:36.466 00:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:36.466 00:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:36.466 00:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:36.466 00:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:36.466 00:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:36.466 00:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:36.466 00:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:23:36.466 00:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:36.725 00:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:36.725 00:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:36.725 [2024-07-25 00:49:59.356761] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:36.725 [2024-07-25 00:49:59.356788] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:36.725 [2024-07-25 00:49:59.356830] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:36.985 00:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:23:36.985 00:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:23:36.985 00:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:36.985 00:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:23:36.985 00:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:23:36.985 00:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:23:36.985 00:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:36.985 00:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:23:36.985 00:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:36.985 00:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:36.985 00:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:36.985 00:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:36.985 00:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:36.985 00:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:36.985 00:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:36.985 00:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.985 00:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:37.244 00:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:37.244 "name": "Existed_Raid", 00:23:37.244 "uuid": "21db692f-8f9d-4401-bb04-8249d0267330", 00:23:37.244 "strip_size_kb": 64, 00:23:37.244 "state": "offline", 00:23:37.244 "raid_level": "raid0", 00:23:37.244 "superblock": false, 00:23:37.244 "num_base_bdevs": 4, 00:23:37.244 "num_base_bdevs_discovered": 3, 00:23:37.244 "num_base_bdevs_operational": 3, 00:23:37.244 "base_bdevs_list": [ 00:23:37.244 { 00:23:37.244 "name": null, 00:23:37.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.244 "is_configured": false, 00:23:37.244 "data_offset": 0, 00:23:37.244 "data_size": 65536 00:23:37.244 }, 00:23:37.244 { 00:23:37.244 "name": "BaseBdev2", 00:23:37.244 "uuid": 
"0d45580e-30d7-48de-a04a-dce3b0a25ba1", 00:23:37.244 "is_configured": true, 00:23:37.244 "data_offset": 0, 00:23:37.244 "data_size": 65536 00:23:37.244 }, 00:23:37.244 { 00:23:37.244 "name": "BaseBdev3", 00:23:37.244 "uuid": "248029b1-8513-4b0c-b62d-5ae293b1489c", 00:23:37.244 "is_configured": true, 00:23:37.244 "data_offset": 0, 00:23:37.244 "data_size": 65536 00:23:37.244 }, 00:23:37.244 { 00:23:37.244 "name": "BaseBdev4", 00:23:37.244 "uuid": "8b07454a-253e-437d-b55b-2c2532b3e06b", 00:23:37.244 "is_configured": true, 00:23:37.244 "data_offset": 0, 00:23:37.244 "data_size": 65536 00:23:37.244 } 00:23:37.244 ] 00:23:37.244 }' 00:23:37.244 00:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:37.244 00:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.813 00:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:23:37.813 00:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:37.813 00:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.813 00:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:37.813 00:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:37.813 00:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:37.813 00:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:38.072 [2024-07-25 00:50:00.678951] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:38.331 00:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:38.331 00:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:38.331 00:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.331 00:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:38.590 00:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:38.590 00:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:38.590 00:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:38.590 [2024-07-25 00:50:01.226059] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:38.849 00:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:38.849 00:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:38.849 00:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.849 00:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:39.108 00:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:39.108 00:50:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:39.108 00:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:23:39.365 [2024-07-25 00:50:01.814756] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:39.365 [2024-07-25 00:50:01.814802] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:23:39.365 00:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:39.365 00:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:39.366 00:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.366 00:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:23:39.624 00:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:23:39.624 00:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:23:39.624 00:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:23:39.624 00:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:23:39.624 00:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:39.624 00:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:39.882 BaseBdev2 00:23:39.882 00:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:23:39.882 00:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:23:39.882 00:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:39.882 00:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:39.882 00:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:39.882 00:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:39.882 00:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:40.140 00:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:40.399 [ 00:23:40.399 { 00:23:40.399 "name": "BaseBdev2", 00:23:40.399 "aliases": [ 00:23:40.399 "2a153b2e-d90e-4988-829a-4f6534e035be" 00:23:40.399 ], 00:23:40.399 "product_name": "Malloc disk", 00:23:40.399 "block_size": 512, 00:23:40.399 "num_blocks": 65536, 00:23:40.399 "uuid": "2a153b2e-d90e-4988-829a-4f6534e035be", 00:23:40.399 "assigned_rate_limits": { 00:23:40.399 "rw_ios_per_sec": 0, 00:23:40.399 "rw_mbytes_per_sec": 0, 00:23:40.399 "r_mbytes_per_sec": 0, 00:23:40.399 "w_mbytes_per_sec": 0 00:23:40.399 }, 00:23:40.399 "claimed": false, 00:23:40.399 "zoned": false, 00:23:40.399 "supported_io_types": { 00:23:40.399 "read": true, 00:23:40.399 "write": true, 00:23:40.399 "unmap": 
true, 00:23:40.399 "flush": true, 00:23:40.399 "reset": true, 00:23:40.399 "nvme_admin": false, 00:23:40.399 "nvme_io": false, 00:23:40.399 "nvme_io_md": false, 00:23:40.399 "write_zeroes": true, 00:23:40.399 "zcopy": true, 00:23:40.399 "get_zone_info": false, 00:23:40.399 "zone_management": false, 00:23:40.399 "zone_append": false, 00:23:40.399 "compare": false, 00:23:40.399 "compare_and_write": false, 00:23:40.399 "abort": true, 00:23:40.399 "seek_hole": false, 00:23:40.399 "seek_data": false, 00:23:40.399 "copy": true, 00:23:40.399 "nvme_iov_md": false 00:23:40.399 }, 00:23:40.399 "memory_domains": [ 00:23:40.399 { 00:23:40.399 "dma_device_id": "system", 00:23:40.399 "dma_device_type": 1 00:23:40.399 }, 00:23:40.399 { 00:23:40.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:40.399 "dma_device_type": 2 00:23:40.399 } 00:23:40.399 ], 00:23:40.399 "driver_specific": {} 00:23:40.399 } 00:23:40.399 ] 00:23:40.399 00:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:40.399 00:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:40.399 00:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:40.399 00:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:40.399 BaseBdev3 00:23:40.657 00:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:23:40.657 00:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:23:40.658 00:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:40.658 00:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:40.658 00:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:40.658 00:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:40.658 00:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:40.658 00:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:40.915 [ 00:23:40.915 { 00:23:40.915 "name": "BaseBdev3", 00:23:40.915 "aliases": [ 00:23:40.915 "20956bb9-4546-426c-850f-729c7276d73f" 00:23:40.915 ], 00:23:40.915 "product_name": "Malloc disk", 00:23:40.915 "block_size": 512, 00:23:40.915 "num_blocks": 65536, 00:23:40.915 "uuid": "20956bb9-4546-426c-850f-729c7276d73f", 00:23:40.915 "assigned_rate_limits": { 00:23:40.915 "rw_ios_per_sec": 0, 00:23:40.915 "rw_mbytes_per_sec": 0, 00:23:40.915 "r_mbytes_per_sec": 0, 00:23:40.915 "w_mbytes_per_sec": 0 00:23:40.915 }, 00:23:40.915 "claimed": false, 00:23:40.915 "zoned": false, 00:23:40.915 "supported_io_types": { 00:23:40.915 "read": true, 00:23:40.915 "write": true, 00:23:40.915 "unmap": true, 00:23:40.915 "flush": true, 00:23:40.915 "reset": true, 00:23:40.915 "nvme_admin": false, 00:23:40.915 "nvme_io": false, 00:23:40.915 "nvme_io_md": false, 00:23:40.915 "write_zeroes": true, 00:23:40.915 "zcopy": true, 00:23:40.915 "get_zone_info": false, 00:23:40.915 "zone_management": false, 00:23:40.915 "zone_append": false, 00:23:40.915 
"compare": false, 00:23:40.915 "compare_and_write": false, 00:23:40.915 "abort": true, 00:23:40.915 "seek_hole": false, 00:23:40.915 "seek_data": false, 00:23:40.915 "copy": true, 00:23:40.915 "nvme_iov_md": false 00:23:40.915 }, 00:23:40.915 "memory_domains": [ 00:23:40.915 { 00:23:40.915 "dma_device_id": "system", 00:23:40.915 "dma_device_type": 1 00:23:40.915 }, 00:23:40.915 { 00:23:40.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:40.915 "dma_device_type": 2 00:23:40.915 } 00:23:40.915 ], 00:23:40.915 "driver_specific": {} 00:23:40.915 } 00:23:40.915 ] 00:23:40.915 00:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:40.915 00:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:40.915 00:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:40.915 00:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:41.173 BaseBdev4 00:23:41.173 00:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:23:41.173 00:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:23:41.173 00:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:41.173 00:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:41.173 00:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:41.173 00:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:41.173 00:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:41.173 00:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:41.435 [ 00:23:41.435 { 00:23:41.435 "name": "BaseBdev4", 00:23:41.435 "aliases": [ 00:23:41.435 "f3c77aa0-ec33-47bf-908f-f6250e84a8d8" 00:23:41.435 ], 00:23:41.435 "product_name": "Malloc disk", 00:23:41.435 "block_size": 512, 00:23:41.435 "num_blocks": 65536, 00:23:41.435 "uuid": "f3c77aa0-ec33-47bf-908f-f6250e84a8d8", 00:23:41.435 "assigned_rate_limits": { 00:23:41.435 "rw_ios_per_sec": 0, 00:23:41.435 "rw_mbytes_per_sec": 0, 00:23:41.435 "r_mbytes_per_sec": 0, 00:23:41.435 "w_mbytes_per_sec": 0 00:23:41.435 }, 00:23:41.435 "claimed": false, 00:23:41.435 "zoned": false, 00:23:41.435 "supported_io_types": { 00:23:41.435 "read": true, 00:23:41.435 "write": true, 00:23:41.435 "unmap": true, 00:23:41.436 "flush": true, 00:23:41.436 "reset": true, 00:23:41.436 "nvme_admin": false, 00:23:41.436 "nvme_io": false, 00:23:41.436 "nvme_io_md": false, 00:23:41.436 "write_zeroes": true, 00:23:41.436 "zcopy": true, 00:23:41.436 "get_zone_info": false, 00:23:41.436 "zone_management": false, 00:23:41.436 "zone_append": false, 00:23:41.436 "compare": false, 00:23:41.436 "compare_and_write": false, 00:23:41.436 "abort": true, 00:23:41.436 "seek_hole": false, 00:23:41.436 "seek_data": false, 00:23:41.436 "copy": true, 00:23:41.436 "nvme_iov_md": false 00:23:41.436 }, 00:23:41.436 "memory_domains": [ 00:23:41.436 { 00:23:41.436 "dma_device_id": "system", 00:23:41.436 
"dma_device_type": 1 00:23:41.436 }, 00:23:41.436 { 00:23:41.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:41.436 "dma_device_type": 2 00:23:41.436 } 00:23:41.436 ], 00:23:41.436 "driver_specific": {} 00:23:41.436 } 00:23:41.436 ] 00:23:41.436 00:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:41.436 00:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:41.436 00:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:41.436 00:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:41.728 [2024-07-25 00:50:04.143937] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:41.728 [2024-07-25 00:50:04.143996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:41.728 [2024-07-25 00:50:04.144014] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:41.728 [2024-07-25 00:50:04.145841] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:41.728 [2024-07-25 00:50:04.145891] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:41.728 00:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:41.728 00:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:41.728 00:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:41.728 00:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:41.728 00:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:41.728 00:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:41.728 00:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:41.728 00:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:41.728 00:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:41.728 00:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:41.728 00:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.728 00:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:41.996 00:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:41.996 "name": "Existed_Raid", 00:23:41.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.996 "strip_size_kb": 64, 00:23:41.996 "state": "configuring", 00:23:41.996 "raid_level": "raid0", 00:23:41.996 "superblock": false, 00:23:41.996 "num_base_bdevs": 4, 00:23:41.996 "num_base_bdevs_discovered": 3, 00:23:41.996 "num_base_bdevs_operational": 4, 00:23:41.996 "base_bdevs_list": [ 00:23:41.996 { 00:23:41.996 "name": "BaseBdev1", 00:23:41.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.996 "is_configured": 
false, 00:23:41.996 "data_offset": 0, 00:23:41.996 "data_size": 0 00:23:41.996 }, 00:23:41.996 { 00:23:41.996 "name": "BaseBdev2", 00:23:41.996 "uuid": "2a153b2e-d90e-4988-829a-4f6534e035be", 00:23:41.996 "is_configured": true, 00:23:41.996 "data_offset": 0, 00:23:41.996 "data_size": 65536 00:23:41.996 }, 00:23:41.996 { 00:23:41.996 "name": "BaseBdev3", 00:23:41.996 "uuid": "20956bb9-4546-426c-850f-729c7276d73f", 00:23:41.996 "is_configured": true, 00:23:41.996 "data_offset": 0, 00:23:41.996 "data_size": 65536 00:23:41.996 }, 00:23:41.996 { 00:23:41.996 "name": "BaseBdev4", 00:23:41.996 "uuid": "f3c77aa0-ec33-47bf-908f-f6250e84a8d8", 00:23:41.996 "is_configured": true, 00:23:41.996 "data_offset": 0, 00:23:41.996 "data_size": 65536 00:23:41.996 } 00:23:41.996 ] 00:23:41.996 }' 00:23:41.996 00:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:41.996 00:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.565 00:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:42.824 [2024-07-25 00:50:05.232206] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:42.824 00:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:42.824 00:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:42.824 00:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:42.824 00:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:42.824 00:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:42.824 00:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:42.824 00:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:42.824 00:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:42.824 00:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:42.824 00:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:42.824 00:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.824 00:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:43.082 00:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:43.082 "name": "Existed_Raid", 00:23:43.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:43.082 "strip_size_kb": 64, 00:23:43.082 "state": "configuring", 00:23:43.082 "raid_level": "raid0", 00:23:43.082 "superblock": false, 00:23:43.082 "num_base_bdevs": 4, 00:23:43.082 "num_base_bdevs_discovered": 2, 00:23:43.082 "num_base_bdevs_operational": 4, 00:23:43.082 "base_bdevs_list": [ 00:23:43.082 { 00:23:43.082 "name": "BaseBdev1", 00:23:43.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:43.082 "is_configured": false, 00:23:43.082 "data_offset": 0, 00:23:43.082 "data_size": 0 00:23:43.082 }, 00:23:43.082 { 00:23:43.082 "name": null, 00:23:43.082 "uuid": 
"2a153b2e-d90e-4988-829a-4f6534e035be", 00:23:43.082 "is_configured": false, 00:23:43.082 "data_offset": 0, 00:23:43.082 "data_size": 65536 00:23:43.082 }, 00:23:43.082 { 00:23:43.082 "name": "BaseBdev3", 00:23:43.082 "uuid": "20956bb9-4546-426c-850f-729c7276d73f", 00:23:43.082 "is_configured": true, 00:23:43.082 "data_offset": 0, 00:23:43.082 "data_size": 65536 00:23:43.082 }, 00:23:43.082 { 00:23:43.082 "name": "BaseBdev4", 00:23:43.082 "uuid": "f3c77aa0-ec33-47bf-908f-f6250e84a8d8", 00:23:43.082 "is_configured": true, 00:23:43.082 "data_offset": 0, 00:23:43.082 "data_size": 65536 00:23:43.082 } 00:23:43.082 ] 00:23:43.082 }' 00:23:43.082 00:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:43.082 00:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:43.650 00:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.650 00:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:43.650 00:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:23:43.650 00:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:43.909 [2024-07-25 00:50:06.474306] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:43.909 BaseBdev1 00:23:43.909 00:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:23:43.909 00:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:43.909 00:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:43.909 00:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:43.909 00:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:43.909 00:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:43.909 00:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:44.168 00:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:44.427 [ 00:23:44.427 { 00:23:44.427 "name": "BaseBdev1", 00:23:44.427 "aliases": [ 00:23:44.427 "45c9d685-aa45-4e6b-bcb4-0b5c070931c2" 00:23:44.427 ], 00:23:44.427 "product_name": "Malloc disk", 00:23:44.427 "block_size": 512, 00:23:44.427 "num_blocks": 65536, 00:23:44.427 "uuid": "45c9d685-aa45-4e6b-bcb4-0b5c070931c2", 00:23:44.427 "assigned_rate_limits": { 00:23:44.427 "rw_ios_per_sec": 0, 00:23:44.427 "rw_mbytes_per_sec": 0, 00:23:44.427 "r_mbytes_per_sec": 0, 00:23:44.427 "w_mbytes_per_sec": 0 00:23:44.427 }, 00:23:44.427 "claimed": true, 00:23:44.427 "claim_type": "exclusive_write", 00:23:44.427 "zoned": false, 00:23:44.427 "supported_io_types": { 00:23:44.427 "read": true, 00:23:44.427 "write": true, 00:23:44.427 "unmap": true, 00:23:44.427 "flush": true, 00:23:44.427 "reset": true, 00:23:44.427 "nvme_admin": false, 00:23:44.427 "nvme_io": false, 00:23:44.427 
"nvme_io_md": false, 00:23:44.427 "write_zeroes": true, 00:23:44.427 "zcopy": true, 00:23:44.427 "get_zone_info": false, 00:23:44.427 "zone_management": false, 00:23:44.427 "zone_append": false, 00:23:44.427 "compare": false, 00:23:44.427 "compare_and_write": false, 00:23:44.427 "abort": true, 00:23:44.427 "seek_hole": false, 00:23:44.427 "seek_data": false, 00:23:44.427 "copy": true, 00:23:44.427 "nvme_iov_md": false 00:23:44.427 }, 00:23:44.427 "memory_domains": [ 00:23:44.427 { 00:23:44.427 "dma_device_id": "system", 00:23:44.427 "dma_device_type": 1 00:23:44.427 }, 00:23:44.427 { 00:23:44.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:44.427 "dma_device_type": 2 00:23:44.427 } 00:23:44.427 ], 00:23:44.427 "driver_specific": {} 00:23:44.427 } 00:23:44.427 ] 00:23:44.427 00:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:44.427 00:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:44.427 00:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:44.427 00:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:44.427 00:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:44.427 00:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:44.427 00:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:44.427 00:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:44.427 00:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:44.427 00:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:44.427 00:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:44.427 00:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.427 00:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:44.686 00:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:44.686 "name": "Existed_Raid", 00:23:44.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:44.686 "strip_size_kb": 64, 00:23:44.686 "state": "configuring", 00:23:44.686 "raid_level": "raid0", 00:23:44.686 "superblock": false, 00:23:44.686 "num_base_bdevs": 4, 00:23:44.686 "num_base_bdevs_discovered": 3, 00:23:44.686 "num_base_bdevs_operational": 4, 00:23:44.686 "base_bdevs_list": [ 00:23:44.686 { 00:23:44.686 "name": "BaseBdev1", 00:23:44.686 "uuid": "45c9d685-aa45-4e6b-bcb4-0b5c070931c2", 00:23:44.686 "is_configured": true, 00:23:44.686 "data_offset": 0, 00:23:44.686 "data_size": 65536 00:23:44.686 }, 00:23:44.686 { 00:23:44.686 "name": null, 00:23:44.686 "uuid": "2a153b2e-d90e-4988-829a-4f6534e035be", 00:23:44.686 "is_configured": false, 00:23:44.686 "data_offset": 0, 00:23:44.686 "data_size": 65536 00:23:44.686 }, 00:23:44.686 { 00:23:44.686 "name": "BaseBdev3", 00:23:44.686 "uuid": "20956bb9-4546-426c-850f-729c7276d73f", 00:23:44.686 "is_configured": true, 00:23:44.686 "data_offset": 0, 00:23:44.686 "data_size": 65536 00:23:44.686 }, 00:23:44.686 { 00:23:44.686 
"name": "BaseBdev4", 00:23:44.686 "uuid": "f3c77aa0-ec33-47bf-908f-f6250e84a8d8", 00:23:44.686 "is_configured": true, 00:23:44.686 "data_offset": 0, 00:23:44.686 "data_size": 65536 00:23:44.686 } 00:23:44.686 ] 00:23:44.686 }' 00:23:44.686 00:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:44.686 00:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:45.255 00:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:45.255 00:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.513 00:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:23:45.513 00:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:23:45.772 [2024-07-25 00:50:08.256632] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:45.772 00:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:45.772 00:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:45.772 00:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:45.772 00:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:45.772 00:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:45.772 00:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:45.772 00:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:45.773 00:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:45.773 00:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:45.773 00:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:45.773 00:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.773 00:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:46.032 00:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:46.032 "name": "Existed_Raid", 00:23:46.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.032 "strip_size_kb": 64, 00:23:46.032 "state": "configuring", 00:23:46.032 "raid_level": "raid0", 00:23:46.032 "superblock": false, 00:23:46.032 "num_base_bdevs": 4, 00:23:46.032 "num_base_bdevs_discovered": 2, 00:23:46.032 "num_base_bdevs_operational": 4, 00:23:46.032 "base_bdevs_list": [ 00:23:46.032 { 00:23:46.032 "name": "BaseBdev1", 00:23:46.032 "uuid": "45c9d685-aa45-4e6b-bcb4-0b5c070931c2", 00:23:46.032 "is_configured": true, 00:23:46.032 "data_offset": 0, 00:23:46.032 "data_size": 65536 00:23:46.032 }, 00:23:46.032 { 00:23:46.032 "name": null, 00:23:46.032 "uuid": "2a153b2e-d90e-4988-829a-4f6534e035be", 00:23:46.032 "is_configured": false, 00:23:46.032 "data_offset": 0, 00:23:46.032 "data_size": 
65536 00:23:46.032 }, 00:23:46.032 { 00:23:46.032 "name": null, 00:23:46.032 "uuid": "20956bb9-4546-426c-850f-729c7276d73f", 00:23:46.032 "is_configured": false, 00:23:46.032 "data_offset": 0, 00:23:46.032 "data_size": 65536 00:23:46.032 }, 00:23:46.032 { 00:23:46.032 "name": "BaseBdev4", 00:23:46.032 "uuid": "f3c77aa0-ec33-47bf-908f-f6250e84a8d8", 00:23:46.032 "is_configured": true, 00:23:46.032 "data_offset": 0, 00:23:46.032 "data_size": 65536 00:23:46.032 } 00:23:46.032 ] 00:23:46.032 }' 00:23:46.032 00:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:46.032 00:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.601 00:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.601 00:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:46.860 00:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:23:46.860 00:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:47.119 [2024-07-25 00:50:09.524882] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:47.119 00:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:47.119 00:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:47.119 00:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:47.119 00:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:47.119 00:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:47.119 00:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:47.119 00:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:47.119 00:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:47.119 00:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:47.119 00:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:47.119 00:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:47.119 00:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.119 00:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:47.119 "name": "Existed_Raid", 00:23:47.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.119 "strip_size_kb": 64, 00:23:47.119 "state": "configuring", 00:23:47.119 "raid_level": "raid0", 00:23:47.119 "superblock": false, 00:23:47.119 "num_base_bdevs": 4, 00:23:47.119 "num_base_bdevs_discovered": 3, 00:23:47.119 "num_base_bdevs_operational": 4, 00:23:47.119 "base_bdevs_list": [ 00:23:47.119 { 00:23:47.119 "name": "BaseBdev1", 00:23:47.119 "uuid": "45c9d685-aa45-4e6b-bcb4-0b5c070931c2", 00:23:47.119 
"is_configured": true, 00:23:47.119 "data_offset": 0, 00:23:47.119 "data_size": 65536 00:23:47.119 }, 00:23:47.119 { 00:23:47.119 "name": null, 00:23:47.119 "uuid": "2a153b2e-d90e-4988-829a-4f6534e035be", 00:23:47.119 "is_configured": false, 00:23:47.119 "data_offset": 0, 00:23:47.119 "data_size": 65536 00:23:47.119 }, 00:23:47.119 { 00:23:47.119 "name": "BaseBdev3", 00:23:47.119 "uuid": "20956bb9-4546-426c-850f-729c7276d73f", 00:23:47.119 "is_configured": true, 00:23:47.119 "data_offset": 0, 00:23:47.119 "data_size": 65536 00:23:47.119 }, 00:23:47.119 { 00:23:47.119 "name": "BaseBdev4", 00:23:47.119 "uuid": "f3c77aa0-ec33-47bf-908f-f6250e84a8d8", 00:23:47.119 "is_configured": true, 00:23:47.119 "data_offset": 0, 00:23:47.119 "data_size": 65536 00:23:47.119 } 00:23:47.119 ] 00:23:47.119 }' 00:23:47.119 00:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:47.119 00:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.055 00:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:48.055 00:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.055 00:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:23:48.055 00:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:48.314 [2024-07-25 00:50:10.821141] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:48.314 00:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:48.314 00:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:48.314 00:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:48.314 00:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:48.314 00:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:48.314 00:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:48.314 00:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:48.314 00:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:48.314 00:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:48.314 00:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:48.314 00:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.314 00:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:48.573 00:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:48.573 "name": "Existed_Raid", 00:23:48.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:48.573 "strip_size_kb": 64, 00:23:48.573 "state": "configuring", 00:23:48.573 "raid_level": "raid0", 00:23:48.573 "superblock": false, 00:23:48.573 
"num_base_bdevs": 4, 00:23:48.573 "num_base_bdevs_discovered": 2, 00:23:48.573 "num_base_bdevs_operational": 4, 00:23:48.573 "base_bdevs_list": [ 00:23:48.573 { 00:23:48.573 "name": null, 00:23:48.573 "uuid": "45c9d685-aa45-4e6b-bcb4-0b5c070931c2", 00:23:48.573 "is_configured": false, 00:23:48.573 "data_offset": 0, 00:23:48.573 "data_size": 65536 00:23:48.573 }, 00:23:48.573 { 00:23:48.573 "name": null, 00:23:48.573 "uuid": "2a153b2e-d90e-4988-829a-4f6534e035be", 00:23:48.573 "is_configured": false, 00:23:48.573 "data_offset": 0, 00:23:48.573 "data_size": 65536 00:23:48.573 }, 00:23:48.573 { 00:23:48.573 "name": "BaseBdev3", 00:23:48.573 "uuid": "20956bb9-4546-426c-850f-729c7276d73f", 00:23:48.573 "is_configured": true, 00:23:48.573 "data_offset": 0, 00:23:48.573 "data_size": 65536 00:23:48.573 }, 00:23:48.573 { 00:23:48.573 "name": "BaseBdev4", 00:23:48.573 "uuid": "f3c77aa0-ec33-47bf-908f-f6250e84a8d8", 00:23:48.573 "is_configured": true, 00:23:48.573 "data_offset": 0, 00:23:48.573 "data_size": 65536 00:23:48.573 } 00:23:48.573 ] 00:23:48.573 }' 00:23:48.573 00:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:48.573 00:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:49.142 00:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.142 00:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:49.402 00:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:23:49.402 00:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:49.660 [2024-07-25 00:50:12.251945] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:49.660 00:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:49.660 00:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:49.660 00:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:49.660 00:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:49.660 00:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:49.660 00:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:49.660 00:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:49.660 00:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:49.660 00:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:49.660 00:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:49.660 00:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:49.660 00:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.919 00:50:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:49.919 "name": "Existed_Raid", 00:23:49.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.919 "strip_size_kb": 64, 00:23:49.919 "state": "configuring", 00:23:49.919 "raid_level": "raid0", 00:23:49.919 "superblock": false, 00:23:49.919 "num_base_bdevs": 4, 00:23:49.919 "num_base_bdevs_discovered": 3, 00:23:49.919 "num_base_bdevs_operational": 4, 00:23:49.919 "base_bdevs_list": [ 00:23:49.919 { 00:23:49.919 "name": null, 00:23:49.919 "uuid": "45c9d685-aa45-4e6b-bcb4-0b5c070931c2", 00:23:49.919 "is_configured": false, 00:23:49.919 "data_offset": 0, 00:23:49.919 "data_size": 65536 00:23:49.919 }, 00:23:49.919 { 00:23:49.919 "name": "BaseBdev2", 00:23:49.919 "uuid": "2a153b2e-d90e-4988-829a-4f6534e035be", 00:23:49.919 "is_configured": true, 00:23:49.919 "data_offset": 0, 00:23:49.919 "data_size": 65536 00:23:49.919 }, 00:23:49.919 { 00:23:49.919 "name": "BaseBdev3", 00:23:49.919 "uuid": "20956bb9-4546-426c-850f-729c7276d73f", 00:23:49.919 "is_configured": true, 00:23:49.919 "data_offset": 0, 00:23:49.919 "data_size": 65536 00:23:49.919 }, 00:23:49.919 { 00:23:49.919 "name": "BaseBdev4", 00:23:49.919 "uuid": "f3c77aa0-ec33-47bf-908f-f6250e84a8d8", 00:23:49.919 "is_configured": true, 00:23:49.919 "data_offset": 0, 00:23:49.919 "data_size": 65536 00:23:49.919 } 00:23:49.919 ] 00:23:49.919 }' 00:23:49.919 00:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:49.919 00:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:50.486 00:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:50.486 00:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.744 00:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:23:50.744 00:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.744 00:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:51.001 00:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 45c9d685-aa45-4e6b-bcb4-0b5c070931c2 00:23:51.259 [2024-07-25 00:50:13.673793] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:51.259 [2024-07-25 00:50:13.673989] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:23:51.259 [2024-07-25 00:50:13.674031] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:23:51.259 [2024-07-25 00:50:13.674216] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:51.259 [2024-07-25 00:50:13.674655] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:23:51.259 [2024-07-25 00:50:13.674768] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009380 00:23:51.259 [2024-07-25 00:50:13.675085] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:51.259 NewBaseBdev 00:23:51.259 00:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # 
waitforbdev NewBaseBdev 00:23:51.259 00:50:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:23:51.259 00:50:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:51.259 00:50:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:51.259 00:50:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:51.259 00:50:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:51.259 00:50:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:51.517 00:50:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:51.517 [ 00:23:51.517 { 00:23:51.517 "name": "NewBaseBdev", 00:23:51.517 "aliases": [ 00:23:51.517 "45c9d685-aa45-4e6b-bcb4-0b5c070931c2" 00:23:51.517 ], 00:23:51.517 "product_name": "Malloc disk", 00:23:51.517 "block_size": 512, 00:23:51.517 "num_blocks": 65536, 00:23:51.517 "uuid": "45c9d685-aa45-4e6b-bcb4-0b5c070931c2", 00:23:51.517 "assigned_rate_limits": { 00:23:51.517 "rw_ios_per_sec": 0, 00:23:51.517 "rw_mbytes_per_sec": 0, 00:23:51.517 "r_mbytes_per_sec": 0, 00:23:51.517 "w_mbytes_per_sec": 0 00:23:51.517 }, 00:23:51.517 "claimed": true, 00:23:51.517 "claim_type": "exclusive_write", 00:23:51.517 "zoned": false, 00:23:51.517 "supported_io_types": { 00:23:51.517 "read": true, 00:23:51.517 "write": true, 00:23:51.517 "unmap": true, 00:23:51.517 "flush": true, 00:23:51.517 "reset": true, 00:23:51.517 "nvme_admin": false, 00:23:51.517 "nvme_io": false, 00:23:51.517 "nvme_io_md": false, 00:23:51.517 "write_zeroes": true, 00:23:51.517 "zcopy": true, 00:23:51.517 "get_zone_info": false, 00:23:51.517 "zone_management": false, 00:23:51.517 "zone_append": false, 00:23:51.517 "compare": false, 00:23:51.517 "compare_and_write": false, 00:23:51.517 "abort": true, 00:23:51.517 "seek_hole": false, 00:23:51.517 "seek_data": false, 00:23:51.517 "copy": true, 00:23:51.517 "nvme_iov_md": false 00:23:51.517 }, 00:23:51.517 "memory_domains": [ 00:23:51.517 { 00:23:51.517 "dma_device_id": "system", 00:23:51.517 "dma_device_type": 1 00:23:51.517 }, 00:23:51.517 { 00:23:51.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:51.517 "dma_device_type": 2 00:23:51.517 } 00:23:51.517 ], 00:23:51.517 "driver_specific": {} 00:23:51.517 } 00:23:51.517 ] 00:23:51.517 00:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:51.517 00:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:23:51.517 00:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:51.518 00:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:51.518 00:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:51.518 00:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:51.518 00:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:51.518 00:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:23:51.518 00:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:51.518 00:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:51.518 00:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:51.518 00:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.518 00:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:51.776 00:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:51.776 "name": "Existed_Raid", 00:23:51.776 "uuid": "10b2e911-7884-40bc-ab22-ffd9286291f8", 00:23:51.776 "strip_size_kb": 64, 00:23:51.776 "state": "online", 00:23:51.776 "raid_level": "raid0", 00:23:51.776 "superblock": false, 00:23:51.776 "num_base_bdevs": 4, 00:23:51.776 "num_base_bdevs_discovered": 4, 00:23:51.776 "num_base_bdevs_operational": 4, 00:23:51.776 "base_bdevs_list": [ 00:23:51.776 { 00:23:51.776 "name": "NewBaseBdev", 00:23:51.776 "uuid": "45c9d685-aa45-4e6b-bcb4-0b5c070931c2", 00:23:51.776 "is_configured": true, 00:23:51.776 "data_offset": 0, 00:23:51.776 "data_size": 65536 00:23:51.776 }, 00:23:51.776 { 00:23:51.776 "name": "BaseBdev2", 00:23:51.776 "uuid": "2a153b2e-d90e-4988-829a-4f6534e035be", 00:23:51.776 "is_configured": true, 00:23:51.776 "data_offset": 0, 00:23:51.776 "data_size": 65536 00:23:51.776 }, 00:23:51.776 { 00:23:51.776 "name": "BaseBdev3", 00:23:51.776 "uuid": "20956bb9-4546-426c-850f-729c7276d73f", 00:23:51.776 "is_configured": true, 00:23:51.776 "data_offset": 0, 00:23:51.776 "data_size": 65536 00:23:51.777 }, 00:23:51.777 { 00:23:51.777 "name": "BaseBdev4", 00:23:51.777 "uuid": "f3c77aa0-ec33-47bf-908f-f6250e84a8d8", 00:23:51.777 "is_configured": true, 00:23:51.777 "data_offset": 0, 00:23:51.777 "data_size": 65536 00:23:51.777 } 00:23:51.777 ] 00:23:51.777 }' 00:23:51.777 00:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:51.777 00:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:52.344 00:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:23:52.344 00:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:52.344 00:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:52.344 00:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:52.344 00:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:52.344 00:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:52.345 00:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:52.345 00:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:52.604 [2024-07-25 00:50:15.240829] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:52.863 00:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:52.863 "name": "Existed_Raid", 00:23:52.863 "aliases": [ 00:23:52.863 
"10b2e911-7884-40bc-ab22-ffd9286291f8" 00:23:52.863 ], 00:23:52.863 "product_name": "Raid Volume", 00:23:52.863 "block_size": 512, 00:23:52.863 "num_blocks": 262144, 00:23:52.863 "uuid": "10b2e911-7884-40bc-ab22-ffd9286291f8", 00:23:52.863 "assigned_rate_limits": { 00:23:52.863 "rw_ios_per_sec": 0, 00:23:52.863 "rw_mbytes_per_sec": 0, 00:23:52.863 "r_mbytes_per_sec": 0, 00:23:52.863 "w_mbytes_per_sec": 0 00:23:52.863 }, 00:23:52.863 "claimed": false, 00:23:52.863 "zoned": false, 00:23:52.863 "supported_io_types": { 00:23:52.863 "read": true, 00:23:52.863 "write": true, 00:23:52.863 "unmap": true, 00:23:52.863 "flush": true, 00:23:52.863 "reset": true, 00:23:52.863 "nvme_admin": false, 00:23:52.863 "nvme_io": false, 00:23:52.863 "nvme_io_md": false, 00:23:52.863 "write_zeroes": true, 00:23:52.863 "zcopy": false, 00:23:52.863 "get_zone_info": false, 00:23:52.863 "zone_management": false, 00:23:52.863 "zone_append": false, 00:23:52.863 "compare": false, 00:23:52.863 "compare_and_write": false, 00:23:52.863 "abort": false, 00:23:52.863 "seek_hole": false, 00:23:52.863 "seek_data": false, 00:23:52.863 "copy": false, 00:23:52.863 "nvme_iov_md": false 00:23:52.863 }, 00:23:52.863 "memory_domains": [ 00:23:52.863 { 00:23:52.863 "dma_device_id": "system", 00:23:52.863 "dma_device_type": 1 00:23:52.863 }, 00:23:52.863 { 00:23:52.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.863 "dma_device_type": 2 00:23:52.863 }, 00:23:52.863 { 00:23:52.863 "dma_device_id": "system", 00:23:52.863 "dma_device_type": 1 00:23:52.863 }, 00:23:52.863 { 00:23:52.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.863 "dma_device_type": 2 00:23:52.863 }, 00:23:52.863 { 00:23:52.863 "dma_device_id": "system", 00:23:52.863 "dma_device_type": 1 00:23:52.863 }, 00:23:52.863 { 00:23:52.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.863 "dma_device_type": 2 00:23:52.863 }, 00:23:52.863 { 00:23:52.863 "dma_device_id": "system", 00:23:52.863 "dma_device_type": 1 00:23:52.863 }, 00:23:52.863 { 00:23:52.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.863 "dma_device_type": 2 00:23:52.863 } 00:23:52.863 ], 00:23:52.863 "driver_specific": { 00:23:52.863 "raid": { 00:23:52.863 "uuid": "10b2e911-7884-40bc-ab22-ffd9286291f8", 00:23:52.863 "strip_size_kb": 64, 00:23:52.863 "state": "online", 00:23:52.863 "raid_level": "raid0", 00:23:52.863 "superblock": false, 00:23:52.863 "num_base_bdevs": 4, 00:23:52.863 "num_base_bdevs_discovered": 4, 00:23:52.863 "num_base_bdevs_operational": 4, 00:23:52.863 "base_bdevs_list": [ 00:23:52.863 { 00:23:52.863 "name": "NewBaseBdev", 00:23:52.863 "uuid": "45c9d685-aa45-4e6b-bcb4-0b5c070931c2", 00:23:52.863 "is_configured": true, 00:23:52.863 "data_offset": 0, 00:23:52.863 "data_size": 65536 00:23:52.863 }, 00:23:52.863 { 00:23:52.863 "name": "BaseBdev2", 00:23:52.863 "uuid": "2a153b2e-d90e-4988-829a-4f6534e035be", 00:23:52.863 "is_configured": true, 00:23:52.863 "data_offset": 0, 00:23:52.863 "data_size": 65536 00:23:52.863 }, 00:23:52.863 { 00:23:52.863 "name": "BaseBdev3", 00:23:52.863 "uuid": "20956bb9-4546-426c-850f-729c7276d73f", 00:23:52.863 "is_configured": true, 00:23:52.863 "data_offset": 0, 00:23:52.863 "data_size": 65536 00:23:52.863 }, 00:23:52.863 { 00:23:52.863 "name": "BaseBdev4", 00:23:52.863 "uuid": "f3c77aa0-ec33-47bf-908f-f6250e84a8d8", 00:23:52.863 "is_configured": true, 00:23:52.863 "data_offset": 0, 00:23:52.863 "data_size": 65536 00:23:52.863 } 00:23:52.863 ] 00:23:52.863 } 00:23:52.863 } 00:23:52.863 }' 00:23:52.863 00:50:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:52.864 00:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:23:52.864 BaseBdev2 00:23:52.864 BaseBdev3 00:23:52.864 BaseBdev4' 00:23:52.864 00:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:52.864 00:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:23:52.864 00:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:52.864 00:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:52.864 "name": "NewBaseBdev", 00:23:52.864 "aliases": [ 00:23:52.864 "45c9d685-aa45-4e6b-bcb4-0b5c070931c2" 00:23:52.864 ], 00:23:52.864 "product_name": "Malloc disk", 00:23:52.864 "block_size": 512, 00:23:52.864 "num_blocks": 65536, 00:23:52.864 "uuid": "45c9d685-aa45-4e6b-bcb4-0b5c070931c2", 00:23:52.864 "assigned_rate_limits": { 00:23:52.864 "rw_ios_per_sec": 0, 00:23:52.864 "rw_mbytes_per_sec": 0, 00:23:52.864 "r_mbytes_per_sec": 0, 00:23:52.864 "w_mbytes_per_sec": 0 00:23:52.864 }, 00:23:52.864 "claimed": true, 00:23:52.864 "claim_type": "exclusive_write", 00:23:52.864 "zoned": false, 00:23:52.864 "supported_io_types": { 00:23:52.864 "read": true, 00:23:52.864 "write": true, 00:23:52.864 "unmap": true, 00:23:52.864 "flush": true, 00:23:52.864 "reset": true, 00:23:52.864 "nvme_admin": false, 00:23:52.864 "nvme_io": false, 00:23:52.864 "nvme_io_md": false, 00:23:52.864 "write_zeroes": true, 00:23:52.864 "zcopy": true, 00:23:52.864 "get_zone_info": false, 00:23:52.864 "zone_management": false, 00:23:52.864 "zone_append": false, 00:23:52.864 "compare": false, 00:23:52.864 "compare_and_write": false, 00:23:52.864 "abort": true, 00:23:52.864 "seek_hole": false, 00:23:52.864 "seek_data": false, 00:23:52.864 "copy": true, 00:23:52.864 "nvme_iov_md": false 00:23:52.864 }, 00:23:52.864 "memory_domains": [ 00:23:52.864 { 00:23:52.864 "dma_device_id": "system", 00:23:52.864 "dma_device_type": 1 00:23:52.864 }, 00:23:52.864 { 00:23:52.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.864 "dma_device_type": 2 00:23:52.864 } 00:23:52.864 ], 00:23:52.864 "driver_specific": {} 00:23:52.864 }' 00:23:52.864 00:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:53.122 00:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:53.122 00:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:53.122 00:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:53.122 00:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:53.122 00:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:53.122 00:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:53.122 00:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:53.122 00:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:53.122 00:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:53.122 00:50:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:53.381 00:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:53.381 00:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:53.381 00:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:53.381 00:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:53.381 00:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:53.381 "name": "BaseBdev2", 00:23:53.381 "aliases": [ 00:23:53.381 "2a153b2e-d90e-4988-829a-4f6534e035be" 00:23:53.381 ], 00:23:53.381 "product_name": "Malloc disk", 00:23:53.381 "block_size": 512, 00:23:53.381 "num_blocks": 65536, 00:23:53.381 "uuid": "2a153b2e-d90e-4988-829a-4f6534e035be", 00:23:53.381 "assigned_rate_limits": { 00:23:53.381 "rw_ios_per_sec": 0, 00:23:53.381 "rw_mbytes_per_sec": 0, 00:23:53.381 "r_mbytes_per_sec": 0, 00:23:53.381 "w_mbytes_per_sec": 0 00:23:53.381 }, 00:23:53.381 "claimed": true, 00:23:53.381 "claim_type": "exclusive_write", 00:23:53.381 "zoned": false, 00:23:53.381 "supported_io_types": { 00:23:53.381 "read": true, 00:23:53.381 "write": true, 00:23:53.381 "unmap": true, 00:23:53.381 "flush": true, 00:23:53.381 "reset": true, 00:23:53.381 "nvme_admin": false, 00:23:53.381 "nvme_io": false, 00:23:53.381 "nvme_io_md": false, 00:23:53.381 "write_zeroes": true, 00:23:53.381 "zcopy": true, 00:23:53.381 "get_zone_info": false, 00:23:53.381 "zone_management": false, 00:23:53.381 "zone_append": false, 00:23:53.381 "compare": false, 00:23:53.381 "compare_and_write": false, 00:23:53.381 "abort": true, 00:23:53.381 "seek_hole": false, 00:23:53.381 "seek_data": false, 00:23:53.381 "copy": true, 00:23:53.381 "nvme_iov_md": false 00:23:53.381 }, 00:23:53.381 "memory_domains": [ 00:23:53.381 { 00:23:53.381 "dma_device_id": "system", 00:23:53.381 "dma_device_type": 1 00:23:53.381 }, 00:23:53.381 { 00:23:53.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:53.381 "dma_device_type": 2 00:23:53.381 } 00:23:53.381 ], 00:23:53.381 "driver_specific": {} 00:23:53.381 }' 00:23:53.381 00:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:53.381 00:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:53.641 00:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:53.641 00:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:53.641 00:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:53.641 00:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:53.641 00:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:53.641 00:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:53.641 00:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:53.641 00:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:53.900 00:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:53.900 00:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:53.900 00:50:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:53.900 00:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:53.900 00:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:54.158 00:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:54.158 "name": "BaseBdev3", 00:23:54.158 "aliases": [ 00:23:54.158 "20956bb9-4546-426c-850f-729c7276d73f" 00:23:54.158 ], 00:23:54.158 "product_name": "Malloc disk", 00:23:54.158 "block_size": 512, 00:23:54.158 "num_blocks": 65536, 00:23:54.158 "uuid": "20956bb9-4546-426c-850f-729c7276d73f", 00:23:54.158 "assigned_rate_limits": { 00:23:54.158 "rw_ios_per_sec": 0, 00:23:54.158 "rw_mbytes_per_sec": 0, 00:23:54.158 "r_mbytes_per_sec": 0, 00:23:54.158 "w_mbytes_per_sec": 0 00:23:54.158 }, 00:23:54.158 "claimed": true, 00:23:54.158 "claim_type": "exclusive_write", 00:23:54.158 "zoned": false, 00:23:54.158 "supported_io_types": { 00:23:54.158 "read": true, 00:23:54.158 "write": true, 00:23:54.158 "unmap": true, 00:23:54.158 "flush": true, 00:23:54.158 "reset": true, 00:23:54.158 "nvme_admin": false, 00:23:54.158 "nvme_io": false, 00:23:54.158 "nvme_io_md": false, 00:23:54.158 "write_zeroes": true, 00:23:54.158 "zcopy": true, 00:23:54.158 "get_zone_info": false, 00:23:54.158 "zone_management": false, 00:23:54.158 "zone_append": false, 00:23:54.158 "compare": false, 00:23:54.158 "compare_and_write": false, 00:23:54.158 "abort": true, 00:23:54.158 "seek_hole": false, 00:23:54.158 "seek_data": false, 00:23:54.158 "copy": true, 00:23:54.158 "nvme_iov_md": false 00:23:54.158 }, 00:23:54.158 "memory_domains": [ 00:23:54.158 { 00:23:54.158 "dma_device_id": "system", 00:23:54.158 "dma_device_type": 1 00:23:54.158 }, 00:23:54.158 { 00:23:54.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:54.158 "dma_device_type": 2 00:23:54.158 } 00:23:54.158 ], 00:23:54.158 "driver_specific": {} 00:23:54.158 }' 00:23:54.158 00:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:54.158 00:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:54.158 00:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:54.158 00:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:54.158 00:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:54.158 00:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:54.417 00:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:54.417 00:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:54.417 00:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:54.417 00:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:54.417 00:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:54.417 00:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:54.417 00:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:54.417 00:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:54.417 00:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:54.676 00:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:54.676 "name": "BaseBdev4", 00:23:54.676 "aliases": [ 00:23:54.676 "f3c77aa0-ec33-47bf-908f-f6250e84a8d8" 00:23:54.676 ], 00:23:54.676 "product_name": "Malloc disk", 00:23:54.676 "block_size": 512, 00:23:54.676 "num_blocks": 65536, 00:23:54.676 "uuid": "f3c77aa0-ec33-47bf-908f-f6250e84a8d8", 00:23:54.676 "assigned_rate_limits": { 00:23:54.676 "rw_ios_per_sec": 0, 00:23:54.676 "rw_mbytes_per_sec": 0, 00:23:54.676 "r_mbytes_per_sec": 0, 00:23:54.676 "w_mbytes_per_sec": 0 00:23:54.676 }, 00:23:54.676 "claimed": true, 00:23:54.676 "claim_type": "exclusive_write", 00:23:54.676 "zoned": false, 00:23:54.676 "supported_io_types": { 00:23:54.676 "read": true, 00:23:54.676 "write": true, 00:23:54.676 "unmap": true, 00:23:54.676 "flush": true, 00:23:54.676 "reset": true, 00:23:54.676 "nvme_admin": false, 00:23:54.676 "nvme_io": false, 00:23:54.676 "nvme_io_md": false, 00:23:54.676 "write_zeroes": true, 00:23:54.676 "zcopy": true, 00:23:54.676 "get_zone_info": false, 00:23:54.676 "zone_management": false, 00:23:54.676 "zone_append": false, 00:23:54.676 "compare": false, 00:23:54.676 "compare_and_write": false, 00:23:54.676 "abort": true, 00:23:54.676 "seek_hole": false, 00:23:54.676 "seek_data": false, 00:23:54.676 "copy": true, 00:23:54.676 "nvme_iov_md": false 00:23:54.676 }, 00:23:54.676 "memory_domains": [ 00:23:54.676 { 00:23:54.676 "dma_device_id": "system", 00:23:54.676 "dma_device_type": 1 00:23:54.676 }, 00:23:54.676 { 00:23:54.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:54.676 "dma_device_type": 2 00:23:54.676 } 00:23:54.676 ], 00:23:54.676 "driver_specific": {} 00:23:54.676 }' 00:23:54.676 00:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:54.677 00:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:54.935 00:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:54.935 00:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:54.935 00:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:54.935 00:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:54.935 00:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:54.935 00:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:54.935 00:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:54.935 00:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:55.194 00:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:55.194 00:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:55.194 00:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:55.194 [2024-07-25 00:50:17.801064] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:55.194 [2024-07-25 00:50:17.801094] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:23:55.194 [2024-07-25 00:50:17.801149] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:55.194 [2024-07-25 00:50:17.801209] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:55.194 [2024-07-25 00:50:17.801217] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name Existed_Raid, state offline 00:23:55.194 00:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 134939 00:23:55.194 00:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 134939 ']' 00:23:55.194 00:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 134939 00:23:55.194 00:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:23:55.194 00:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:55.194 00:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 134939 00:23:55.453 00:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:55.453 00:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:55.453 00:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 134939' 00:23:55.453 killing process with pid 134939 00:23:55.453 00:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 134939 00:23:55.453 [2024-07-25 00:50:17.848851] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:55.453 00:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 134939 00:23:55.710 [2024-07-25 00:50:18.242049] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:57.086 ************************************ 00:23:57.086 END TEST raid_state_function_test 00:23:57.086 ************************************ 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:23:57.086 00:23:57.086 real 0m32.737s 00:23:57.086 user 0m58.850s 00:23:57.086 sys 0m5.065s 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.086 00:50:19 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:23:57.086 00:50:19 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:23:57.086 00:50:19 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:57.086 00:50:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:57.086 ************************************ 00:23:57.086 START TEST raid_state_function_test_sb 00:23:57.086 ************************************ 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 true 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 
00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=136027 00:23:57.086 Process raid pid: 136027 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 136027' 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 136027 /var/tmp/spdk-raid.sock 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@829 -- # '[' -z 136027 ']' 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:57.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:57.086 00:50:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:57.086 [2024-07-25 00:50:19.733891] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:23:57.086 [2024-07-25 00:50:19.734205] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.345 [2024-07-25 00:50:19.926812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.604 [2024-07-25 00:50:20.118160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.893 [2024-07-25 00:50:20.320867] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:58.153 00:50:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:58.153 00:50:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:23:58.153 00:50:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:58.411 [2024-07-25 00:50:20.879077] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:58.411 [2024-07-25 00:50:20.879164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:58.411 [2024-07-25 00:50:20.879174] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:58.411 [2024-07-25 00:50:20.879196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:58.411 [2024-07-25 00:50:20.879203] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:58.411 [2024-07-25 00:50:20.879218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:58.411 [2024-07-25 00:50:20.879225] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:58.411 [2024-07-25 00:50:20.879250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:58.411 00:50:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:58.411 00:50:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:58.411 00:50:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:58.411 00:50:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:58.411 00:50:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:58.411 00:50:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:58.411 00:50:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:58.411 00:50:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:58.411 00:50:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:58.411 00:50:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:58.411 00:50:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:58.411 00:50:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.674 00:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:58.674 "name": "Existed_Raid", 00:23:58.674 "uuid": "a585b162-0fb1-4f57-9578-ce70f341b320", 00:23:58.674 "strip_size_kb": 64, 00:23:58.674 "state": "configuring", 00:23:58.674 "raid_level": "raid0", 00:23:58.674 "superblock": true, 00:23:58.674 "num_base_bdevs": 4, 00:23:58.674 "num_base_bdevs_discovered": 0, 00:23:58.674 "num_base_bdevs_operational": 4, 00:23:58.674 "base_bdevs_list": [ 00:23:58.674 { 00:23:58.674 "name": "BaseBdev1", 00:23:58.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:58.674 "is_configured": false, 00:23:58.674 "data_offset": 0, 00:23:58.674 "data_size": 0 00:23:58.674 }, 00:23:58.674 { 00:23:58.674 "name": "BaseBdev2", 00:23:58.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:58.674 "is_configured": false, 00:23:58.674 "data_offset": 0, 00:23:58.674 "data_size": 0 00:23:58.674 }, 00:23:58.674 { 00:23:58.674 "name": "BaseBdev3", 00:23:58.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:58.674 "is_configured": false, 00:23:58.674 "data_offset": 0, 00:23:58.674 "data_size": 0 00:23:58.674 }, 00:23:58.674 { 00:23:58.674 "name": "BaseBdev4", 00:23:58.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:58.674 "is_configured": false, 00:23:58.674 "data_offset": 0, 00:23:58.674 "data_size": 0 00:23:58.674 } 00:23:58.674 ] 00:23:58.674 }' 00:23:58.674 00:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:58.674 00:50:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:59.242 00:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:59.502 [2024-07-25 00:50:21.947485] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:59.502 [2024-07-25 00:50:21.947521] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:23:59.503 00:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:59.503 [2024-07-25 00:50:22.119509] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:59.503 
[2024-07-25 00:50:22.119562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:59.503 [2024-07-25 00:50:22.119571] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:59.503 [2024-07-25 00:50:22.119614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:59.503 [2024-07-25 00:50:22.119623] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:59.503 [2024-07-25 00:50:22.119653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:59.503 [2024-07-25 00:50:22.119661] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:59.503 [2024-07-25 00:50:22.119686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:59.503 00:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:59.762 [2024-07-25 00:50:22.327950] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:59.762 BaseBdev1 00:23:59.762 00:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:23:59.762 00:50:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:59.762 00:50:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:59.762 00:50:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:59.762 00:50:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:59.762 00:50:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:59.762 00:50:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:00.022 00:50:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:00.283 [ 00:24:00.283 { 00:24:00.283 "name": "BaseBdev1", 00:24:00.283 "aliases": [ 00:24:00.283 "3988fbc0-c7c6-4d08-b8ae-90e3cfba71c1" 00:24:00.283 ], 00:24:00.283 "product_name": "Malloc disk", 00:24:00.283 "block_size": 512, 00:24:00.283 "num_blocks": 65536, 00:24:00.283 "uuid": "3988fbc0-c7c6-4d08-b8ae-90e3cfba71c1", 00:24:00.283 "assigned_rate_limits": { 00:24:00.283 "rw_ios_per_sec": 0, 00:24:00.283 "rw_mbytes_per_sec": 0, 00:24:00.283 "r_mbytes_per_sec": 0, 00:24:00.283 "w_mbytes_per_sec": 0 00:24:00.283 }, 00:24:00.283 "claimed": true, 00:24:00.283 "claim_type": "exclusive_write", 00:24:00.283 "zoned": false, 00:24:00.283 "supported_io_types": { 00:24:00.283 "read": true, 00:24:00.283 "write": true, 00:24:00.283 "unmap": true, 00:24:00.283 "flush": true, 00:24:00.283 "reset": true, 00:24:00.283 "nvme_admin": false, 00:24:00.283 "nvme_io": false, 00:24:00.283 "nvme_io_md": false, 00:24:00.283 "write_zeroes": true, 00:24:00.283 "zcopy": true, 00:24:00.283 "get_zone_info": false, 00:24:00.283 "zone_management": false, 00:24:00.283 "zone_append": false, 00:24:00.283 "compare": false, 00:24:00.283 "compare_and_write": false, 00:24:00.283 "abort": true, 00:24:00.283 "seek_hole": false, 
00:24:00.283 "seek_data": false, 00:24:00.283 "copy": true, 00:24:00.283 "nvme_iov_md": false 00:24:00.283 }, 00:24:00.283 "memory_domains": [ 00:24:00.283 { 00:24:00.283 "dma_device_id": "system", 00:24:00.283 "dma_device_type": 1 00:24:00.283 }, 00:24:00.283 { 00:24:00.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:00.283 "dma_device_type": 2 00:24:00.283 } 00:24:00.283 ], 00:24:00.283 "driver_specific": {} 00:24:00.283 } 00:24:00.283 ] 00:24:00.283 00:50:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:00.283 00:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:00.283 00:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:00.283 00:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:00.283 00:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:00.283 00:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:00.283 00:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:00.283 00:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:00.283 00:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:00.284 00:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:00.284 00:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:00.284 00:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.284 00:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:00.544 00:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:00.544 "name": "Existed_Raid", 00:24:00.544 "uuid": "250479cc-82a3-4620-abd1-c861d236c4d4", 00:24:00.544 "strip_size_kb": 64, 00:24:00.544 "state": "configuring", 00:24:00.544 "raid_level": "raid0", 00:24:00.544 "superblock": true, 00:24:00.544 "num_base_bdevs": 4, 00:24:00.544 "num_base_bdevs_discovered": 1, 00:24:00.544 "num_base_bdevs_operational": 4, 00:24:00.544 "base_bdevs_list": [ 00:24:00.544 { 00:24:00.544 "name": "BaseBdev1", 00:24:00.544 "uuid": "3988fbc0-c7c6-4d08-b8ae-90e3cfba71c1", 00:24:00.544 "is_configured": true, 00:24:00.544 "data_offset": 2048, 00:24:00.544 "data_size": 63488 00:24:00.544 }, 00:24:00.544 { 00:24:00.544 "name": "BaseBdev2", 00:24:00.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.544 "is_configured": false, 00:24:00.544 "data_offset": 0, 00:24:00.544 "data_size": 0 00:24:00.544 }, 00:24:00.544 { 00:24:00.544 "name": "BaseBdev3", 00:24:00.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.544 "is_configured": false, 00:24:00.544 "data_offset": 0, 00:24:00.544 "data_size": 0 00:24:00.544 }, 00:24:00.544 { 00:24:00.544 "name": "BaseBdev4", 00:24:00.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.544 "is_configured": false, 00:24:00.544 "data_offset": 0, 00:24:00.544 "data_size": 0 00:24:00.544 } 00:24:00.544 ] 00:24:00.544 }' 00:24:00.544 00:50:22 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:00.544 00:50:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:01.115 00:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:01.115 [2024-07-25 00:50:23.708181] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:01.115 [2024-07-25 00:50:23.708231] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:24:01.115 00:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:01.375 [2024-07-25 00:50:23.968271] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:01.375 [2024-07-25 00:50:23.970140] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:01.375 [2024-07-25 00:50:23.970201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:01.375 [2024-07-25 00:50:23.970212] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:01.375 [2024-07-25 00:50:23.970249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:01.375 [2024-07-25 00:50:23.970275] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:01.375 [2024-07-25 00:50:23.970294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:01.375 00:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:24:01.375 00:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:01.375 00:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:01.375 00:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:01.375 00:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:01.375 00:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:01.375 00:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:01.375 00:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:01.375 00:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:01.375 00:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:01.375 00:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:01.375 00:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:01.375 00:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.375 00:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:01.634 00:50:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:01.634 "name": "Existed_Raid", 00:24:01.634 "uuid": "df1c7c88-97d7-4db6-9763-a6e6dcf5eddc", 00:24:01.634 "strip_size_kb": 64, 00:24:01.634 "state": "configuring", 00:24:01.634 "raid_level": "raid0", 00:24:01.634 "superblock": true, 00:24:01.634 "num_base_bdevs": 4, 00:24:01.634 "num_base_bdevs_discovered": 1, 00:24:01.634 "num_base_bdevs_operational": 4, 00:24:01.634 "base_bdevs_list": [ 00:24:01.634 { 00:24:01.634 "name": "BaseBdev1", 00:24:01.634 "uuid": "3988fbc0-c7c6-4d08-b8ae-90e3cfba71c1", 00:24:01.634 "is_configured": true, 00:24:01.634 "data_offset": 2048, 00:24:01.634 "data_size": 63488 00:24:01.634 }, 00:24:01.634 { 00:24:01.634 "name": "BaseBdev2", 00:24:01.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.634 "is_configured": false, 00:24:01.634 "data_offset": 0, 00:24:01.634 "data_size": 0 00:24:01.634 }, 00:24:01.634 { 00:24:01.634 "name": "BaseBdev3", 00:24:01.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.634 "is_configured": false, 00:24:01.634 "data_offset": 0, 00:24:01.635 "data_size": 0 00:24:01.635 }, 00:24:01.635 { 00:24:01.635 "name": "BaseBdev4", 00:24:01.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.635 "is_configured": false, 00:24:01.635 "data_offset": 0, 00:24:01.635 "data_size": 0 00:24:01.635 } 00:24:01.635 ] 00:24:01.635 }' 00:24:01.635 00:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:01.635 00:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:02.203 00:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:02.462 [2024-07-25 00:50:24.946809] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:02.462 BaseBdev2 00:24:02.462 00:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:24:02.462 00:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:24:02.462 00:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:02.462 00:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:02.462 00:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:02.462 00:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:02.462 00:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:02.722 00:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:02.722 [ 00:24:02.722 { 00:24:02.722 "name": "BaseBdev2", 00:24:02.722 "aliases": [ 00:24:02.722 "d8b2f645-ebd6-490b-aef8-f860cc045fb7" 00:24:02.722 ], 00:24:02.722 "product_name": "Malloc disk", 00:24:02.722 "block_size": 512, 00:24:02.722 "num_blocks": 65536, 00:24:02.722 "uuid": "d8b2f645-ebd6-490b-aef8-f860cc045fb7", 00:24:02.722 "assigned_rate_limits": { 00:24:02.722 "rw_ios_per_sec": 0, 00:24:02.722 "rw_mbytes_per_sec": 0, 00:24:02.722 "r_mbytes_per_sec": 0, 00:24:02.722 "w_mbytes_per_sec": 
0 00:24:02.722 }, 00:24:02.722 "claimed": true, 00:24:02.722 "claim_type": "exclusive_write", 00:24:02.722 "zoned": false, 00:24:02.722 "supported_io_types": { 00:24:02.722 "read": true, 00:24:02.722 "write": true, 00:24:02.722 "unmap": true, 00:24:02.722 "flush": true, 00:24:02.722 "reset": true, 00:24:02.722 "nvme_admin": false, 00:24:02.722 "nvme_io": false, 00:24:02.722 "nvme_io_md": false, 00:24:02.722 "write_zeroes": true, 00:24:02.722 "zcopy": true, 00:24:02.722 "get_zone_info": false, 00:24:02.722 "zone_management": false, 00:24:02.722 "zone_append": false, 00:24:02.722 "compare": false, 00:24:02.722 "compare_and_write": false, 00:24:02.722 "abort": true, 00:24:02.722 "seek_hole": false, 00:24:02.722 "seek_data": false, 00:24:02.722 "copy": true, 00:24:02.722 "nvme_iov_md": false 00:24:02.722 }, 00:24:02.722 "memory_domains": [ 00:24:02.722 { 00:24:02.722 "dma_device_id": "system", 00:24:02.722 "dma_device_type": 1 00:24:02.722 }, 00:24:02.722 { 00:24:02.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:02.722 "dma_device_type": 2 00:24:02.722 } 00:24:02.722 ], 00:24:02.722 "driver_specific": {} 00:24:02.722 } 00:24:02.722 ] 00:24:02.982 00:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:02.982 00:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:02.982 00:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:02.982 00:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:02.982 00:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:02.982 00:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:02.982 00:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:02.982 00:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:02.982 00:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:02.982 00:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:02.982 00:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:02.982 00:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:02.982 00:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:02.982 00:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.982 00:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:02.982 00:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:02.982 "name": "Existed_Raid", 00:24:02.982 "uuid": "df1c7c88-97d7-4db6-9763-a6e6dcf5eddc", 00:24:02.982 "strip_size_kb": 64, 00:24:02.982 "state": "configuring", 00:24:02.982 "raid_level": "raid0", 00:24:02.982 "superblock": true, 00:24:02.982 "num_base_bdevs": 4, 00:24:02.982 "num_base_bdevs_discovered": 2, 00:24:02.982 "num_base_bdevs_operational": 4, 00:24:02.982 "base_bdevs_list": [ 00:24:02.982 { 00:24:02.982 "name": "BaseBdev1", 00:24:02.982 
"uuid": "3988fbc0-c7c6-4d08-b8ae-90e3cfba71c1", 00:24:02.982 "is_configured": true, 00:24:02.982 "data_offset": 2048, 00:24:02.982 "data_size": 63488 00:24:02.982 }, 00:24:02.982 { 00:24:02.982 "name": "BaseBdev2", 00:24:02.982 "uuid": "d8b2f645-ebd6-490b-aef8-f860cc045fb7", 00:24:02.982 "is_configured": true, 00:24:02.982 "data_offset": 2048, 00:24:02.982 "data_size": 63488 00:24:02.982 }, 00:24:02.982 { 00:24:02.982 "name": "BaseBdev3", 00:24:02.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:02.982 "is_configured": false, 00:24:02.982 "data_offset": 0, 00:24:02.982 "data_size": 0 00:24:02.982 }, 00:24:02.982 { 00:24:02.982 "name": "BaseBdev4", 00:24:02.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:02.982 "is_configured": false, 00:24:02.982 "data_offset": 0, 00:24:02.982 "data_size": 0 00:24:02.982 } 00:24:02.982 ] 00:24:02.982 }' 00:24:02.982 00:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:02.982 00:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:03.551 00:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:03.810 [2024-07-25 00:50:26.352147] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:03.810 BaseBdev3 00:24:03.810 00:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:24:03.810 00:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:24:03.810 00:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:03.810 00:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:03.810 00:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:03.810 00:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:03.810 00:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:04.069 00:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:04.069 [ 00:24:04.069 { 00:24:04.069 "name": "BaseBdev3", 00:24:04.069 "aliases": [ 00:24:04.069 "50ddcbf4-378b-46a9-bca7-66b11f211495" 00:24:04.069 ], 00:24:04.069 "product_name": "Malloc disk", 00:24:04.069 "block_size": 512, 00:24:04.069 "num_blocks": 65536, 00:24:04.069 "uuid": "50ddcbf4-378b-46a9-bca7-66b11f211495", 00:24:04.069 "assigned_rate_limits": { 00:24:04.069 "rw_ios_per_sec": 0, 00:24:04.069 "rw_mbytes_per_sec": 0, 00:24:04.069 "r_mbytes_per_sec": 0, 00:24:04.069 "w_mbytes_per_sec": 0 00:24:04.069 }, 00:24:04.069 "claimed": true, 00:24:04.069 "claim_type": "exclusive_write", 00:24:04.069 "zoned": false, 00:24:04.069 "supported_io_types": { 00:24:04.069 "read": true, 00:24:04.069 "write": true, 00:24:04.069 "unmap": true, 00:24:04.069 "flush": true, 00:24:04.069 "reset": true, 00:24:04.069 "nvme_admin": false, 00:24:04.069 "nvme_io": false, 00:24:04.069 "nvme_io_md": false, 00:24:04.069 "write_zeroes": true, 00:24:04.069 "zcopy": true, 00:24:04.069 "get_zone_info": false, 00:24:04.069 "zone_management": false, 
00:24:04.069 "zone_append": false, 00:24:04.069 "compare": false, 00:24:04.069 "compare_and_write": false, 00:24:04.069 "abort": true, 00:24:04.069 "seek_hole": false, 00:24:04.069 "seek_data": false, 00:24:04.069 "copy": true, 00:24:04.069 "nvme_iov_md": false 00:24:04.069 }, 00:24:04.069 "memory_domains": [ 00:24:04.070 { 00:24:04.070 "dma_device_id": "system", 00:24:04.070 "dma_device_type": 1 00:24:04.070 }, 00:24:04.070 { 00:24:04.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:04.070 "dma_device_type": 2 00:24:04.070 } 00:24:04.070 ], 00:24:04.070 "driver_specific": {} 00:24:04.070 } 00:24:04.070 ] 00:24:04.329 00:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:04.329 00:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:04.329 00:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:04.329 00:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:04.329 00:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:04.329 00:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:04.329 00:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:04.329 00:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:04.329 00:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:04.329 00:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:04.329 00:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:04.329 00:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:04.329 00:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:04.329 00:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.329 00:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:04.329 00:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:04.329 "name": "Existed_Raid", 00:24:04.329 "uuid": "df1c7c88-97d7-4db6-9763-a6e6dcf5eddc", 00:24:04.329 "strip_size_kb": 64, 00:24:04.329 "state": "configuring", 00:24:04.329 "raid_level": "raid0", 00:24:04.329 "superblock": true, 00:24:04.329 "num_base_bdevs": 4, 00:24:04.329 "num_base_bdevs_discovered": 3, 00:24:04.329 "num_base_bdevs_operational": 4, 00:24:04.329 "base_bdevs_list": [ 00:24:04.329 { 00:24:04.329 "name": "BaseBdev1", 00:24:04.329 "uuid": "3988fbc0-c7c6-4d08-b8ae-90e3cfba71c1", 00:24:04.329 "is_configured": true, 00:24:04.329 "data_offset": 2048, 00:24:04.329 "data_size": 63488 00:24:04.329 }, 00:24:04.329 { 00:24:04.329 "name": "BaseBdev2", 00:24:04.329 "uuid": "d8b2f645-ebd6-490b-aef8-f860cc045fb7", 00:24:04.329 "is_configured": true, 00:24:04.329 "data_offset": 2048, 00:24:04.329 "data_size": 63488 00:24:04.329 }, 00:24:04.329 { 00:24:04.329 "name": "BaseBdev3", 00:24:04.329 "uuid": "50ddcbf4-378b-46a9-bca7-66b11f211495", 00:24:04.329 "is_configured": true, 
00:24:04.329 "data_offset": 2048, 00:24:04.329 "data_size": 63488 00:24:04.329 }, 00:24:04.329 { 00:24:04.329 "name": "BaseBdev4", 00:24:04.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.329 "is_configured": false, 00:24:04.329 "data_offset": 0, 00:24:04.329 "data_size": 0 00:24:04.329 } 00:24:04.329 ] 00:24:04.329 }' 00:24:04.329 00:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:04.329 00:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:04.894 00:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:05.154 [2024-07-25 00:50:27.687178] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:05.154 [2024-07-25 00:50:27.687444] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:24:05.154 [2024-07-25 00:50:27.687462] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:05.154 [2024-07-25 00:50:27.687621] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:24:05.154 [2024-07-25 00:50:27.687946] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:24:05.154 [2024-07-25 00:50:27.687967] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:24:05.154 [2024-07-25 00:50:27.688123] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:05.154 BaseBdev4 00:24:05.154 00:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:24:05.154 00:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:24:05.154 00:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:05.154 00:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:05.154 00:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:05.154 00:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:05.154 00:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:05.413 00:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:05.673 [ 00:24:05.673 { 00:24:05.673 "name": "BaseBdev4", 00:24:05.673 "aliases": [ 00:24:05.673 "d871b4c1-e580-4947-93e5-150bec7c8b0b" 00:24:05.673 ], 00:24:05.673 "product_name": "Malloc disk", 00:24:05.673 "block_size": 512, 00:24:05.673 "num_blocks": 65536, 00:24:05.673 "uuid": "d871b4c1-e580-4947-93e5-150bec7c8b0b", 00:24:05.673 "assigned_rate_limits": { 00:24:05.673 "rw_ios_per_sec": 0, 00:24:05.673 "rw_mbytes_per_sec": 0, 00:24:05.673 "r_mbytes_per_sec": 0, 00:24:05.673 "w_mbytes_per_sec": 0 00:24:05.673 }, 00:24:05.673 "claimed": true, 00:24:05.673 "claim_type": "exclusive_write", 00:24:05.673 "zoned": false, 00:24:05.673 "supported_io_types": { 00:24:05.673 "read": true, 00:24:05.673 "write": true, 00:24:05.673 "unmap": true, 00:24:05.673 "flush": true, 00:24:05.673 "reset": 
true, 00:24:05.673 "nvme_admin": false, 00:24:05.673 "nvme_io": false, 00:24:05.673 "nvme_io_md": false, 00:24:05.673 "write_zeroes": true, 00:24:05.673 "zcopy": true, 00:24:05.673 "get_zone_info": false, 00:24:05.673 "zone_management": false, 00:24:05.673 "zone_append": false, 00:24:05.673 "compare": false, 00:24:05.673 "compare_and_write": false, 00:24:05.673 "abort": true, 00:24:05.673 "seek_hole": false, 00:24:05.673 "seek_data": false, 00:24:05.673 "copy": true, 00:24:05.673 "nvme_iov_md": false 00:24:05.673 }, 00:24:05.673 "memory_domains": [ 00:24:05.673 { 00:24:05.673 "dma_device_id": "system", 00:24:05.673 "dma_device_type": 1 00:24:05.673 }, 00:24:05.673 { 00:24:05.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:05.673 "dma_device_type": 2 00:24:05.673 } 00:24:05.673 ], 00:24:05.673 "driver_specific": {} 00:24:05.673 } 00:24:05.673 ] 00:24:05.673 00:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:05.673 00:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:05.673 00:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:05.673 00:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:24:05.673 00:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:05.673 00:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:05.673 00:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:05.673 00:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:05.673 00:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:05.673 00:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:05.673 00:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:05.673 00:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:05.673 00:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:05.673 00:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.673 00:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:05.932 00:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:05.932 "name": "Existed_Raid", 00:24:05.932 "uuid": "df1c7c88-97d7-4db6-9763-a6e6dcf5eddc", 00:24:05.932 "strip_size_kb": 64, 00:24:05.932 "state": "online", 00:24:05.932 "raid_level": "raid0", 00:24:05.932 "superblock": true, 00:24:05.932 "num_base_bdevs": 4, 00:24:05.932 "num_base_bdevs_discovered": 4, 00:24:05.932 "num_base_bdevs_operational": 4, 00:24:05.932 "base_bdevs_list": [ 00:24:05.932 { 00:24:05.932 "name": "BaseBdev1", 00:24:05.932 "uuid": "3988fbc0-c7c6-4d08-b8ae-90e3cfba71c1", 00:24:05.932 "is_configured": true, 00:24:05.932 "data_offset": 2048, 00:24:05.932 "data_size": 63488 00:24:05.932 }, 00:24:05.932 { 00:24:05.932 "name": "BaseBdev2", 00:24:05.932 "uuid": "d8b2f645-ebd6-490b-aef8-f860cc045fb7", 00:24:05.932 "is_configured": true, 
00:24:05.932 "data_offset": 2048, 00:24:05.932 "data_size": 63488 00:24:05.932 }, 00:24:05.932 { 00:24:05.932 "name": "BaseBdev3", 00:24:05.932 "uuid": "50ddcbf4-378b-46a9-bca7-66b11f211495", 00:24:05.932 "is_configured": true, 00:24:05.932 "data_offset": 2048, 00:24:05.932 "data_size": 63488 00:24:05.932 }, 00:24:05.932 { 00:24:05.932 "name": "BaseBdev4", 00:24:05.932 "uuid": "d871b4c1-e580-4947-93e5-150bec7c8b0b", 00:24:05.932 "is_configured": true, 00:24:05.932 "data_offset": 2048, 00:24:05.932 "data_size": 63488 00:24:05.932 } 00:24:05.932 ] 00:24:05.932 }' 00:24:05.932 00:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:05.932 00:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:06.501 00:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:24:06.501 00:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:06.501 00:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:06.501 00:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:06.501 00:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:06.501 00:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:24:06.501 00:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:06.501 00:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:06.760 [2024-07-25 00:50:29.199780] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:06.760 00:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:06.760 "name": "Existed_Raid", 00:24:06.760 "aliases": [ 00:24:06.760 "df1c7c88-97d7-4db6-9763-a6e6dcf5eddc" 00:24:06.760 ], 00:24:06.760 "product_name": "Raid Volume", 00:24:06.760 "block_size": 512, 00:24:06.760 "num_blocks": 253952, 00:24:06.760 "uuid": "df1c7c88-97d7-4db6-9763-a6e6dcf5eddc", 00:24:06.760 "assigned_rate_limits": { 00:24:06.760 "rw_ios_per_sec": 0, 00:24:06.761 "rw_mbytes_per_sec": 0, 00:24:06.761 "r_mbytes_per_sec": 0, 00:24:06.761 "w_mbytes_per_sec": 0 00:24:06.761 }, 00:24:06.761 "claimed": false, 00:24:06.761 "zoned": false, 00:24:06.761 "supported_io_types": { 00:24:06.761 "read": true, 00:24:06.761 "write": true, 00:24:06.761 "unmap": true, 00:24:06.761 "flush": true, 00:24:06.761 "reset": true, 00:24:06.761 "nvme_admin": false, 00:24:06.761 "nvme_io": false, 00:24:06.761 "nvme_io_md": false, 00:24:06.761 "write_zeroes": true, 00:24:06.761 "zcopy": false, 00:24:06.761 "get_zone_info": false, 00:24:06.761 "zone_management": false, 00:24:06.761 "zone_append": false, 00:24:06.761 "compare": false, 00:24:06.761 "compare_and_write": false, 00:24:06.761 "abort": false, 00:24:06.761 "seek_hole": false, 00:24:06.761 "seek_data": false, 00:24:06.761 "copy": false, 00:24:06.761 "nvme_iov_md": false 00:24:06.761 }, 00:24:06.761 "memory_domains": [ 00:24:06.761 { 00:24:06.761 "dma_device_id": "system", 00:24:06.761 "dma_device_type": 1 00:24:06.761 }, 00:24:06.761 { 00:24:06.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:06.761 "dma_device_type": 2 00:24:06.761 }, 00:24:06.761 { 00:24:06.761 "dma_device_id": "system", 
00:24:06.761 "dma_device_type": 1 00:24:06.761 }, 00:24:06.761 { 00:24:06.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:06.761 "dma_device_type": 2 00:24:06.761 }, 00:24:06.761 { 00:24:06.761 "dma_device_id": "system", 00:24:06.761 "dma_device_type": 1 00:24:06.761 }, 00:24:06.761 { 00:24:06.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:06.761 "dma_device_type": 2 00:24:06.761 }, 00:24:06.761 { 00:24:06.761 "dma_device_id": "system", 00:24:06.761 "dma_device_type": 1 00:24:06.761 }, 00:24:06.761 { 00:24:06.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:06.761 "dma_device_type": 2 00:24:06.761 } 00:24:06.761 ], 00:24:06.761 "driver_specific": { 00:24:06.761 "raid": { 00:24:06.761 "uuid": "df1c7c88-97d7-4db6-9763-a6e6dcf5eddc", 00:24:06.761 "strip_size_kb": 64, 00:24:06.761 "state": "online", 00:24:06.761 "raid_level": "raid0", 00:24:06.761 "superblock": true, 00:24:06.761 "num_base_bdevs": 4, 00:24:06.761 "num_base_bdevs_discovered": 4, 00:24:06.761 "num_base_bdevs_operational": 4, 00:24:06.761 "base_bdevs_list": [ 00:24:06.761 { 00:24:06.761 "name": "BaseBdev1", 00:24:06.761 "uuid": "3988fbc0-c7c6-4d08-b8ae-90e3cfba71c1", 00:24:06.761 "is_configured": true, 00:24:06.761 "data_offset": 2048, 00:24:06.761 "data_size": 63488 00:24:06.761 }, 00:24:06.761 { 00:24:06.761 "name": "BaseBdev2", 00:24:06.761 "uuid": "d8b2f645-ebd6-490b-aef8-f860cc045fb7", 00:24:06.761 "is_configured": true, 00:24:06.761 "data_offset": 2048, 00:24:06.761 "data_size": 63488 00:24:06.761 }, 00:24:06.761 { 00:24:06.761 "name": "BaseBdev3", 00:24:06.761 "uuid": "50ddcbf4-378b-46a9-bca7-66b11f211495", 00:24:06.761 "is_configured": true, 00:24:06.761 "data_offset": 2048, 00:24:06.761 "data_size": 63488 00:24:06.761 }, 00:24:06.761 { 00:24:06.761 "name": "BaseBdev4", 00:24:06.761 "uuid": "d871b4c1-e580-4947-93e5-150bec7c8b0b", 00:24:06.761 "is_configured": true, 00:24:06.761 "data_offset": 2048, 00:24:06.761 "data_size": 63488 00:24:06.761 } 00:24:06.761 ] 00:24:06.761 } 00:24:06.761 } 00:24:06.761 }' 00:24:06.761 00:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:06.761 00:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:24:06.761 BaseBdev2 00:24:06.761 BaseBdev3 00:24:06.761 BaseBdev4' 00:24:06.761 00:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:06.761 00:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:24:06.761 00:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:07.020 00:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:07.020 "name": "BaseBdev1", 00:24:07.020 "aliases": [ 00:24:07.020 "3988fbc0-c7c6-4d08-b8ae-90e3cfba71c1" 00:24:07.020 ], 00:24:07.020 "product_name": "Malloc disk", 00:24:07.020 "block_size": 512, 00:24:07.020 "num_blocks": 65536, 00:24:07.020 "uuid": "3988fbc0-c7c6-4d08-b8ae-90e3cfba71c1", 00:24:07.020 "assigned_rate_limits": { 00:24:07.020 "rw_ios_per_sec": 0, 00:24:07.020 "rw_mbytes_per_sec": 0, 00:24:07.020 "r_mbytes_per_sec": 0, 00:24:07.020 "w_mbytes_per_sec": 0 00:24:07.020 }, 00:24:07.020 "claimed": true, 00:24:07.021 "claim_type": "exclusive_write", 00:24:07.021 "zoned": false, 00:24:07.021 "supported_io_types": { 00:24:07.021 
"read": true, 00:24:07.021 "write": true, 00:24:07.021 "unmap": true, 00:24:07.021 "flush": true, 00:24:07.021 "reset": true, 00:24:07.021 "nvme_admin": false, 00:24:07.021 "nvme_io": false, 00:24:07.021 "nvme_io_md": false, 00:24:07.021 "write_zeroes": true, 00:24:07.021 "zcopy": true, 00:24:07.021 "get_zone_info": false, 00:24:07.021 "zone_management": false, 00:24:07.021 "zone_append": false, 00:24:07.021 "compare": false, 00:24:07.021 "compare_and_write": false, 00:24:07.021 "abort": true, 00:24:07.021 "seek_hole": false, 00:24:07.021 "seek_data": false, 00:24:07.021 "copy": true, 00:24:07.021 "nvme_iov_md": false 00:24:07.021 }, 00:24:07.021 "memory_domains": [ 00:24:07.021 { 00:24:07.021 "dma_device_id": "system", 00:24:07.021 "dma_device_type": 1 00:24:07.021 }, 00:24:07.021 { 00:24:07.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:07.021 "dma_device_type": 2 00:24:07.021 } 00:24:07.021 ], 00:24:07.021 "driver_specific": {} 00:24:07.021 }' 00:24:07.021 00:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:07.021 00:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:07.021 00:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:07.021 00:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:07.021 00:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:07.021 00:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:07.021 00:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:07.021 00:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:07.280 00:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:07.280 00:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:07.280 00:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:07.280 00:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:07.280 00:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:07.280 00:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:07.280 00:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:07.539 00:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:07.539 "name": "BaseBdev2", 00:24:07.539 "aliases": [ 00:24:07.539 "d8b2f645-ebd6-490b-aef8-f860cc045fb7" 00:24:07.539 ], 00:24:07.539 "product_name": "Malloc disk", 00:24:07.539 "block_size": 512, 00:24:07.539 "num_blocks": 65536, 00:24:07.539 "uuid": "d8b2f645-ebd6-490b-aef8-f860cc045fb7", 00:24:07.539 "assigned_rate_limits": { 00:24:07.539 "rw_ios_per_sec": 0, 00:24:07.539 "rw_mbytes_per_sec": 0, 00:24:07.539 "r_mbytes_per_sec": 0, 00:24:07.539 "w_mbytes_per_sec": 0 00:24:07.539 }, 00:24:07.539 "claimed": true, 00:24:07.539 "claim_type": "exclusive_write", 00:24:07.539 "zoned": false, 00:24:07.539 "supported_io_types": { 00:24:07.539 "read": true, 00:24:07.539 "write": true, 00:24:07.539 "unmap": true, 00:24:07.539 "flush": true, 00:24:07.539 "reset": true, 00:24:07.539 "nvme_admin": 
false, 00:24:07.539 "nvme_io": false, 00:24:07.539 "nvme_io_md": false, 00:24:07.539 "write_zeroes": true, 00:24:07.539 "zcopy": true, 00:24:07.539 "get_zone_info": false, 00:24:07.539 "zone_management": false, 00:24:07.539 "zone_append": false, 00:24:07.539 "compare": false, 00:24:07.539 "compare_and_write": false, 00:24:07.539 "abort": true, 00:24:07.539 "seek_hole": false, 00:24:07.539 "seek_data": false, 00:24:07.539 "copy": true, 00:24:07.539 "nvme_iov_md": false 00:24:07.539 }, 00:24:07.539 "memory_domains": [ 00:24:07.539 { 00:24:07.539 "dma_device_id": "system", 00:24:07.539 "dma_device_type": 1 00:24:07.539 }, 00:24:07.539 { 00:24:07.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:07.539 "dma_device_type": 2 00:24:07.539 } 00:24:07.539 ], 00:24:07.539 "driver_specific": {} 00:24:07.539 }' 00:24:07.539 00:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:07.539 00:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:07.539 00:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:07.539 00:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:07.539 00:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:07.539 00:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:07.539 00:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:07.539 00:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:07.799 00:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:07.799 00:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:07.799 00:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:07.799 00:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:07.799 00:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:07.799 00:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:07.799 00:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:08.058 00:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:08.058 "name": "BaseBdev3", 00:24:08.058 "aliases": [ 00:24:08.058 "50ddcbf4-378b-46a9-bca7-66b11f211495" 00:24:08.058 ], 00:24:08.058 "product_name": "Malloc disk", 00:24:08.058 "block_size": 512, 00:24:08.058 "num_blocks": 65536, 00:24:08.058 "uuid": "50ddcbf4-378b-46a9-bca7-66b11f211495", 00:24:08.058 "assigned_rate_limits": { 00:24:08.058 "rw_ios_per_sec": 0, 00:24:08.058 "rw_mbytes_per_sec": 0, 00:24:08.058 "r_mbytes_per_sec": 0, 00:24:08.058 "w_mbytes_per_sec": 0 00:24:08.058 }, 00:24:08.058 "claimed": true, 00:24:08.058 "claim_type": "exclusive_write", 00:24:08.058 "zoned": false, 00:24:08.058 "supported_io_types": { 00:24:08.058 "read": true, 00:24:08.058 "write": true, 00:24:08.058 "unmap": true, 00:24:08.058 "flush": true, 00:24:08.058 "reset": true, 00:24:08.058 "nvme_admin": false, 00:24:08.058 "nvme_io": false, 00:24:08.058 "nvme_io_md": false, 00:24:08.058 "write_zeroes": true, 00:24:08.058 "zcopy": true, 00:24:08.058 
"get_zone_info": false, 00:24:08.058 "zone_management": false, 00:24:08.058 "zone_append": false, 00:24:08.058 "compare": false, 00:24:08.059 "compare_and_write": false, 00:24:08.059 "abort": true, 00:24:08.059 "seek_hole": false, 00:24:08.059 "seek_data": false, 00:24:08.059 "copy": true, 00:24:08.059 "nvme_iov_md": false 00:24:08.059 }, 00:24:08.059 "memory_domains": [ 00:24:08.059 { 00:24:08.059 "dma_device_id": "system", 00:24:08.059 "dma_device_type": 1 00:24:08.059 }, 00:24:08.059 { 00:24:08.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:08.059 "dma_device_type": 2 00:24:08.059 } 00:24:08.059 ], 00:24:08.059 "driver_specific": {} 00:24:08.059 }' 00:24:08.059 00:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:08.059 00:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:08.059 00:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:08.059 00:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:08.319 00:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:08.319 00:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:08.319 00:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:08.319 00:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:08.319 00:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:08.319 00:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:08.319 00:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:08.319 00:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:08.319 00:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:08.579 00:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:08.579 00:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:08.579 00:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:08.579 "name": "BaseBdev4", 00:24:08.579 "aliases": [ 00:24:08.579 "d871b4c1-e580-4947-93e5-150bec7c8b0b" 00:24:08.579 ], 00:24:08.579 "product_name": "Malloc disk", 00:24:08.579 "block_size": 512, 00:24:08.579 "num_blocks": 65536, 00:24:08.579 "uuid": "d871b4c1-e580-4947-93e5-150bec7c8b0b", 00:24:08.579 "assigned_rate_limits": { 00:24:08.579 "rw_ios_per_sec": 0, 00:24:08.579 "rw_mbytes_per_sec": 0, 00:24:08.579 "r_mbytes_per_sec": 0, 00:24:08.579 "w_mbytes_per_sec": 0 00:24:08.579 }, 00:24:08.579 "claimed": true, 00:24:08.579 "claim_type": "exclusive_write", 00:24:08.579 "zoned": false, 00:24:08.579 "supported_io_types": { 00:24:08.579 "read": true, 00:24:08.579 "write": true, 00:24:08.579 "unmap": true, 00:24:08.579 "flush": true, 00:24:08.579 "reset": true, 00:24:08.579 "nvme_admin": false, 00:24:08.579 "nvme_io": false, 00:24:08.579 "nvme_io_md": false, 00:24:08.579 "write_zeroes": true, 00:24:08.579 "zcopy": true, 00:24:08.579 "get_zone_info": false, 00:24:08.579 "zone_management": false, 00:24:08.579 "zone_append": false, 00:24:08.579 "compare": false, 00:24:08.579 
"compare_and_write": false, 00:24:08.579 "abort": true, 00:24:08.579 "seek_hole": false, 00:24:08.579 "seek_data": false, 00:24:08.579 "copy": true, 00:24:08.579 "nvme_iov_md": false 00:24:08.579 }, 00:24:08.579 "memory_domains": [ 00:24:08.579 { 00:24:08.579 "dma_device_id": "system", 00:24:08.579 "dma_device_type": 1 00:24:08.579 }, 00:24:08.579 { 00:24:08.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:08.579 "dma_device_type": 2 00:24:08.579 } 00:24:08.579 ], 00:24:08.579 "driver_specific": {} 00:24:08.579 }' 00:24:08.579 00:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:08.839 00:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:08.839 00:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:08.839 00:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:08.839 00:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:08.839 00:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:08.839 00:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:08.839 00:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:09.099 00:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:09.099 00:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:09.099 00:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:09.099 00:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:09.099 00:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:09.359 [2024-07-25 00:50:31.868121] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:09.359 [2024-07-25 00:50:31.868156] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:09.359 [2024-07-25 00:50:31.868211] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:09.359 00:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:24:09.359 00:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:24:09.359 00:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:09.359 00:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:24:09.359 00:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:24:09.359 00:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:24:09.359 00:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:09.359 00:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:24:09.359 00:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:09.359 00:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:09.359 00:50:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:09.359 00:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:09.359 00:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:09.359 00:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:09.359 00:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:09.359 00:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.359 00:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:09.619 00:50:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:09.619 "name": "Existed_Raid", 00:24:09.619 "uuid": "df1c7c88-97d7-4db6-9763-a6e6dcf5eddc", 00:24:09.619 "strip_size_kb": 64, 00:24:09.619 "state": "offline", 00:24:09.619 "raid_level": "raid0", 00:24:09.619 "superblock": true, 00:24:09.619 "num_base_bdevs": 4, 00:24:09.619 "num_base_bdevs_discovered": 3, 00:24:09.619 "num_base_bdevs_operational": 3, 00:24:09.619 "base_bdevs_list": [ 00:24:09.619 { 00:24:09.619 "name": null, 00:24:09.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:09.619 "is_configured": false, 00:24:09.619 "data_offset": 2048, 00:24:09.619 "data_size": 63488 00:24:09.619 }, 00:24:09.619 { 00:24:09.619 "name": "BaseBdev2", 00:24:09.619 "uuid": "d8b2f645-ebd6-490b-aef8-f860cc045fb7", 00:24:09.619 "is_configured": true, 00:24:09.619 "data_offset": 2048, 00:24:09.619 "data_size": 63488 00:24:09.619 }, 00:24:09.619 { 00:24:09.619 "name": "BaseBdev3", 00:24:09.619 "uuid": "50ddcbf4-378b-46a9-bca7-66b11f211495", 00:24:09.619 "is_configured": true, 00:24:09.619 "data_offset": 2048, 00:24:09.619 "data_size": 63488 00:24:09.619 }, 00:24:09.619 { 00:24:09.619 "name": "BaseBdev4", 00:24:09.619 "uuid": "d871b4c1-e580-4947-93e5-150bec7c8b0b", 00:24:09.620 "is_configured": true, 00:24:09.620 "data_offset": 2048, 00:24:09.620 "data_size": 63488 00:24:09.620 } 00:24:09.620 ] 00:24:09.620 }' 00:24:09.620 00:50:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:09.620 00:50:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:10.559 00:50:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:24:10.559 00:50:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:10.559 00:50:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.559 00:50:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:10.559 00:50:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:10.559 00:50:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:10.559 00:50:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:10.559 [2024-07-25 00:50:33.194437] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:10.819 00:50:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:10.819 00:50:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:10.819 00:50:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.819 00:50:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:11.078 00:50:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:11.078 00:50:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:11.078 00:50:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:11.078 [2024-07-25 00:50:33.729321] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:11.336 00:50:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:11.336 00:50:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:11.336 00:50:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.336 00:50:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:11.595 00:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:11.595 00:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:11.595 00:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:11.852 [2024-07-25 00:50:34.260061] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:11.852 [2024-07-25 00:50:34.260114] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:24:11.852 00:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:11.852 00:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:11.852 00:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.852 00:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:24:12.111 00:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:24:12.111 00:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:24:12.111 00:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:24:12.111 00:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:24:12.111 00:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:12.111 00:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:12.111 BaseBdev2 00:24:12.369 00:50:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:24:12.369 00:50:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:24:12.369 00:50:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:12.369 00:50:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:12.369 00:50:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:12.369 00:50:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:12.369 00:50:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:12.369 00:50:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:12.628 [ 00:24:12.628 { 00:24:12.628 "name": "BaseBdev2", 00:24:12.628 "aliases": [ 00:24:12.628 "725f86f1-dfd8-4e94-8be5-5564aa17ae7e" 00:24:12.628 ], 00:24:12.628 "product_name": "Malloc disk", 00:24:12.628 "block_size": 512, 00:24:12.628 "num_blocks": 65536, 00:24:12.628 "uuid": "725f86f1-dfd8-4e94-8be5-5564aa17ae7e", 00:24:12.628 "assigned_rate_limits": { 00:24:12.628 "rw_ios_per_sec": 0, 00:24:12.628 "rw_mbytes_per_sec": 0, 00:24:12.628 "r_mbytes_per_sec": 0, 00:24:12.628 "w_mbytes_per_sec": 0 00:24:12.628 }, 00:24:12.628 "claimed": false, 00:24:12.628 "zoned": false, 00:24:12.628 "supported_io_types": { 00:24:12.628 "read": true, 00:24:12.628 "write": true, 00:24:12.628 "unmap": true, 00:24:12.628 "flush": true, 00:24:12.628 "reset": true, 00:24:12.628 "nvme_admin": false, 00:24:12.628 "nvme_io": false, 00:24:12.628 "nvme_io_md": false, 00:24:12.628 "write_zeroes": true, 00:24:12.628 "zcopy": true, 00:24:12.628 "get_zone_info": false, 00:24:12.628 "zone_management": false, 00:24:12.628 "zone_append": false, 00:24:12.628 "compare": false, 00:24:12.628 "compare_and_write": false, 00:24:12.628 "abort": true, 00:24:12.628 "seek_hole": false, 00:24:12.628 "seek_data": false, 00:24:12.628 "copy": true, 00:24:12.628 "nvme_iov_md": false 00:24:12.628 }, 00:24:12.628 "memory_domains": [ 00:24:12.628 { 00:24:12.628 "dma_device_id": "system", 00:24:12.628 "dma_device_type": 1 00:24:12.628 }, 00:24:12.628 { 00:24:12.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:12.628 "dma_device_type": 2 00:24:12.628 } 00:24:12.628 ], 00:24:12.628 "driver_specific": {} 00:24:12.628 } 00:24:12.628 ] 00:24:12.628 00:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:12.628 00:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:12.628 00:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:12.628 00:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:12.886 BaseBdev3 00:24:12.886 00:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:24:12.886 00:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:24:12.886 00:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # 
local bdev_timeout= 00:24:12.886 00:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:12.886 00:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:12.886 00:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:12.886 00:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:13.143 00:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:13.143 [ 00:24:13.143 { 00:24:13.143 "name": "BaseBdev3", 00:24:13.143 "aliases": [ 00:24:13.143 "c0e0ad1d-00fa-451b-aaf1-12ca69936d39" 00:24:13.143 ], 00:24:13.143 "product_name": "Malloc disk", 00:24:13.143 "block_size": 512, 00:24:13.143 "num_blocks": 65536, 00:24:13.143 "uuid": "c0e0ad1d-00fa-451b-aaf1-12ca69936d39", 00:24:13.143 "assigned_rate_limits": { 00:24:13.143 "rw_ios_per_sec": 0, 00:24:13.143 "rw_mbytes_per_sec": 0, 00:24:13.143 "r_mbytes_per_sec": 0, 00:24:13.143 "w_mbytes_per_sec": 0 00:24:13.143 }, 00:24:13.143 "claimed": false, 00:24:13.143 "zoned": false, 00:24:13.143 "supported_io_types": { 00:24:13.143 "read": true, 00:24:13.143 "write": true, 00:24:13.143 "unmap": true, 00:24:13.143 "flush": true, 00:24:13.143 "reset": true, 00:24:13.143 "nvme_admin": false, 00:24:13.143 "nvme_io": false, 00:24:13.143 "nvme_io_md": false, 00:24:13.143 "write_zeroes": true, 00:24:13.143 "zcopy": true, 00:24:13.143 "get_zone_info": false, 00:24:13.143 "zone_management": false, 00:24:13.143 "zone_append": false, 00:24:13.143 "compare": false, 00:24:13.143 "compare_and_write": false, 00:24:13.143 "abort": true, 00:24:13.143 "seek_hole": false, 00:24:13.143 "seek_data": false, 00:24:13.143 "copy": true, 00:24:13.143 "nvme_iov_md": false 00:24:13.143 }, 00:24:13.143 "memory_domains": [ 00:24:13.143 { 00:24:13.143 "dma_device_id": "system", 00:24:13.143 "dma_device_type": 1 00:24:13.143 }, 00:24:13.143 { 00:24:13.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:13.143 "dma_device_type": 2 00:24:13.143 } 00:24:13.143 ], 00:24:13.143 "driver_specific": {} 00:24:13.143 } 00:24:13.143 ] 00:24:13.143 00:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:13.143 00:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:13.143 00:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:13.143 00:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:13.401 BaseBdev4 00:24:13.401 00:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:24:13.401 00:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:24:13.401 00:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:13.401 00:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:13.401 00:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:13.401 00:50:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:13.401 00:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:13.660 00:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:13.919 [ 00:24:13.919 { 00:24:13.919 "name": "BaseBdev4", 00:24:13.919 "aliases": [ 00:24:13.919 "e7c70893-ae9f-47ba-a382-0a7c15e391c8" 00:24:13.919 ], 00:24:13.919 "product_name": "Malloc disk", 00:24:13.919 "block_size": 512, 00:24:13.919 "num_blocks": 65536, 00:24:13.919 "uuid": "e7c70893-ae9f-47ba-a382-0a7c15e391c8", 00:24:13.919 "assigned_rate_limits": { 00:24:13.919 "rw_ios_per_sec": 0, 00:24:13.919 "rw_mbytes_per_sec": 0, 00:24:13.919 "r_mbytes_per_sec": 0, 00:24:13.919 "w_mbytes_per_sec": 0 00:24:13.919 }, 00:24:13.919 "claimed": false, 00:24:13.919 "zoned": false, 00:24:13.919 "supported_io_types": { 00:24:13.919 "read": true, 00:24:13.919 "write": true, 00:24:13.919 "unmap": true, 00:24:13.919 "flush": true, 00:24:13.919 "reset": true, 00:24:13.919 "nvme_admin": false, 00:24:13.919 "nvme_io": false, 00:24:13.919 "nvme_io_md": false, 00:24:13.919 "write_zeroes": true, 00:24:13.919 "zcopy": true, 00:24:13.919 "get_zone_info": false, 00:24:13.919 "zone_management": false, 00:24:13.919 "zone_append": false, 00:24:13.919 "compare": false, 00:24:13.919 "compare_and_write": false, 00:24:13.919 "abort": true, 00:24:13.919 "seek_hole": false, 00:24:13.919 "seek_data": false, 00:24:13.919 "copy": true, 00:24:13.919 "nvme_iov_md": false 00:24:13.919 }, 00:24:13.919 "memory_domains": [ 00:24:13.919 { 00:24:13.919 "dma_device_id": "system", 00:24:13.919 "dma_device_type": 1 00:24:13.919 }, 00:24:13.919 { 00:24:13.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:13.919 "dma_device_type": 2 00:24:13.919 } 00:24:13.919 ], 00:24:13.919 "driver_specific": {} 00:24:13.919 } 00:24:13.919 ] 00:24:13.919 00:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:13.920 00:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:13.920 00:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:13.920 00:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:14.178 [2024-07-25 00:50:36.684783] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:14.178 [2024-07-25 00:50:36.684849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:14.178 [2024-07-25 00:50:36.684869] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:14.178 [2024-07-25 00:50:36.686814] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:14.178 [2024-07-25 00:50:36.686890] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:14.178 00:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:14.178 00:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
00:24:14.178 00:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:14.178 00:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:14.178 00:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:14.178 00:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:14.178 00:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:14.178 00:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:14.178 00:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:14.178 00:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:14.178 00:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.178 00:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:14.437 00:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:14.437 "name": "Existed_Raid", 00:24:14.437 "uuid": "f51f93d7-5f80-426f-84db-9a78c1e979ef", 00:24:14.437 "strip_size_kb": 64, 00:24:14.437 "state": "configuring", 00:24:14.437 "raid_level": "raid0", 00:24:14.437 "superblock": true, 00:24:14.437 "num_base_bdevs": 4, 00:24:14.437 "num_base_bdevs_discovered": 3, 00:24:14.437 "num_base_bdevs_operational": 4, 00:24:14.437 "base_bdevs_list": [ 00:24:14.437 { 00:24:14.437 "name": "BaseBdev1", 00:24:14.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.437 "is_configured": false, 00:24:14.437 "data_offset": 0, 00:24:14.437 "data_size": 0 00:24:14.437 }, 00:24:14.437 { 00:24:14.437 "name": "BaseBdev2", 00:24:14.437 "uuid": "725f86f1-dfd8-4e94-8be5-5564aa17ae7e", 00:24:14.437 "is_configured": true, 00:24:14.437 "data_offset": 2048, 00:24:14.437 "data_size": 63488 00:24:14.437 }, 00:24:14.437 { 00:24:14.437 "name": "BaseBdev3", 00:24:14.437 "uuid": "c0e0ad1d-00fa-451b-aaf1-12ca69936d39", 00:24:14.437 "is_configured": true, 00:24:14.437 "data_offset": 2048, 00:24:14.437 "data_size": 63488 00:24:14.437 }, 00:24:14.437 { 00:24:14.437 "name": "BaseBdev4", 00:24:14.437 "uuid": "e7c70893-ae9f-47ba-a382-0a7c15e391c8", 00:24:14.437 "is_configured": true, 00:24:14.437 "data_offset": 2048, 00:24:14.437 "data_size": 63488 00:24:14.437 } 00:24:14.437 ] 00:24:14.437 }' 00:24:14.437 00:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:14.437 00:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:15.004 00:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:15.004 [2024-07-25 00:50:37.576902] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:15.004 00:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:15.004 00:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:15.004 00:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # 
local expected_state=configuring 00:24:15.004 00:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:15.004 00:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:15.004 00:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:15.004 00:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:15.004 00:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:15.004 00:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:15.004 00:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:15.004 00:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.004 00:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:15.263 00:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:15.263 "name": "Existed_Raid", 00:24:15.263 "uuid": "f51f93d7-5f80-426f-84db-9a78c1e979ef", 00:24:15.263 "strip_size_kb": 64, 00:24:15.263 "state": "configuring", 00:24:15.263 "raid_level": "raid0", 00:24:15.263 "superblock": true, 00:24:15.263 "num_base_bdevs": 4, 00:24:15.263 "num_base_bdevs_discovered": 2, 00:24:15.263 "num_base_bdevs_operational": 4, 00:24:15.263 "base_bdevs_list": [ 00:24:15.263 { 00:24:15.263 "name": "BaseBdev1", 00:24:15.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:15.263 "is_configured": false, 00:24:15.263 "data_offset": 0, 00:24:15.263 "data_size": 0 00:24:15.263 }, 00:24:15.263 { 00:24:15.263 "name": null, 00:24:15.263 "uuid": "725f86f1-dfd8-4e94-8be5-5564aa17ae7e", 00:24:15.263 "is_configured": false, 00:24:15.263 "data_offset": 2048, 00:24:15.263 "data_size": 63488 00:24:15.263 }, 00:24:15.263 { 00:24:15.263 "name": "BaseBdev3", 00:24:15.263 "uuid": "c0e0ad1d-00fa-451b-aaf1-12ca69936d39", 00:24:15.263 "is_configured": true, 00:24:15.263 "data_offset": 2048, 00:24:15.263 "data_size": 63488 00:24:15.263 }, 00:24:15.263 { 00:24:15.263 "name": "BaseBdev4", 00:24:15.263 "uuid": "e7c70893-ae9f-47ba-a382-0a7c15e391c8", 00:24:15.263 "is_configured": true, 00:24:15.263 "data_offset": 2048, 00:24:15.263 "data_size": 63488 00:24:15.263 } 00:24:15.263 ] 00:24:15.263 }' 00:24:15.263 00:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:15.263 00:50:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:15.831 00:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.832 00:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:16.090 00:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:24:16.090 00:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:16.349 [2024-07-25 00:50:38.787640] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
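The state checks traced above all follow the same pattern: dump every RAID bdev over the test RPC socket, pick out Existed_Raid with jq, and compare the reported state and base-bdev counts against the expected values. Below is a condensed sketch of that pattern; it reuses the rpc.py path, socket, and fields visible in this log, but the helper name and its argument handling are illustrative rather than the verbatim bdev_raid.sh source.

    # Sketch of the state check seen in the trace above (illustrative, not the
    # exact test helper). rpc.py path and RAID socket are taken from the log.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    check_raid_state() {
        local name=$1 expected_state=$2 expected_discovered=$3
        local info
        # Dump all RAID bdevs and keep only the one under test.
        info=$($RPC bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
        # Compare the fields the traced test asserts on.
        [[ $(jq -r '.state' <<< "$info") == "$expected_state" ]] || return 1
        [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") -eq $expected_discovered ]] || return 1
    }

    # e.g. after removing BaseBdev2 the array should still be "configuring"
    # with 2 of 4 base bdevs discovered, as the JSON above shows:
    check_raid_state Existed_Raid configuring 2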
00:24:16.349 BaseBdev1 00:24:16.349 00:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:24:16.349 00:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:24:16.349 00:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:16.349 00:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:16.349 00:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:16.349 00:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:16.349 00:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:16.349 00:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:16.607 [ 00:24:16.607 { 00:24:16.607 "name": "BaseBdev1", 00:24:16.607 "aliases": [ 00:24:16.607 "0389f999-a941-4d4c-bbaa-3bbd2b9aff03" 00:24:16.607 ], 00:24:16.607 "product_name": "Malloc disk", 00:24:16.607 "block_size": 512, 00:24:16.607 "num_blocks": 65536, 00:24:16.607 "uuid": "0389f999-a941-4d4c-bbaa-3bbd2b9aff03", 00:24:16.607 "assigned_rate_limits": { 00:24:16.607 "rw_ios_per_sec": 0, 00:24:16.607 "rw_mbytes_per_sec": 0, 00:24:16.607 "r_mbytes_per_sec": 0, 00:24:16.607 "w_mbytes_per_sec": 0 00:24:16.607 }, 00:24:16.607 "claimed": true, 00:24:16.607 "claim_type": "exclusive_write", 00:24:16.607 "zoned": false, 00:24:16.607 "supported_io_types": { 00:24:16.607 "read": true, 00:24:16.607 "write": true, 00:24:16.607 "unmap": true, 00:24:16.607 "flush": true, 00:24:16.607 "reset": true, 00:24:16.607 "nvme_admin": false, 00:24:16.607 "nvme_io": false, 00:24:16.607 "nvme_io_md": false, 00:24:16.607 "write_zeroes": true, 00:24:16.607 "zcopy": true, 00:24:16.607 "get_zone_info": false, 00:24:16.607 "zone_management": false, 00:24:16.607 "zone_append": false, 00:24:16.607 "compare": false, 00:24:16.607 "compare_and_write": false, 00:24:16.607 "abort": true, 00:24:16.607 "seek_hole": false, 00:24:16.607 "seek_data": false, 00:24:16.607 "copy": true, 00:24:16.607 "nvme_iov_md": false 00:24:16.607 }, 00:24:16.607 "memory_domains": [ 00:24:16.607 { 00:24:16.607 "dma_device_id": "system", 00:24:16.607 "dma_device_type": 1 00:24:16.607 }, 00:24:16.607 { 00:24:16.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:16.607 "dma_device_type": 2 00:24:16.607 } 00:24:16.607 ], 00:24:16.607 "driver_specific": {} 00:24:16.607 } 00:24:16.607 ] 00:24:16.608 00:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:16.608 00:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:16.608 00:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:16.608 00:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:16.608 00:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:16.608 00:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:16.608 00:50:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:16.608 00:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:16.608 00:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:16.608 00:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:16.608 00:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:16.608 00:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.608 00:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:16.866 00:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:16.866 "name": "Existed_Raid", 00:24:16.866 "uuid": "f51f93d7-5f80-426f-84db-9a78c1e979ef", 00:24:16.866 "strip_size_kb": 64, 00:24:16.866 "state": "configuring", 00:24:16.866 "raid_level": "raid0", 00:24:16.866 "superblock": true, 00:24:16.866 "num_base_bdevs": 4, 00:24:16.866 "num_base_bdevs_discovered": 3, 00:24:16.866 "num_base_bdevs_operational": 4, 00:24:16.866 "base_bdevs_list": [ 00:24:16.866 { 00:24:16.866 "name": "BaseBdev1", 00:24:16.867 "uuid": "0389f999-a941-4d4c-bbaa-3bbd2b9aff03", 00:24:16.867 "is_configured": true, 00:24:16.867 "data_offset": 2048, 00:24:16.867 "data_size": 63488 00:24:16.867 }, 00:24:16.867 { 00:24:16.867 "name": null, 00:24:16.867 "uuid": "725f86f1-dfd8-4e94-8be5-5564aa17ae7e", 00:24:16.867 "is_configured": false, 00:24:16.867 "data_offset": 2048, 00:24:16.867 "data_size": 63488 00:24:16.867 }, 00:24:16.867 { 00:24:16.867 "name": "BaseBdev3", 00:24:16.867 "uuid": "c0e0ad1d-00fa-451b-aaf1-12ca69936d39", 00:24:16.867 "is_configured": true, 00:24:16.867 "data_offset": 2048, 00:24:16.867 "data_size": 63488 00:24:16.867 }, 00:24:16.867 { 00:24:16.867 "name": "BaseBdev4", 00:24:16.867 "uuid": "e7c70893-ae9f-47ba-a382-0a7c15e391c8", 00:24:16.867 "is_configured": true, 00:24:16.867 "data_offset": 2048, 00:24:16.867 "data_size": 63488 00:24:16.867 } 00:24:16.867 ] 00:24:16.867 }' 00:24:16.867 00:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:16.867 00:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:17.434 00:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:17.434 00:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.692 00:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:24:17.692 00:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:24:17.951 [2024-07-25 00:50:40.399956] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:17.951 00:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:17.951 00:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:17.951 00:50:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:17.951 00:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:17.951 00:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:17.951 00:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:17.951 00:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:17.951 00:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:17.951 00:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:17.951 00:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:17.951 00:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.951 00:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:17.951 00:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:17.951 "name": "Existed_Raid", 00:24:17.951 "uuid": "f51f93d7-5f80-426f-84db-9a78c1e979ef", 00:24:17.951 "strip_size_kb": 64, 00:24:17.951 "state": "configuring", 00:24:17.951 "raid_level": "raid0", 00:24:17.952 "superblock": true, 00:24:17.952 "num_base_bdevs": 4, 00:24:17.952 "num_base_bdevs_discovered": 2, 00:24:17.952 "num_base_bdevs_operational": 4, 00:24:17.952 "base_bdevs_list": [ 00:24:17.952 { 00:24:17.952 "name": "BaseBdev1", 00:24:17.952 "uuid": "0389f999-a941-4d4c-bbaa-3bbd2b9aff03", 00:24:17.952 "is_configured": true, 00:24:17.952 "data_offset": 2048, 00:24:17.952 "data_size": 63488 00:24:17.952 }, 00:24:17.952 { 00:24:17.952 "name": null, 00:24:17.952 "uuid": "725f86f1-dfd8-4e94-8be5-5564aa17ae7e", 00:24:17.952 "is_configured": false, 00:24:17.952 "data_offset": 2048, 00:24:17.952 "data_size": 63488 00:24:17.952 }, 00:24:17.952 { 00:24:17.952 "name": null, 00:24:17.952 "uuid": "c0e0ad1d-00fa-451b-aaf1-12ca69936d39", 00:24:17.952 "is_configured": false, 00:24:17.952 "data_offset": 2048, 00:24:17.952 "data_size": 63488 00:24:17.952 }, 00:24:17.952 { 00:24:17.952 "name": "BaseBdev4", 00:24:17.952 "uuid": "e7c70893-ae9f-47ba-a382-0a7c15e391c8", 00:24:17.952 "is_configured": true, 00:24:17.952 "data_offset": 2048, 00:24:17.952 "data_size": 63488 00:24:17.952 } 00:24:17.952 ] 00:24:17.952 }' 00:24:17.952 00:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:17.952 00:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:18.889 00:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:18.889 00:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.889 00:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:24:18.889 00:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:24:19.148 [2024-07-25 00:50:41.740226] bdev_raid.c:3288:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:24:19.148 00:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:19.148 00:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:19.148 00:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:19.148 00:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:19.148 00:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:19.148 00:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:19.148 00:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:19.148 00:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:19.148 00:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:19.148 00:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:19.148 00:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:19.148 00:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:19.407 00:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:19.407 "name": "Existed_Raid", 00:24:19.407 "uuid": "f51f93d7-5f80-426f-84db-9a78c1e979ef", 00:24:19.407 "strip_size_kb": 64, 00:24:19.407 "state": "configuring", 00:24:19.408 "raid_level": "raid0", 00:24:19.408 "superblock": true, 00:24:19.408 "num_base_bdevs": 4, 00:24:19.408 "num_base_bdevs_discovered": 3, 00:24:19.408 "num_base_bdevs_operational": 4, 00:24:19.408 "base_bdevs_list": [ 00:24:19.408 { 00:24:19.408 "name": "BaseBdev1", 00:24:19.408 "uuid": "0389f999-a941-4d4c-bbaa-3bbd2b9aff03", 00:24:19.408 "is_configured": true, 00:24:19.408 "data_offset": 2048, 00:24:19.408 "data_size": 63488 00:24:19.408 }, 00:24:19.408 { 00:24:19.408 "name": null, 00:24:19.408 "uuid": "725f86f1-dfd8-4e94-8be5-5564aa17ae7e", 00:24:19.408 "is_configured": false, 00:24:19.408 "data_offset": 2048, 00:24:19.408 "data_size": 63488 00:24:19.408 }, 00:24:19.408 { 00:24:19.408 "name": "BaseBdev3", 00:24:19.408 "uuid": "c0e0ad1d-00fa-451b-aaf1-12ca69936d39", 00:24:19.408 "is_configured": true, 00:24:19.408 "data_offset": 2048, 00:24:19.408 "data_size": 63488 00:24:19.408 }, 00:24:19.408 { 00:24:19.408 "name": "BaseBdev4", 00:24:19.408 "uuid": "e7c70893-ae9f-47ba-a382-0a7c15e391c8", 00:24:19.408 "is_configured": true, 00:24:19.408 "data_offset": 2048, 00:24:19.408 "data_size": 63488 00:24:19.408 } 00:24:19.408 ] 00:24:19.408 }' 00:24:19.408 00:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:19.408 00:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.976 00:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:19.976 00:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.246 00:50:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:24:20.246 00:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:20.514 [2024-07-25 00:50:43.132460] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:20.773 00:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:20.773 00:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:20.773 00:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:20.773 00:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:20.773 00:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:20.773 00:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:20.773 00:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:20.773 00:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:20.773 00:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:20.773 00:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:20.773 00:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.773 00:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:21.032 00:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:21.032 "name": "Existed_Raid", 00:24:21.032 "uuid": "f51f93d7-5f80-426f-84db-9a78c1e979ef", 00:24:21.032 "strip_size_kb": 64, 00:24:21.032 "state": "configuring", 00:24:21.032 "raid_level": "raid0", 00:24:21.032 "superblock": true, 00:24:21.032 "num_base_bdevs": 4, 00:24:21.032 "num_base_bdevs_discovered": 2, 00:24:21.032 "num_base_bdevs_operational": 4, 00:24:21.032 "base_bdevs_list": [ 00:24:21.032 { 00:24:21.032 "name": null, 00:24:21.032 "uuid": "0389f999-a941-4d4c-bbaa-3bbd2b9aff03", 00:24:21.032 "is_configured": false, 00:24:21.032 "data_offset": 2048, 00:24:21.032 "data_size": 63488 00:24:21.032 }, 00:24:21.032 { 00:24:21.032 "name": null, 00:24:21.032 "uuid": "725f86f1-dfd8-4e94-8be5-5564aa17ae7e", 00:24:21.032 "is_configured": false, 00:24:21.032 "data_offset": 2048, 00:24:21.032 "data_size": 63488 00:24:21.032 }, 00:24:21.032 { 00:24:21.032 "name": "BaseBdev3", 00:24:21.032 "uuid": "c0e0ad1d-00fa-451b-aaf1-12ca69936d39", 00:24:21.032 "is_configured": true, 00:24:21.032 "data_offset": 2048, 00:24:21.032 "data_size": 63488 00:24:21.032 }, 00:24:21.032 { 00:24:21.032 "name": "BaseBdev4", 00:24:21.032 "uuid": "e7c70893-ae9f-47ba-a382-0a7c15e391c8", 00:24:21.032 "is_configured": true, 00:24:21.032 "data_offset": 2048, 00:24:21.032 "data_size": 63488 00:24:21.032 } 00:24:21.032 ] 00:24:21.032 }' 00:24:21.032 00:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:21.032 00:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:21.600 
00:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.600 00:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:21.860 00:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:24:21.860 00:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:24:21.860 [2024-07-25 00:50:44.505570] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:22.119 00:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:22.119 00:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:22.119 00:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:22.119 00:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:22.119 00:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:22.119 00:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:22.119 00:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:22.119 00:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:22.119 00:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:22.119 00:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:22.119 00:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.119 00:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:22.119 00:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:22.119 "name": "Existed_Raid", 00:24:22.119 "uuid": "f51f93d7-5f80-426f-84db-9a78c1e979ef", 00:24:22.119 "strip_size_kb": 64, 00:24:22.119 "state": "configuring", 00:24:22.119 "raid_level": "raid0", 00:24:22.119 "superblock": true, 00:24:22.119 "num_base_bdevs": 4, 00:24:22.119 "num_base_bdevs_discovered": 3, 00:24:22.119 "num_base_bdevs_operational": 4, 00:24:22.119 "base_bdevs_list": [ 00:24:22.119 { 00:24:22.119 "name": null, 00:24:22.119 "uuid": "0389f999-a941-4d4c-bbaa-3bbd2b9aff03", 00:24:22.119 "is_configured": false, 00:24:22.119 "data_offset": 2048, 00:24:22.119 "data_size": 63488 00:24:22.119 }, 00:24:22.119 { 00:24:22.119 "name": "BaseBdev2", 00:24:22.119 "uuid": "725f86f1-dfd8-4e94-8be5-5564aa17ae7e", 00:24:22.119 "is_configured": true, 00:24:22.119 "data_offset": 2048, 00:24:22.119 "data_size": 63488 00:24:22.119 }, 00:24:22.119 { 00:24:22.119 "name": "BaseBdev3", 00:24:22.119 "uuid": "c0e0ad1d-00fa-451b-aaf1-12ca69936d39", 00:24:22.119 "is_configured": true, 00:24:22.119 "data_offset": 2048, 00:24:22.119 "data_size": 63488 00:24:22.119 }, 00:24:22.119 { 00:24:22.119 "name": "BaseBdev4", 00:24:22.119 "uuid": 
"e7c70893-ae9f-47ba-a382-0a7c15e391c8", 00:24:22.119 "is_configured": true, 00:24:22.119 "data_offset": 2048, 00:24:22.119 "data_size": 63488 00:24:22.119 } 00:24:22.119 ] 00:24:22.119 }' 00:24:22.119 00:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:22.119 00:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:22.688 00:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.688 00:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:22.947 00:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:24:22.947 00:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.947 00:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:24:23.206 00:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 0389f999-a941-4d4c-bbaa-3bbd2b9aff03 00:24:23.465 [2024-07-25 00:50:45.897153] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:24:23.465 [2024-07-25 00:50:45.897387] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:24:23.465 [2024-07-25 00:50:45.897400] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:23.465 [2024-07-25 00:50:45.897505] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:23.465 [2024-07-25 00:50:45.897813] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:24:23.465 [2024-07-25 00:50:45.897834] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009380 00:24:23.465 [2024-07-25 00:50:45.897980] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:23.465 NewBaseBdev 00:24:23.465 00:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:24:23.466 00:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:24:23.466 00:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:23.466 00:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:23.466 00:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:23.466 00:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:23.466 00:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:23.466 00:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:24:23.725 [ 00:24:23.725 { 00:24:23.725 "name": "NewBaseBdev", 00:24:23.725 "aliases": [ 00:24:23.725 "0389f999-a941-4d4c-bbaa-3bbd2b9aff03" 
00:24:23.725 ], 00:24:23.725 "product_name": "Malloc disk", 00:24:23.725 "block_size": 512, 00:24:23.725 "num_blocks": 65536, 00:24:23.725 "uuid": "0389f999-a941-4d4c-bbaa-3bbd2b9aff03", 00:24:23.725 "assigned_rate_limits": { 00:24:23.725 "rw_ios_per_sec": 0, 00:24:23.725 "rw_mbytes_per_sec": 0, 00:24:23.725 "r_mbytes_per_sec": 0, 00:24:23.725 "w_mbytes_per_sec": 0 00:24:23.725 }, 00:24:23.725 "claimed": true, 00:24:23.725 "claim_type": "exclusive_write", 00:24:23.725 "zoned": false, 00:24:23.725 "supported_io_types": { 00:24:23.725 "read": true, 00:24:23.725 "write": true, 00:24:23.725 "unmap": true, 00:24:23.725 "flush": true, 00:24:23.725 "reset": true, 00:24:23.725 "nvme_admin": false, 00:24:23.725 "nvme_io": false, 00:24:23.725 "nvme_io_md": false, 00:24:23.725 "write_zeroes": true, 00:24:23.725 "zcopy": true, 00:24:23.725 "get_zone_info": false, 00:24:23.725 "zone_management": false, 00:24:23.725 "zone_append": false, 00:24:23.725 "compare": false, 00:24:23.725 "compare_and_write": false, 00:24:23.725 "abort": true, 00:24:23.725 "seek_hole": false, 00:24:23.725 "seek_data": false, 00:24:23.725 "copy": true, 00:24:23.725 "nvme_iov_md": false 00:24:23.725 }, 00:24:23.725 "memory_domains": [ 00:24:23.725 { 00:24:23.725 "dma_device_id": "system", 00:24:23.725 "dma_device_type": 1 00:24:23.725 }, 00:24:23.725 { 00:24:23.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:23.725 "dma_device_type": 2 00:24:23.725 } 00:24:23.725 ], 00:24:23.725 "driver_specific": {} 00:24:23.725 } 00:24:23.725 ] 00:24:23.725 00:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:23.725 00:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:24:23.725 00:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:23.725 00:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:23.725 00:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:23.725 00:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:23.725 00:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:23.725 00:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:23.725 00:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:23.725 00:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:23.725 00:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:23.725 00:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.725 00:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:23.984 00:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:23.984 "name": "Existed_Raid", 00:24:23.984 "uuid": "f51f93d7-5f80-426f-84db-9a78c1e979ef", 00:24:23.984 "strip_size_kb": 64, 00:24:23.984 "state": "online", 00:24:23.984 "raid_level": "raid0", 00:24:23.984 "superblock": true, 00:24:23.984 "num_base_bdevs": 4, 00:24:23.984 "num_base_bdevs_discovered": 4, 
00:24:23.984 "num_base_bdevs_operational": 4, 00:24:23.984 "base_bdevs_list": [ 00:24:23.984 { 00:24:23.984 "name": "NewBaseBdev", 00:24:23.984 "uuid": "0389f999-a941-4d4c-bbaa-3bbd2b9aff03", 00:24:23.984 "is_configured": true, 00:24:23.984 "data_offset": 2048, 00:24:23.984 "data_size": 63488 00:24:23.984 }, 00:24:23.984 { 00:24:23.984 "name": "BaseBdev2", 00:24:23.984 "uuid": "725f86f1-dfd8-4e94-8be5-5564aa17ae7e", 00:24:23.984 "is_configured": true, 00:24:23.984 "data_offset": 2048, 00:24:23.984 "data_size": 63488 00:24:23.984 }, 00:24:23.984 { 00:24:23.984 "name": "BaseBdev3", 00:24:23.984 "uuid": "c0e0ad1d-00fa-451b-aaf1-12ca69936d39", 00:24:23.984 "is_configured": true, 00:24:23.984 "data_offset": 2048, 00:24:23.984 "data_size": 63488 00:24:23.984 }, 00:24:23.984 { 00:24:23.984 "name": "BaseBdev4", 00:24:23.984 "uuid": "e7c70893-ae9f-47ba-a382-0a7c15e391c8", 00:24:23.984 "is_configured": true, 00:24:23.984 "data_offset": 2048, 00:24:23.984 "data_size": 63488 00:24:23.984 } 00:24:23.984 ] 00:24:23.984 }' 00:24:23.984 00:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:23.984 00:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:24.553 00:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:24:24.553 00:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:24.553 00:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:24.553 00:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:24.553 00:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:24.553 00:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:24:24.553 00:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:24.553 00:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:24.813 [2024-07-25 00:50:47.313812] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:24.813 00:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:24.813 "name": "Existed_Raid", 00:24:24.813 "aliases": [ 00:24:24.813 "f51f93d7-5f80-426f-84db-9a78c1e979ef" 00:24:24.813 ], 00:24:24.813 "product_name": "Raid Volume", 00:24:24.813 "block_size": 512, 00:24:24.813 "num_blocks": 253952, 00:24:24.813 "uuid": "f51f93d7-5f80-426f-84db-9a78c1e979ef", 00:24:24.813 "assigned_rate_limits": { 00:24:24.813 "rw_ios_per_sec": 0, 00:24:24.813 "rw_mbytes_per_sec": 0, 00:24:24.813 "r_mbytes_per_sec": 0, 00:24:24.813 "w_mbytes_per_sec": 0 00:24:24.813 }, 00:24:24.813 "claimed": false, 00:24:24.813 "zoned": false, 00:24:24.813 "supported_io_types": { 00:24:24.813 "read": true, 00:24:24.813 "write": true, 00:24:24.813 "unmap": true, 00:24:24.813 "flush": true, 00:24:24.813 "reset": true, 00:24:24.813 "nvme_admin": false, 00:24:24.813 "nvme_io": false, 00:24:24.813 "nvme_io_md": false, 00:24:24.813 "write_zeroes": true, 00:24:24.813 "zcopy": false, 00:24:24.813 "get_zone_info": false, 00:24:24.813 "zone_management": false, 00:24:24.813 "zone_append": false, 00:24:24.813 "compare": false, 00:24:24.813 "compare_and_write": false, 00:24:24.813 "abort": false, 
00:24:24.813 "seek_hole": false, 00:24:24.813 "seek_data": false, 00:24:24.813 "copy": false, 00:24:24.813 "nvme_iov_md": false 00:24:24.813 }, 00:24:24.813 "memory_domains": [ 00:24:24.813 { 00:24:24.813 "dma_device_id": "system", 00:24:24.813 "dma_device_type": 1 00:24:24.813 }, 00:24:24.813 { 00:24:24.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:24.813 "dma_device_type": 2 00:24:24.813 }, 00:24:24.813 { 00:24:24.813 "dma_device_id": "system", 00:24:24.813 "dma_device_type": 1 00:24:24.813 }, 00:24:24.813 { 00:24:24.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:24.813 "dma_device_type": 2 00:24:24.813 }, 00:24:24.813 { 00:24:24.813 "dma_device_id": "system", 00:24:24.813 "dma_device_type": 1 00:24:24.813 }, 00:24:24.813 { 00:24:24.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:24.813 "dma_device_type": 2 00:24:24.813 }, 00:24:24.813 { 00:24:24.813 "dma_device_id": "system", 00:24:24.813 "dma_device_type": 1 00:24:24.813 }, 00:24:24.813 { 00:24:24.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:24.813 "dma_device_type": 2 00:24:24.813 } 00:24:24.813 ], 00:24:24.813 "driver_specific": { 00:24:24.813 "raid": { 00:24:24.813 "uuid": "f51f93d7-5f80-426f-84db-9a78c1e979ef", 00:24:24.813 "strip_size_kb": 64, 00:24:24.813 "state": "online", 00:24:24.813 "raid_level": "raid0", 00:24:24.813 "superblock": true, 00:24:24.813 "num_base_bdevs": 4, 00:24:24.813 "num_base_bdevs_discovered": 4, 00:24:24.813 "num_base_bdevs_operational": 4, 00:24:24.813 "base_bdevs_list": [ 00:24:24.813 { 00:24:24.813 "name": "NewBaseBdev", 00:24:24.813 "uuid": "0389f999-a941-4d4c-bbaa-3bbd2b9aff03", 00:24:24.813 "is_configured": true, 00:24:24.813 "data_offset": 2048, 00:24:24.813 "data_size": 63488 00:24:24.813 }, 00:24:24.813 { 00:24:24.813 "name": "BaseBdev2", 00:24:24.813 "uuid": "725f86f1-dfd8-4e94-8be5-5564aa17ae7e", 00:24:24.813 "is_configured": true, 00:24:24.813 "data_offset": 2048, 00:24:24.813 "data_size": 63488 00:24:24.813 }, 00:24:24.813 { 00:24:24.813 "name": "BaseBdev3", 00:24:24.813 "uuid": "c0e0ad1d-00fa-451b-aaf1-12ca69936d39", 00:24:24.813 "is_configured": true, 00:24:24.813 "data_offset": 2048, 00:24:24.813 "data_size": 63488 00:24:24.813 }, 00:24:24.813 { 00:24:24.813 "name": "BaseBdev4", 00:24:24.813 "uuid": "e7c70893-ae9f-47ba-a382-0a7c15e391c8", 00:24:24.813 "is_configured": true, 00:24:24.813 "data_offset": 2048, 00:24:24.813 "data_size": 63488 00:24:24.813 } 00:24:24.813 ] 00:24:24.813 } 00:24:24.813 } 00:24:24.813 }' 00:24:24.813 00:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:24.813 00:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:24:24.813 BaseBdev2 00:24:24.813 BaseBdev3 00:24:24.813 BaseBdev4' 00:24:24.813 00:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:24.813 00:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:24:24.813 00:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:25.073 00:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:25.073 "name": "NewBaseBdev", 00:24:25.073 "aliases": [ 00:24:25.073 "0389f999-a941-4d4c-bbaa-3bbd2b9aff03" 00:24:25.073 ], 00:24:25.073 "product_name": "Malloc disk", 00:24:25.073 
"block_size": 512, 00:24:25.073 "num_blocks": 65536, 00:24:25.073 "uuid": "0389f999-a941-4d4c-bbaa-3bbd2b9aff03", 00:24:25.073 "assigned_rate_limits": { 00:24:25.073 "rw_ios_per_sec": 0, 00:24:25.073 "rw_mbytes_per_sec": 0, 00:24:25.073 "r_mbytes_per_sec": 0, 00:24:25.073 "w_mbytes_per_sec": 0 00:24:25.073 }, 00:24:25.073 "claimed": true, 00:24:25.073 "claim_type": "exclusive_write", 00:24:25.073 "zoned": false, 00:24:25.073 "supported_io_types": { 00:24:25.073 "read": true, 00:24:25.073 "write": true, 00:24:25.073 "unmap": true, 00:24:25.073 "flush": true, 00:24:25.073 "reset": true, 00:24:25.073 "nvme_admin": false, 00:24:25.073 "nvme_io": false, 00:24:25.073 "nvme_io_md": false, 00:24:25.073 "write_zeroes": true, 00:24:25.073 "zcopy": true, 00:24:25.073 "get_zone_info": false, 00:24:25.073 "zone_management": false, 00:24:25.073 "zone_append": false, 00:24:25.073 "compare": false, 00:24:25.073 "compare_and_write": false, 00:24:25.073 "abort": true, 00:24:25.073 "seek_hole": false, 00:24:25.073 "seek_data": false, 00:24:25.073 "copy": true, 00:24:25.073 "nvme_iov_md": false 00:24:25.073 }, 00:24:25.073 "memory_domains": [ 00:24:25.073 { 00:24:25.073 "dma_device_id": "system", 00:24:25.073 "dma_device_type": 1 00:24:25.073 }, 00:24:25.073 { 00:24:25.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:25.073 "dma_device_type": 2 00:24:25.073 } 00:24:25.073 ], 00:24:25.073 "driver_specific": {} 00:24:25.073 }' 00:24:25.073 00:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:25.073 00:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:25.073 00:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:25.073 00:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:25.073 00:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:25.073 00:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:25.073 00:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:25.333 00:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:25.333 00:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:25.333 00:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:25.333 00:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:25.333 00:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:25.333 00:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:25.333 00:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:25.333 00:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:25.592 00:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:25.592 "name": "BaseBdev2", 00:24:25.592 "aliases": [ 00:24:25.592 "725f86f1-dfd8-4e94-8be5-5564aa17ae7e" 00:24:25.592 ], 00:24:25.592 "product_name": "Malloc disk", 00:24:25.592 "block_size": 512, 00:24:25.592 "num_blocks": 65536, 00:24:25.592 "uuid": "725f86f1-dfd8-4e94-8be5-5564aa17ae7e", 00:24:25.592 "assigned_rate_limits": { 
00:24:25.592 "rw_ios_per_sec": 0, 00:24:25.592 "rw_mbytes_per_sec": 0, 00:24:25.592 "r_mbytes_per_sec": 0, 00:24:25.592 "w_mbytes_per_sec": 0 00:24:25.592 }, 00:24:25.592 "claimed": true, 00:24:25.592 "claim_type": "exclusive_write", 00:24:25.592 "zoned": false, 00:24:25.592 "supported_io_types": { 00:24:25.592 "read": true, 00:24:25.592 "write": true, 00:24:25.592 "unmap": true, 00:24:25.592 "flush": true, 00:24:25.592 "reset": true, 00:24:25.592 "nvme_admin": false, 00:24:25.592 "nvme_io": false, 00:24:25.592 "nvme_io_md": false, 00:24:25.592 "write_zeroes": true, 00:24:25.592 "zcopy": true, 00:24:25.592 "get_zone_info": false, 00:24:25.592 "zone_management": false, 00:24:25.592 "zone_append": false, 00:24:25.592 "compare": false, 00:24:25.592 "compare_and_write": false, 00:24:25.592 "abort": true, 00:24:25.592 "seek_hole": false, 00:24:25.592 "seek_data": false, 00:24:25.592 "copy": true, 00:24:25.592 "nvme_iov_md": false 00:24:25.592 }, 00:24:25.592 "memory_domains": [ 00:24:25.592 { 00:24:25.592 "dma_device_id": "system", 00:24:25.592 "dma_device_type": 1 00:24:25.592 }, 00:24:25.592 { 00:24:25.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:25.592 "dma_device_type": 2 00:24:25.592 } 00:24:25.592 ], 00:24:25.592 "driver_specific": {} 00:24:25.592 }' 00:24:25.592 00:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:25.592 00:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:25.852 00:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:25.852 00:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:25.852 00:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:25.852 00:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:25.852 00:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:25.852 00:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:25.852 00:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:25.852 00:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:25.852 00:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:26.111 00:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:26.111 00:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:26.111 00:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:26.111 00:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:26.111 00:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:26.111 "name": "BaseBdev3", 00:24:26.111 "aliases": [ 00:24:26.111 "c0e0ad1d-00fa-451b-aaf1-12ca69936d39" 00:24:26.111 ], 00:24:26.111 "product_name": "Malloc disk", 00:24:26.111 "block_size": 512, 00:24:26.111 "num_blocks": 65536, 00:24:26.111 "uuid": "c0e0ad1d-00fa-451b-aaf1-12ca69936d39", 00:24:26.111 "assigned_rate_limits": { 00:24:26.111 "rw_ios_per_sec": 0, 00:24:26.111 "rw_mbytes_per_sec": 0, 00:24:26.111 "r_mbytes_per_sec": 0, 00:24:26.111 "w_mbytes_per_sec": 0 
00:24:26.111 }, 00:24:26.111 "claimed": true, 00:24:26.111 "claim_type": "exclusive_write", 00:24:26.111 "zoned": false, 00:24:26.111 "supported_io_types": { 00:24:26.111 "read": true, 00:24:26.111 "write": true, 00:24:26.111 "unmap": true, 00:24:26.111 "flush": true, 00:24:26.111 "reset": true, 00:24:26.111 "nvme_admin": false, 00:24:26.111 "nvme_io": false, 00:24:26.111 "nvme_io_md": false, 00:24:26.111 "write_zeroes": true, 00:24:26.111 "zcopy": true, 00:24:26.111 "get_zone_info": false, 00:24:26.111 "zone_management": false, 00:24:26.111 "zone_append": false, 00:24:26.111 "compare": false, 00:24:26.111 "compare_and_write": false, 00:24:26.111 "abort": true, 00:24:26.111 "seek_hole": false, 00:24:26.111 "seek_data": false, 00:24:26.111 "copy": true, 00:24:26.111 "nvme_iov_md": false 00:24:26.111 }, 00:24:26.111 "memory_domains": [ 00:24:26.111 { 00:24:26.111 "dma_device_id": "system", 00:24:26.111 "dma_device_type": 1 00:24:26.111 }, 00:24:26.111 { 00:24:26.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:26.111 "dma_device_type": 2 00:24:26.111 } 00:24:26.111 ], 00:24:26.111 "driver_specific": {} 00:24:26.111 }' 00:24:26.111 00:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:26.112 00:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:26.370 00:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:26.370 00:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:26.370 00:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:26.370 00:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:26.370 00:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:26.370 00:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:26.370 00:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:26.371 00:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:26.630 00:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:26.630 00:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:26.630 00:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:26.630 00:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:26.630 00:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:26.630 00:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:26.630 "name": "BaseBdev4", 00:24:26.630 "aliases": [ 00:24:26.630 "e7c70893-ae9f-47ba-a382-0a7c15e391c8" 00:24:26.630 ], 00:24:26.630 "product_name": "Malloc disk", 00:24:26.630 "block_size": 512, 00:24:26.630 "num_blocks": 65536, 00:24:26.630 "uuid": "e7c70893-ae9f-47ba-a382-0a7c15e391c8", 00:24:26.630 "assigned_rate_limits": { 00:24:26.630 "rw_ios_per_sec": 0, 00:24:26.630 "rw_mbytes_per_sec": 0, 00:24:26.630 "r_mbytes_per_sec": 0, 00:24:26.630 "w_mbytes_per_sec": 0 00:24:26.630 }, 00:24:26.630 "claimed": true, 00:24:26.630 "claim_type": "exclusive_write", 00:24:26.630 "zoned": false, 00:24:26.630 
"supported_io_types": { 00:24:26.630 "read": true, 00:24:26.630 "write": true, 00:24:26.630 "unmap": true, 00:24:26.630 "flush": true, 00:24:26.630 "reset": true, 00:24:26.630 "nvme_admin": false, 00:24:26.630 "nvme_io": false, 00:24:26.630 "nvme_io_md": false, 00:24:26.630 "write_zeroes": true, 00:24:26.630 "zcopy": true, 00:24:26.630 "get_zone_info": false, 00:24:26.630 "zone_management": false, 00:24:26.630 "zone_append": false, 00:24:26.630 "compare": false, 00:24:26.630 "compare_and_write": false, 00:24:26.630 "abort": true, 00:24:26.630 "seek_hole": false, 00:24:26.630 "seek_data": false, 00:24:26.630 "copy": true, 00:24:26.630 "nvme_iov_md": false 00:24:26.630 }, 00:24:26.630 "memory_domains": [ 00:24:26.630 { 00:24:26.630 "dma_device_id": "system", 00:24:26.630 "dma_device_type": 1 00:24:26.630 }, 00:24:26.630 { 00:24:26.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:26.630 "dma_device_type": 2 00:24:26.630 } 00:24:26.630 ], 00:24:26.630 "driver_specific": {} 00:24:26.630 }' 00:24:26.630 00:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:26.889 00:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:26.889 00:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:26.889 00:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:26.889 00:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:26.889 00:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:26.889 00:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:26.889 00:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:27.148 00:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:27.148 00:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:27.148 00:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:27.148 00:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:27.148 00:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:27.408 [2024-07-25 00:50:49.818290] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:27.408 [2024-07-25 00:50:49.818442] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:27.408 [2024-07-25 00:50:49.818609] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:27.408 [2024-07-25 00:50:49.818719] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:27.408 [2024-07-25 00:50:49.818903] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name Existed_Raid, state offline 00:24:27.408 00:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 136027 00:24:27.408 00:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 136027 ']' 00:24:27.408 00:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 136027 00:24:27.408 00:50:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@953 -- # uname 00:24:27.408 00:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:27.408 00:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 136027 00:24:27.408 00:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:27.408 00:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:27.408 00:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 136027' 00:24:27.408 killing process with pid 136027 00:24:27.408 00:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 136027 00:24:27.408 [2024-07-25 00:50:49.868017] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:27.408 00:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 136027 00:24:27.977 [2024-07-25 00:50:50.328003] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:29.357 00:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:24:29.357 00:24:29.357 real 0m32.185s 00:24:29.357 user 0m57.596s 00:24:29.357 sys 0m5.088s 00:24:29.357 00:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:29.357 00:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.357 ************************************ 00:24:29.357 END TEST raid_state_function_test_sb 00:24:29.357 ************************************ 00:24:29.357 00:50:51 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:24:29.357 00:50:51 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:24:29.357 00:50:51 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:29.357 00:50:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:29.357 ************************************ 00:24:29.357 START TEST raid_superblock_test 00:24:29.357 ************************************ 00:24:29.357 00:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 4 00:24:29.357 00:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:24:29.357 00:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:24:29.357 00:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:24:29.357 00:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:24:29.357 00:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:24:29.357 00:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:24:29.357 00:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:24:29.357 00:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:24:29.357 00:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:24:29.357 00:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:24:29.357 00:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:24:29.357 00:50:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:24:29.357 00:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:24:29.357 00:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:24:29.357 00:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:24:29.357 00:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:24:29.357 00:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=137102 00:24:29.357 00:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 137102 /var/tmp/spdk-raid.sock 00:24:29.357 00:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 137102 ']' 00:24:29.357 00:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:29.357 00:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:24:29.357 00:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:29.357 00:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:29.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:29.357 00:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:29.357 00:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:29.357 [2024-07-25 00:50:51.992334] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
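The superblock test starting here prepares its base devices the same way for each of the four legs: a 32 MB malloc bdev with 512-byte blocks is created over the RAID-test RPC socket, then wrapped in a passthru bdev with a fixed UUID so the RAID superblock can identify it later. The sketch below is reconstructed from the bdev_malloc_create and bdev_passthru_create calls traced in this log; the loop structure and variable names are illustrative, not the exact test source.

    # Illustrative reconstruction of the per-leg base-bdev setup traced here.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    for i in 1 2 3 4; do
        malloc=malloc$i
        pt=pt$i
        pt_uuid=$(printf '00000000-0000-0000-0000-%012d' "$i")
        # 32 MB backing store with 512-byte blocks, as in the traced
        # "bdev_malloc_create 32 512 -b malloc1" call.
        $RPC bdev_malloc_create 32 512 -b "$malloc"
        # Passthru layer with a fixed UUID (…0001, …0002, …) matching the log.
        $RPC bdev_passthru_create -b "$malloc" -p "$pt" -u "$pt_uuid"
    done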
00:24:29.357 [2024-07-25 00:50:51.992790] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137102 ] 00:24:29.616 [2024-07-25 00:50:52.176945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.876 [2024-07-25 00:50:52.368219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.135 [2024-07-25 00:50:52.566865] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:30.394 00:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:30.394 00:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:24:30.394 00:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:24:30.394 00:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:30.394 00:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:24:30.394 00:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:24:30.394 00:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:30.394 00:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:30.394 00:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:24:30.394 00:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:30.394 00:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:24:30.653 malloc1 00:24:30.653 00:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:30.653 [2024-07-25 00:50:53.259540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:30.653 [2024-07-25 00:50:53.259818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:30.653 [2024-07-25 00:50:53.259897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:24:30.653 [2024-07-25 00:50:53.260005] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:30.653 [2024-07-25 00:50:53.262437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:30.653 [2024-07-25 00:50:53.262616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:30.653 pt1 00:24:30.653 00:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:24:30.653 00:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:30.653 00:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:24:30.653 00:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:24:30.653 00:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:30.653 00:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:24:30.653 00:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:24:30.653 00:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:30.653 00:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:24:31.235 malloc2 00:24:31.235 00:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:31.235 [2024-07-25 00:50:53.781716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:31.235 [2024-07-25 00:50:53.782031] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:31.235 [2024-07-25 00:50:53.782107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:24:31.235 [2024-07-25 00:50:53.782210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:31.235 [2024-07-25 00:50:53.784901] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:31.235 [2024-07-25 00:50:53.785058] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:31.235 pt2 00:24:31.235 00:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:24:31.235 00:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:31.235 00:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:24:31.235 00:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:24:31.235 00:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:24:31.235 00:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:31.235 00:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:24:31.235 00:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:31.235 00:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:24:31.493 malloc3 00:24:31.493 00:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:31.751 [2024-07-25 00:50:54.157673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:31.751 [2024-07-25 00:50:54.157904] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:31.751 [2024-07-25 00:50:54.157972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:31.751 [2024-07-25 00:50:54.158073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:31.751 [2024-07-25 00:50:54.160700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:31.751 [2024-07-25 00:50:54.160856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:31.751 pt3 00:24:31.751 
00:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:24:31.751 00:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:31.751 00:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:24:31.751 00:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:24:31.751 00:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:24:31.751 00:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:31.751 00:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:24:31.751 00:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:31.751 00:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:24:31.751 malloc4 00:24:31.751 00:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:32.013 [2024-07-25 00:50:54.622129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:32.013 [2024-07-25 00:50:54.622390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:32.013 [2024-07-25 00:50:54.622460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:32.013 [2024-07-25 00:50:54.622555] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:32.013 [2024-07-25 00:50:54.625165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:32.013 [2024-07-25 00:50:54.625327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:32.013 pt4 00:24:32.013 00:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:24:32.013 00:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:32.013 00:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:24:32.298 [2024-07-25 00:50:54.794172] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:32.298 [2024-07-25 00:50:54.796551] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:32.298 [2024-07-25 00:50:54.796742] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:32.298 [2024-07-25 00:50:54.796855] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:32.298 [2024-07-25 00:50:54.797156] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:24:32.298 [2024-07-25 00:50:54.797266] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:32.298 [2024-07-25 00:50:54.797473] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:32.298 [2024-07-25 00:50:54.797851] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:24:32.298 [2024-07-25 00:50:54.797890] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:24:32.298 [2024-07-25 00:50:54.798158] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:32.298 00:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:24:32.298 00:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:32.298 00:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:32.298 00:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:32.298 00:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:32.298 00:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:32.298 00:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:32.298 00:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:32.298 00:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:32.298 00:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:32.298 00:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.298 00:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.558 00:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:32.558 "name": "raid_bdev1", 00:24:32.558 "uuid": "5b26e049-ba62-4b15-ad32-4d1541d81551", 00:24:32.558 "strip_size_kb": 64, 00:24:32.558 "state": "online", 00:24:32.558 "raid_level": "raid0", 00:24:32.558 "superblock": true, 00:24:32.558 "num_base_bdevs": 4, 00:24:32.558 "num_base_bdevs_discovered": 4, 00:24:32.558 "num_base_bdevs_operational": 4, 00:24:32.558 "base_bdevs_list": [ 00:24:32.558 { 00:24:32.558 "name": "pt1", 00:24:32.558 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:32.558 "is_configured": true, 00:24:32.558 "data_offset": 2048, 00:24:32.558 "data_size": 63488 00:24:32.558 }, 00:24:32.558 { 00:24:32.558 "name": "pt2", 00:24:32.558 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:32.558 "is_configured": true, 00:24:32.558 "data_offset": 2048, 00:24:32.558 "data_size": 63488 00:24:32.559 }, 00:24:32.559 { 00:24:32.559 "name": "pt3", 00:24:32.559 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:32.559 "is_configured": true, 00:24:32.559 "data_offset": 2048, 00:24:32.559 "data_size": 63488 00:24:32.559 }, 00:24:32.559 { 00:24:32.559 "name": "pt4", 00:24:32.559 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:32.559 "is_configured": true, 00:24:32.559 "data_offset": 2048, 00:24:32.559 "data_size": 63488 00:24:32.559 } 00:24:32.559 ] 00:24:32.559 }' 00:24:32.559 00:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:32.559 00:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:33.126 00:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:24:33.126 00:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:24:33.126 00:50:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:33.126 00:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:33.126 00:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:33.126 00:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:33.126 00:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:33.126 00:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:33.126 [2024-07-25 00:50:55.658650] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:33.126 00:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:33.126 "name": "raid_bdev1", 00:24:33.126 "aliases": [ 00:24:33.126 "5b26e049-ba62-4b15-ad32-4d1541d81551" 00:24:33.126 ], 00:24:33.126 "product_name": "Raid Volume", 00:24:33.126 "block_size": 512, 00:24:33.126 "num_blocks": 253952, 00:24:33.126 "uuid": "5b26e049-ba62-4b15-ad32-4d1541d81551", 00:24:33.126 "assigned_rate_limits": { 00:24:33.126 "rw_ios_per_sec": 0, 00:24:33.126 "rw_mbytes_per_sec": 0, 00:24:33.126 "r_mbytes_per_sec": 0, 00:24:33.126 "w_mbytes_per_sec": 0 00:24:33.126 }, 00:24:33.126 "claimed": false, 00:24:33.126 "zoned": false, 00:24:33.126 "supported_io_types": { 00:24:33.126 "read": true, 00:24:33.126 "write": true, 00:24:33.126 "unmap": true, 00:24:33.126 "flush": true, 00:24:33.126 "reset": true, 00:24:33.126 "nvme_admin": false, 00:24:33.126 "nvme_io": false, 00:24:33.126 "nvme_io_md": false, 00:24:33.126 "write_zeroes": true, 00:24:33.126 "zcopy": false, 00:24:33.126 "get_zone_info": false, 00:24:33.126 "zone_management": false, 00:24:33.126 "zone_append": false, 00:24:33.126 "compare": false, 00:24:33.126 "compare_and_write": false, 00:24:33.126 "abort": false, 00:24:33.126 "seek_hole": false, 00:24:33.126 "seek_data": false, 00:24:33.126 "copy": false, 00:24:33.126 "nvme_iov_md": false 00:24:33.126 }, 00:24:33.126 "memory_domains": [ 00:24:33.126 { 00:24:33.126 "dma_device_id": "system", 00:24:33.126 "dma_device_type": 1 00:24:33.126 }, 00:24:33.126 { 00:24:33.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.126 "dma_device_type": 2 00:24:33.126 }, 00:24:33.126 { 00:24:33.126 "dma_device_id": "system", 00:24:33.126 "dma_device_type": 1 00:24:33.126 }, 00:24:33.126 { 00:24:33.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.126 "dma_device_type": 2 00:24:33.126 }, 00:24:33.126 { 00:24:33.126 "dma_device_id": "system", 00:24:33.126 "dma_device_type": 1 00:24:33.126 }, 00:24:33.126 { 00:24:33.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.126 "dma_device_type": 2 00:24:33.126 }, 00:24:33.126 { 00:24:33.126 "dma_device_id": "system", 00:24:33.126 "dma_device_type": 1 00:24:33.126 }, 00:24:33.126 { 00:24:33.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.126 "dma_device_type": 2 00:24:33.126 } 00:24:33.126 ], 00:24:33.126 "driver_specific": { 00:24:33.126 "raid": { 00:24:33.126 "uuid": "5b26e049-ba62-4b15-ad32-4d1541d81551", 00:24:33.126 "strip_size_kb": 64, 00:24:33.126 "state": "online", 00:24:33.126 "raid_level": "raid0", 00:24:33.126 "superblock": true, 00:24:33.126 "num_base_bdevs": 4, 00:24:33.126 "num_base_bdevs_discovered": 4, 00:24:33.126 "num_base_bdevs_operational": 4, 00:24:33.126 "base_bdevs_list": [ 00:24:33.126 { 00:24:33.126 "name": "pt1", 00:24:33.126 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:24:33.126 "is_configured": true, 00:24:33.126 "data_offset": 2048, 00:24:33.126 "data_size": 63488 00:24:33.126 }, 00:24:33.126 { 00:24:33.126 "name": "pt2", 00:24:33.126 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:33.126 "is_configured": true, 00:24:33.126 "data_offset": 2048, 00:24:33.126 "data_size": 63488 00:24:33.126 }, 00:24:33.126 { 00:24:33.126 "name": "pt3", 00:24:33.126 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:33.126 "is_configured": true, 00:24:33.126 "data_offset": 2048, 00:24:33.126 "data_size": 63488 00:24:33.126 }, 00:24:33.126 { 00:24:33.126 "name": "pt4", 00:24:33.126 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:33.126 "is_configured": true, 00:24:33.126 "data_offset": 2048, 00:24:33.126 "data_size": 63488 00:24:33.126 } 00:24:33.126 ] 00:24:33.126 } 00:24:33.126 } 00:24:33.126 }' 00:24:33.126 00:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:33.126 00:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:24:33.127 pt2 00:24:33.127 pt3 00:24:33.127 pt4' 00:24:33.127 00:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:33.127 00:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:33.127 00:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:33.385 00:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:33.386 "name": "pt1", 00:24:33.386 "aliases": [ 00:24:33.386 "00000000-0000-0000-0000-000000000001" 00:24:33.386 ], 00:24:33.386 "product_name": "passthru", 00:24:33.386 "block_size": 512, 00:24:33.386 "num_blocks": 65536, 00:24:33.386 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:33.386 "assigned_rate_limits": { 00:24:33.386 "rw_ios_per_sec": 0, 00:24:33.386 "rw_mbytes_per_sec": 0, 00:24:33.386 "r_mbytes_per_sec": 0, 00:24:33.386 "w_mbytes_per_sec": 0 00:24:33.386 }, 00:24:33.386 "claimed": true, 00:24:33.386 "claim_type": "exclusive_write", 00:24:33.386 "zoned": false, 00:24:33.386 "supported_io_types": { 00:24:33.386 "read": true, 00:24:33.386 "write": true, 00:24:33.386 "unmap": true, 00:24:33.386 "flush": true, 00:24:33.386 "reset": true, 00:24:33.386 "nvme_admin": false, 00:24:33.386 "nvme_io": false, 00:24:33.386 "nvme_io_md": false, 00:24:33.386 "write_zeroes": true, 00:24:33.386 "zcopy": true, 00:24:33.386 "get_zone_info": false, 00:24:33.386 "zone_management": false, 00:24:33.386 "zone_append": false, 00:24:33.386 "compare": false, 00:24:33.386 "compare_and_write": false, 00:24:33.386 "abort": true, 00:24:33.386 "seek_hole": false, 00:24:33.386 "seek_data": false, 00:24:33.386 "copy": true, 00:24:33.386 "nvme_iov_md": false 00:24:33.386 }, 00:24:33.386 "memory_domains": [ 00:24:33.386 { 00:24:33.386 "dma_device_id": "system", 00:24:33.386 "dma_device_type": 1 00:24:33.386 }, 00:24:33.386 { 00:24:33.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.386 "dma_device_type": 2 00:24:33.386 } 00:24:33.386 ], 00:24:33.386 "driver_specific": { 00:24:33.386 "passthru": { 00:24:33.386 "name": "pt1", 00:24:33.386 "base_bdev_name": "malloc1" 00:24:33.386 } 00:24:33.386 } 00:24:33.386 }' 00:24:33.386 00:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:33.386 00:50:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:33.645 00:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:33.645 00:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:33.645 00:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:33.645 00:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:33.645 00:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:33.645 00:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:33.645 00:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:33.645 00:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:33.645 00:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:33.903 00:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:33.904 00:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:33.904 00:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:33.904 00:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:33.904 00:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:33.904 "name": "pt2", 00:24:33.904 "aliases": [ 00:24:33.904 "00000000-0000-0000-0000-000000000002" 00:24:33.904 ], 00:24:33.904 "product_name": "passthru", 00:24:33.904 "block_size": 512, 00:24:33.904 "num_blocks": 65536, 00:24:33.904 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:33.904 "assigned_rate_limits": { 00:24:33.904 "rw_ios_per_sec": 0, 00:24:33.904 "rw_mbytes_per_sec": 0, 00:24:33.904 "r_mbytes_per_sec": 0, 00:24:33.904 "w_mbytes_per_sec": 0 00:24:33.904 }, 00:24:33.904 "claimed": true, 00:24:33.904 "claim_type": "exclusive_write", 00:24:33.904 "zoned": false, 00:24:33.904 "supported_io_types": { 00:24:33.904 "read": true, 00:24:33.904 "write": true, 00:24:33.904 "unmap": true, 00:24:33.904 "flush": true, 00:24:33.904 "reset": true, 00:24:33.904 "nvme_admin": false, 00:24:33.904 "nvme_io": false, 00:24:33.904 "nvme_io_md": false, 00:24:33.904 "write_zeroes": true, 00:24:33.904 "zcopy": true, 00:24:33.904 "get_zone_info": false, 00:24:33.904 "zone_management": false, 00:24:33.904 "zone_append": false, 00:24:33.904 "compare": false, 00:24:33.904 "compare_and_write": false, 00:24:33.904 "abort": true, 00:24:33.904 "seek_hole": false, 00:24:33.904 "seek_data": false, 00:24:33.904 "copy": true, 00:24:33.904 "nvme_iov_md": false 00:24:33.904 }, 00:24:33.904 "memory_domains": [ 00:24:33.904 { 00:24:33.904 "dma_device_id": "system", 00:24:33.904 "dma_device_type": 1 00:24:33.904 }, 00:24:33.904 { 00:24:33.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.904 "dma_device_type": 2 00:24:33.904 } 00:24:33.904 ], 00:24:33.904 "driver_specific": { 00:24:33.904 "passthru": { 00:24:33.904 "name": "pt2", 00:24:33.904 "base_bdev_name": "malloc2" 00:24:33.904 } 00:24:33.904 } 00:24:33.904 }' 00:24:33.904 00:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:33.904 00:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:34.162 00:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 
-- # [[ 512 == 512 ]] 00:24:34.162 00:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:34.162 00:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:34.162 00:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:34.162 00:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:34.162 00:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:34.162 00:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:34.163 00:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:34.421 00:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:34.421 00:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:34.421 00:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:34.421 00:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:24:34.421 00:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:34.421 00:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:34.421 "name": "pt3", 00:24:34.421 "aliases": [ 00:24:34.421 "00000000-0000-0000-0000-000000000003" 00:24:34.421 ], 00:24:34.421 "product_name": "passthru", 00:24:34.421 "block_size": 512, 00:24:34.421 "num_blocks": 65536, 00:24:34.421 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:34.421 "assigned_rate_limits": { 00:24:34.421 "rw_ios_per_sec": 0, 00:24:34.421 "rw_mbytes_per_sec": 0, 00:24:34.421 "r_mbytes_per_sec": 0, 00:24:34.421 "w_mbytes_per_sec": 0 00:24:34.421 }, 00:24:34.421 "claimed": true, 00:24:34.421 "claim_type": "exclusive_write", 00:24:34.421 "zoned": false, 00:24:34.421 "supported_io_types": { 00:24:34.421 "read": true, 00:24:34.421 "write": true, 00:24:34.421 "unmap": true, 00:24:34.421 "flush": true, 00:24:34.421 "reset": true, 00:24:34.421 "nvme_admin": false, 00:24:34.421 "nvme_io": false, 00:24:34.421 "nvme_io_md": false, 00:24:34.421 "write_zeroes": true, 00:24:34.421 "zcopy": true, 00:24:34.421 "get_zone_info": false, 00:24:34.421 "zone_management": false, 00:24:34.421 "zone_append": false, 00:24:34.421 "compare": false, 00:24:34.421 "compare_and_write": false, 00:24:34.421 "abort": true, 00:24:34.421 "seek_hole": false, 00:24:34.421 "seek_data": false, 00:24:34.421 "copy": true, 00:24:34.421 "nvme_iov_md": false 00:24:34.421 }, 00:24:34.421 "memory_domains": [ 00:24:34.421 { 00:24:34.421 "dma_device_id": "system", 00:24:34.421 "dma_device_type": 1 00:24:34.421 }, 00:24:34.421 { 00:24:34.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:34.421 "dma_device_type": 2 00:24:34.421 } 00:24:34.421 ], 00:24:34.421 "driver_specific": { 00:24:34.421 "passthru": { 00:24:34.421 "name": "pt3", 00:24:34.421 "base_bdev_name": "malloc3" 00:24:34.421 } 00:24:34.421 } 00:24:34.421 }' 00:24:34.421 00:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:34.681 00:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:34.681 00:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:34.681 00:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:34.681 00:50:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:34.681 00:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:34.681 00:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:34.681 00:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:34.681 00:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:34.681 00:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:34.939 00:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:34.939 00:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:34.939 00:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:34.939 00:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:24:34.939 00:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:35.197 00:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:35.197 "name": "pt4", 00:24:35.197 "aliases": [ 00:24:35.197 "00000000-0000-0000-0000-000000000004" 00:24:35.197 ], 00:24:35.197 "product_name": "passthru", 00:24:35.197 "block_size": 512, 00:24:35.197 "num_blocks": 65536, 00:24:35.197 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:35.197 "assigned_rate_limits": { 00:24:35.197 "rw_ios_per_sec": 0, 00:24:35.197 "rw_mbytes_per_sec": 0, 00:24:35.197 "r_mbytes_per_sec": 0, 00:24:35.197 "w_mbytes_per_sec": 0 00:24:35.197 }, 00:24:35.197 "claimed": true, 00:24:35.197 "claim_type": "exclusive_write", 00:24:35.197 "zoned": false, 00:24:35.197 "supported_io_types": { 00:24:35.197 "read": true, 00:24:35.197 "write": true, 00:24:35.197 "unmap": true, 00:24:35.197 "flush": true, 00:24:35.197 "reset": true, 00:24:35.197 "nvme_admin": false, 00:24:35.197 "nvme_io": false, 00:24:35.197 "nvme_io_md": false, 00:24:35.198 "write_zeroes": true, 00:24:35.198 "zcopy": true, 00:24:35.198 "get_zone_info": false, 00:24:35.198 "zone_management": false, 00:24:35.198 "zone_append": false, 00:24:35.198 "compare": false, 00:24:35.198 "compare_and_write": false, 00:24:35.198 "abort": true, 00:24:35.198 "seek_hole": false, 00:24:35.198 "seek_data": false, 00:24:35.198 "copy": true, 00:24:35.198 "nvme_iov_md": false 00:24:35.198 }, 00:24:35.198 "memory_domains": [ 00:24:35.198 { 00:24:35.198 "dma_device_id": "system", 00:24:35.198 "dma_device_type": 1 00:24:35.198 }, 00:24:35.198 { 00:24:35.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.198 "dma_device_type": 2 00:24:35.198 } 00:24:35.198 ], 00:24:35.198 "driver_specific": { 00:24:35.198 "passthru": { 00:24:35.198 "name": "pt4", 00:24:35.198 "base_bdev_name": "malloc4" 00:24:35.198 } 00:24:35.198 } 00:24:35.198 }' 00:24:35.198 00:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:35.198 00:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:35.198 00:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:35.198 00:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:35.198 00:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:35.198 00:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- 
# [[ null == null ]] 00:24:35.198 00:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:35.198 00:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:35.456 00:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:35.456 00:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:35.456 00:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:35.456 00:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:35.456 00:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:35.456 00:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:24:35.714 [2024-07-25 00:50:58.167441] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:35.714 00:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=5b26e049-ba62-4b15-ad32-4d1541d81551 00:24:35.714 00:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 5b26e049-ba62-4b15-ad32-4d1541d81551 ']' 00:24:35.714 00:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:35.972 [2024-07-25 00:50:58.435245] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:35.972 [2024-07-25 00:50:58.435394] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:35.972 [2024-07-25 00:50:58.435607] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:35.972 [2024-07-25 00:50:58.435715] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:35.972 [2024-07-25 00:50:58.435887] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:24:35.972 00:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:24:35.972 00:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:35.972 00:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:24:35.972 00:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:24:35.972 00:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:35.972 00:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:36.230 00:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:36.230 00:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:36.489 00:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:36.489 00:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:36.489 00:50:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:36.489 00:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:36.748 00:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:24:36.748 00:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:37.006 00:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:24:37.006 00:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:37.006 00:50:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:24:37.006 00:50:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:37.006 00:50:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:37.006 00:50:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:37.006 00:50:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:37.006 00:50:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:37.006 00:50:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:37.006 00:50:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:37.006 00:50:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:37.006 00:50:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:37.006 00:50:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:37.264 [2024-07-25 00:50:59.683451] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:37.264 [2024-07-25 00:50:59.685960] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:37.264 [2024-07-25 00:50:59.686138] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:24:37.264 [2024-07-25 00:50:59.686206] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:24:37.264 [2024-07-25 00:50:59.686355] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:24:37.264 [2024-07-25 00:50:59.686489] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:24:37.264 [2024-07-25 00:50:59.686637] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found 
on bdev malloc3 00:24:37.264 [2024-07-25 00:50:59.686798] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:24:37.264 [2024-07-25 00:50:59.686856] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:37.264 [2024-07-25 00:50:59.686950] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:24:37.264 request: 00:24:37.264 { 00:24:37.264 "name": "raid_bdev1", 00:24:37.264 "raid_level": "raid0", 00:24:37.264 "base_bdevs": [ 00:24:37.264 "malloc1", 00:24:37.264 "malloc2", 00:24:37.264 "malloc3", 00:24:37.264 "malloc4" 00:24:37.264 ], 00:24:37.264 "strip_size_kb": 64, 00:24:37.264 "superblock": false, 00:24:37.264 "method": "bdev_raid_create", 00:24:37.264 "req_id": 1 00:24:37.264 } 00:24:37.264 Got JSON-RPC error response 00:24:37.264 response: 00:24:37.264 { 00:24:37.264 "code": -17, 00:24:37.264 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:37.264 } 00:24:37.264 00:50:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:24:37.264 00:50:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:37.264 00:50:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:37.264 00:50:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:37.264 00:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:37.264 00:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:24:37.522 00:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:24:37.522 00:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:24:37.522 00:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:37.522 [2024-07-25 00:51:00.123622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:37.522 [2024-07-25 00:51:00.123912] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:37.522 [2024-07-25 00:51:00.123981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:37.522 [2024-07-25 00:51:00.124104] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:37.522 [2024-07-25 00:51:00.126932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:37.522 [2024-07-25 00:51:00.127092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:37.522 [2024-07-25 00:51:00.127312] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:37.522 [2024-07-25 00:51:00.127460] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:37.522 pt1 00:24:37.522 00:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:24:37.522 00:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:37.522 00:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:37.522 00:51:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:37.522 00:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:37.522 00:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:37.522 00:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:37.522 00:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:37.522 00:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:37.522 00:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:37.522 00:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:37.522 00:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:37.781 00:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:37.781 "name": "raid_bdev1", 00:24:37.781 "uuid": "5b26e049-ba62-4b15-ad32-4d1541d81551", 00:24:37.781 "strip_size_kb": 64, 00:24:37.781 "state": "configuring", 00:24:37.781 "raid_level": "raid0", 00:24:37.781 "superblock": true, 00:24:37.781 "num_base_bdevs": 4, 00:24:37.781 "num_base_bdevs_discovered": 1, 00:24:37.781 "num_base_bdevs_operational": 4, 00:24:37.781 "base_bdevs_list": [ 00:24:37.781 { 00:24:37.781 "name": "pt1", 00:24:37.781 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:37.781 "is_configured": true, 00:24:37.781 "data_offset": 2048, 00:24:37.781 "data_size": 63488 00:24:37.781 }, 00:24:37.781 { 00:24:37.781 "name": null, 00:24:37.781 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:37.781 "is_configured": false, 00:24:37.781 "data_offset": 2048, 00:24:37.781 "data_size": 63488 00:24:37.781 }, 00:24:37.781 { 00:24:37.781 "name": null, 00:24:37.781 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:37.781 "is_configured": false, 00:24:37.781 "data_offset": 2048, 00:24:37.781 "data_size": 63488 00:24:37.781 }, 00:24:37.781 { 00:24:37.781 "name": null, 00:24:37.781 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:37.781 "is_configured": false, 00:24:37.781 "data_offset": 2048, 00:24:37.781 "data_size": 63488 00:24:37.781 } 00:24:37.781 ] 00:24:37.781 }' 00:24:37.781 00:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:37.781 00:51:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.348 00:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:24:38.348 00:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:38.348 [2024-07-25 00:51:00.968531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:38.348 [2024-07-25 00:51:00.968733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:38.348 [2024-07-25 00:51:00.968810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:24:38.348 [2024-07-25 00:51:00.968907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:38.348 [2024-07-25 00:51:00.969499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:24:38.348 [2024-07-25 00:51:00.969631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:38.348 [2024-07-25 00:51:00.969807] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:38.348 [2024-07-25 00:51:00.969895] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:38.348 pt2 00:24:38.348 00:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:38.606 [2024-07-25 00:51:01.140577] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:24:38.606 00:51:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:24:38.607 00:51:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:38.607 00:51:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:38.607 00:51:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:38.607 00:51:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:38.607 00:51:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:38.607 00:51:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:38.607 00:51:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:38.607 00:51:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:38.607 00:51:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:38.607 00:51:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.607 00:51:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:38.865 00:51:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:38.865 "name": "raid_bdev1", 00:24:38.865 "uuid": "5b26e049-ba62-4b15-ad32-4d1541d81551", 00:24:38.865 "strip_size_kb": 64, 00:24:38.865 "state": "configuring", 00:24:38.865 "raid_level": "raid0", 00:24:38.865 "superblock": true, 00:24:38.865 "num_base_bdevs": 4, 00:24:38.865 "num_base_bdevs_discovered": 1, 00:24:38.865 "num_base_bdevs_operational": 4, 00:24:38.865 "base_bdevs_list": [ 00:24:38.865 { 00:24:38.865 "name": "pt1", 00:24:38.865 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:38.865 "is_configured": true, 00:24:38.865 "data_offset": 2048, 00:24:38.865 "data_size": 63488 00:24:38.865 }, 00:24:38.865 { 00:24:38.865 "name": null, 00:24:38.865 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:38.865 "is_configured": false, 00:24:38.865 "data_offset": 2048, 00:24:38.865 "data_size": 63488 00:24:38.865 }, 00:24:38.865 { 00:24:38.865 "name": null, 00:24:38.865 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:38.865 "is_configured": false, 00:24:38.865 "data_offset": 2048, 00:24:38.865 "data_size": 63488 00:24:38.865 }, 00:24:38.865 { 00:24:38.865 "name": null, 00:24:38.865 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:38.865 "is_configured": false, 00:24:38.865 "data_offset": 2048, 00:24:38.865 "data_size": 63488 00:24:38.865 } 00:24:38.865 ] 00:24:38.865 }' 00:24:38.865 00:51:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:38.865 00:51:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.433 00:51:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:24:39.433 00:51:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:39.433 00:51:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:39.692 [2024-07-25 00:51:02.088755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:39.692 [2024-07-25 00:51:02.088980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:39.692 [2024-07-25 00:51:02.089055] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:24:39.692 [2024-07-25 00:51:02.089183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:39.692 [2024-07-25 00:51:02.089773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:39.692 [2024-07-25 00:51:02.089930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:39.692 [2024-07-25 00:51:02.090126] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:39.692 [2024-07-25 00:51:02.090243] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:39.692 pt2 00:24:39.692 00:51:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:24:39.692 00:51:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:39.692 00:51:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:39.951 [2024-07-25 00:51:02.344804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:39.951 [2024-07-25 00:51:02.344976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:39.951 [2024-07-25 00:51:02.345034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:24:39.951 [2024-07-25 00:51:02.345214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:39.951 [2024-07-25 00:51:02.345718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:39.951 [2024-07-25 00:51:02.345860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:39.951 [2024-07-25 00:51:02.346029] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:24:39.951 [2024-07-25 00:51:02.346075] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:39.951 pt3 00:24:39.951 00:51:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:24:39.951 00:51:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:39.951 00:51:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:39.951 [2024-07-25 00:51:02.516800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on malloc4 00:24:39.951 [2024-07-25 00:51:02.516969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:39.951 [2024-07-25 00:51:02.517027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:24:39.951 [2024-07-25 00:51:02.517197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:39.951 [2024-07-25 00:51:02.517669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:39.951 [2024-07-25 00:51:02.517819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:39.951 [2024-07-25 00:51:02.517987] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:24:39.951 [2024-07-25 00:51:02.518042] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:39.951 [2024-07-25 00:51:02.518513] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:24:39.951 [2024-07-25 00:51:02.518598] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:39.951 [2024-07-25 00:51:02.518733] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:24:39.951 [2024-07-25 00:51:02.519149] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:24:39.951 [2024-07-25 00:51:02.519266] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:24:39.951 [2024-07-25 00:51:02.519468] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:39.951 pt4 00:24:39.951 00:51:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:24:39.951 00:51:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:39.951 00:51:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:24:39.951 00:51:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:39.951 00:51:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:39.951 00:51:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:39.951 00:51:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:39.951 00:51:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:39.951 00:51:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:39.951 00:51:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:39.951 00:51:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:39.951 00:51:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:39.951 00:51:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.951 00:51:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.210 00:51:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:40.210 "name": "raid_bdev1", 00:24:40.210 "uuid": "5b26e049-ba62-4b15-ad32-4d1541d81551", 00:24:40.210 "strip_size_kb": 64, 00:24:40.210 "state": "online", 00:24:40.211 
"raid_level": "raid0", 00:24:40.211 "superblock": true, 00:24:40.211 "num_base_bdevs": 4, 00:24:40.211 "num_base_bdevs_discovered": 4, 00:24:40.211 "num_base_bdevs_operational": 4, 00:24:40.211 "base_bdevs_list": [ 00:24:40.211 { 00:24:40.211 "name": "pt1", 00:24:40.211 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:40.211 "is_configured": true, 00:24:40.211 "data_offset": 2048, 00:24:40.211 "data_size": 63488 00:24:40.211 }, 00:24:40.211 { 00:24:40.211 "name": "pt2", 00:24:40.211 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:40.211 "is_configured": true, 00:24:40.211 "data_offset": 2048, 00:24:40.211 "data_size": 63488 00:24:40.211 }, 00:24:40.211 { 00:24:40.211 "name": "pt3", 00:24:40.211 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:40.211 "is_configured": true, 00:24:40.211 "data_offset": 2048, 00:24:40.211 "data_size": 63488 00:24:40.211 }, 00:24:40.211 { 00:24:40.211 "name": "pt4", 00:24:40.211 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:40.211 "is_configured": true, 00:24:40.211 "data_offset": 2048, 00:24:40.211 "data_size": 63488 00:24:40.211 } 00:24:40.211 ] 00:24:40.211 }' 00:24:40.211 00:51:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:40.211 00:51:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:40.780 00:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:24:40.780 00:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:24:40.780 00:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:40.780 00:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:40.780 00:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:40.780 00:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:40.780 00:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:40.780 00:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:41.038 [2024-07-25 00:51:03.529310] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:41.038 00:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:41.038 "name": "raid_bdev1", 00:24:41.038 "aliases": [ 00:24:41.038 "5b26e049-ba62-4b15-ad32-4d1541d81551" 00:24:41.038 ], 00:24:41.038 "product_name": "Raid Volume", 00:24:41.038 "block_size": 512, 00:24:41.038 "num_blocks": 253952, 00:24:41.038 "uuid": "5b26e049-ba62-4b15-ad32-4d1541d81551", 00:24:41.038 "assigned_rate_limits": { 00:24:41.038 "rw_ios_per_sec": 0, 00:24:41.038 "rw_mbytes_per_sec": 0, 00:24:41.038 "r_mbytes_per_sec": 0, 00:24:41.038 "w_mbytes_per_sec": 0 00:24:41.038 }, 00:24:41.038 "claimed": false, 00:24:41.038 "zoned": false, 00:24:41.038 "supported_io_types": { 00:24:41.038 "read": true, 00:24:41.038 "write": true, 00:24:41.038 "unmap": true, 00:24:41.038 "flush": true, 00:24:41.038 "reset": true, 00:24:41.038 "nvme_admin": false, 00:24:41.038 "nvme_io": false, 00:24:41.038 "nvme_io_md": false, 00:24:41.038 "write_zeroes": true, 00:24:41.038 "zcopy": false, 00:24:41.038 "get_zone_info": false, 00:24:41.038 "zone_management": false, 00:24:41.038 "zone_append": false, 00:24:41.038 "compare": false, 00:24:41.038 "compare_and_write": false, 
00:24:41.038 "abort": false, 00:24:41.038 "seek_hole": false, 00:24:41.038 "seek_data": false, 00:24:41.038 "copy": false, 00:24:41.038 "nvme_iov_md": false 00:24:41.038 }, 00:24:41.038 "memory_domains": [ 00:24:41.038 { 00:24:41.038 "dma_device_id": "system", 00:24:41.038 "dma_device_type": 1 00:24:41.038 }, 00:24:41.038 { 00:24:41.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:41.039 "dma_device_type": 2 00:24:41.039 }, 00:24:41.039 { 00:24:41.039 "dma_device_id": "system", 00:24:41.039 "dma_device_type": 1 00:24:41.039 }, 00:24:41.039 { 00:24:41.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:41.039 "dma_device_type": 2 00:24:41.039 }, 00:24:41.039 { 00:24:41.039 "dma_device_id": "system", 00:24:41.039 "dma_device_type": 1 00:24:41.039 }, 00:24:41.039 { 00:24:41.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:41.039 "dma_device_type": 2 00:24:41.039 }, 00:24:41.039 { 00:24:41.039 "dma_device_id": "system", 00:24:41.039 "dma_device_type": 1 00:24:41.039 }, 00:24:41.039 { 00:24:41.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:41.039 "dma_device_type": 2 00:24:41.039 } 00:24:41.039 ], 00:24:41.039 "driver_specific": { 00:24:41.039 "raid": { 00:24:41.039 "uuid": "5b26e049-ba62-4b15-ad32-4d1541d81551", 00:24:41.039 "strip_size_kb": 64, 00:24:41.039 "state": "online", 00:24:41.039 "raid_level": "raid0", 00:24:41.039 "superblock": true, 00:24:41.039 "num_base_bdevs": 4, 00:24:41.039 "num_base_bdevs_discovered": 4, 00:24:41.039 "num_base_bdevs_operational": 4, 00:24:41.039 "base_bdevs_list": [ 00:24:41.039 { 00:24:41.039 "name": "pt1", 00:24:41.039 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:41.039 "is_configured": true, 00:24:41.039 "data_offset": 2048, 00:24:41.039 "data_size": 63488 00:24:41.039 }, 00:24:41.039 { 00:24:41.039 "name": "pt2", 00:24:41.039 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:41.039 "is_configured": true, 00:24:41.039 "data_offset": 2048, 00:24:41.039 "data_size": 63488 00:24:41.039 }, 00:24:41.039 { 00:24:41.039 "name": "pt3", 00:24:41.039 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:41.039 "is_configured": true, 00:24:41.039 "data_offset": 2048, 00:24:41.039 "data_size": 63488 00:24:41.039 }, 00:24:41.039 { 00:24:41.039 "name": "pt4", 00:24:41.039 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:41.039 "is_configured": true, 00:24:41.039 "data_offset": 2048, 00:24:41.039 "data_size": 63488 00:24:41.039 } 00:24:41.039 ] 00:24:41.039 } 00:24:41.039 } 00:24:41.039 }' 00:24:41.039 00:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:41.039 00:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:24:41.039 pt2 00:24:41.039 pt3 00:24:41.039 pt4' 00:24:41.039 00:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:41.039 00:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:41.039 00:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:41.297 00:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:41.297 "name": "pt1", 00:24:41.297 "aliases": [ 00:24:41.297 "00000000-0000-0000-0000-000000000001" 00:24:41.297 ], 00:24:41.297 "product_name": "passthru", 00:24:41.297 "block_size": 512, 00:24:41.297 "num_blocks": 65536, 00:24:41.297 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:24:41.297 "assigned_rate_limits": { 00:24:41.297 "rw_ios_per_sec": 0, 00:24:41.297 "rw_mbytes_per_sec": 0, 00:24:41.297 "r_mbytes_per_sec": 0, 00:24:41.297 "w_mbytes_per_sec": 0 00:24:41.297 }, 00:24:41.297 "claimed": true, 00:24:41.297 "claim_type": "exclusive_write", 00:24:41.297 "zoned": false, 00:24:41.297 "supported_io_types": { 00:24:41.297 "read": true, 00:24:41.297 "write": true, 00:24:41.297 "unmap": true, 00:24:41.297 "flush": true, 00:24:41.297 "reset": true, 00:24:41.297 "nvme_admin": false, 00:24:41.297 "nvme_io": false, 00:24:41.297 "nvme_io_md": false, 00:24:41.297 "write_zeroes": true, 00:24:41.297 "zcopy": true, 00:24:41.298 "get_zone_info": false, 00:24:41.298 "zone_management": false, 00:24:41.298 "zone_append": false, 00:24:41.298 "compare": false, 00:24:41.298 "compare_and_write": false, 00:24:41.298 "abort": true, 00:24:41.298 "seek_hole": false, 00:24:41.298 "seek_data": false, 00:24:41.298 "copy": true, 00:24:41.298 "nvme_iov_md": false 00:24:41.298 }, 00:24:41.298 "memory_domains": [ 00:24:41.298 { 00:24:41.298 "dma_device_id": "system", 00:24:41.298 "dma_device_type": 1 00:24:41.298 }, 00:24:41.298 { 00:24:41.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:41.298 "dma_device_type": 2 00:24:41.298 } 00:24:41.298 ], 00:24:41.298 "driver_specific": { 00:24:41.298 "passthru": { 00:24:41.298 "name": "pt1", 00:24:41.298 "base_bdev_name": "malloc1" 00:24:41.298 } 00:24:41.298 } 00:24:41.298 }' 00:24:41.298 00:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:41.298 00:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:41.298 00:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:41.298 00:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:41.298 00:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:41.298 00:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:41.298 00:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:41.556 00:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:41.556 00:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:41.556 00:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:41.556 00:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:41.556 00:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:41.556 00:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:41.556 00:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:41.556 00:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:41.815 00:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:41.816 "name": "pt2", 00:24:41.816 "aliases": [ 00:24:41.816 "00000000-0000-0000-0000-000000000002" 00:24:41.816 ], 00:24:41.816 "product_name": "passthru", 00:24:41.816 "block_size": 512, 00:24:41.816 "num_blocks": 65536, 00:24:41.816 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:41.816 "assigned_rate_limits": { 00:24:41.816 "rw_ios_per_sec": 0, 00:24:41.816 "rw_mbytes_per_sec": 0, 
00:24:41.816 "r_mbytes_per_sec": 0, 00:24:41.816 "w_mbytes_per_sec": 0 00:24:41.816 }, 00:24:41.816 "claimed": true, 00:24:41.816 "claim_type": "exclusive_write", 00:24:41.816 "zoned": false, 00:24:41.816 "supported_io_types": { 00:24:41.816 "read": true, 00:24:41.816 "write": true, 00:24:41.816 "unmap": true, 00:24:41.816 "flush": true, 00:24:41.816 "reset": true, 00:24:41.816 "nvme_admin": false, 00:24:41.816 "nvme_io": false, 00:24:41.816 "nvme_io_md": false, 00:24:41.816 "write_zeroes": true, 00:24:41.816 "zcopy": true, 00:24:41.816 "get_zone_info": false, 00:24:41.816 "zone_management": false, 00:24:41.816 "zone_append": false, 00:24:41.816 "compare": false, 00:24:41.816 "compare_and_write": false, 00:24:41.816 "abort": true, 00:24:41.816 "seek_hole": false, 00:24:41.816 "seek_data": false, 00:24:41.816 "copy": true, 00:24:41.816 "nvme_iov_md": false 00:24:41.816 }, 00:24:41.816 "memory_domains": [ 00:24:41.816 { 00:24:41.816 "dma_device_id": "system", 00:24:41.816 "dma_device_type": 1 00:24:41.816 }, 00:24:41.816 { 00:24:41.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:41.816 "dma_device_type": 2 00:24:41.816 } 00:24:41.816 ], 00:24:41.816 "driver_specific": { 00:24:41.816 "passthru": { 00:24:41.816 "name": "pt2", 00:24:41.816 "base_bdev_name": "malloc2" 00:24:41.816 } 00:24:41.816 } 00:24:41.816 }' 00:24:41.816 00:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:41.816 00:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:41.816 00:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:41.816 00:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:42.075 00:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:42.075 00:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:42.075 00:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:42.075 00:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:42.075 00:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:42.075 00:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:42.075 00:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:42.075 00:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:42.075 00:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:42.075 00:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:42.075 00:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:24:42.334 00:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:42.334 "name": "pt3", 00:24:42.334 "aliases": [ 00:24:42.334 "00000000-0000-0000-0000-000000000003" 00:24:42.334 ], 00:24:42.334 "product_name": "passthru", 00:24:42.334 "block_size": 512, 00:24:42.334 "num_blocks": 65536, 00:24:42.334 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:42.334 "assigned_rate_limits": { 00:24:42.334 "rw_ios_per_sec": 0, 00:24:42.334 "rw_mbytes_per_sec": 0, 00:24:42.334 "r_mbytes_per_sec": 0, 00:24:42.334 "w_mbytes_per_sec": 0 00:24:42.334 }, 00:24:42.334 "claimed": true, 00:24:42.334 "claim_type": 
"exclusive_write", 00:24:42.334 "zoned": false, 00:24:42.334 "supported_io_types": { 00:24:42.334 "read": true, 00:24:42.334 "write": true, 00:24:42.334 "unmap": true, 00:24:42.334 "flush": true, 00:24:42.334 "reset": true, 00:24:42.334 "nvme_admin": false, 00:24:42.334 "nvme_io": false, 00:24:42.334 "nvme_io_md": false, 00:24:42.334 "write_zeroes": true, 00:24:42.334 "zcopy": true, 00:24:42.334 "get_zone_info": false, 00:24:42.334 "zone_management": false, 00:24:42.334 "zone_append": false, 00:24:42.334 "compare": false, 00:24:42.334 "compare_and_write": false, 00:24:42.334 "abort": true, 00:24:42.334 "seek_hole": false, 00:24:42.334 "seek_data": false, 00:24:42.334 "copy": true, 00:24:42.334 "nvme_iov_md": false 00:24:42.334 }, 00:24:42.334 "memory_domains": [ 00:24:42.334 { 00:24:42.334 "dma_device_id": "system", 00:24:42.334 "dma_device_type": 1 00:24:42.334 }, 00:24:42.334 { 00:24:42.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:42.334 "dma_device_type": 2 00:24:42.334 } 00:24:42.334 ], 00:24:42.334 "driver_specific": { 00:24:42.334 "passthru": { 00:24:42.334 "name": "pt3", 00:24:42.334 "base_bdev_name": "malloc3" 00:24:42.334 } 00:24:42.334 } 00:24:42.334 }' 00:24:42.334 00:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:42.334 00:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:42.334 00:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:42.334 00:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:42.334 00:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:42.594 00:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:42.594 00:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:42.594 00:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:42.594 00:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:42.594 00:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:42.594 00:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:42.594 00:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:42.594 00:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:42.594 00:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:24:42.594 00:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:42.853 00:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:42.853 "name": "pt4", 00:24:42.853 "aliases": [ 00:24:42.853 "00000000-0000-0000-0000-000000000004" 00:24:42.853 ], 00:24:42.853 "product_name": "passthru", 00:24:42.853 "block_size": 512, 00:24:42.853 "num_blocks": 65536, 00:24:42.853 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:42.853 "assigned_rate_limits": { 00:24:42.853 "rw_ios_per_sec": 0, 00:24:42.853 "rw_mbytes_per_sec": 0, 00:24:42.853 "r_mbytes_per_sec": 0, 00:24:42.853 "w_mbytes_per_sec": 0 00:24:42.853 }, 00:24:42.853 "claimed": true, 00:24:42.853 "claim_type": "exclusive_write", 00:24:42.853 "zoned": false, 00:24:42.853 "supported_io_types": { 00:24:42.853 "read": true, 00:24:42.853 "write": true, 00:24:42.853 
"unmap": true, 00:24:42.853 "flush": true, 00:24:42.853 "reset": true, 00:24:42.853 "nvme_admin": false, 00:24:42.853 "nvme_io": false, 00:24:42.853 "nvme_io_md": false, 00:24:42.853 "write_zeroes": true, 00:24:42.853 "zcopy": true, 00:24:42.853 "get_zone_info": false, 00:24:42.853 "zone_management": false, 00:24:42.853 "zone_append": false, 00:24:42.853 "compare": false, 00:24:42.853 "compare_and_write": false, 00:24:42.853 "abort": true, 00:24:42.853 "seek_hole": false, 00:24:42.853 "seek_data": false, 00:24:42.853 "copy": true, 00:24:42.853 "nvme_iov_md": false 00:24:42.853 }, 00:24:42.853 "memory_domains": [ 00:24:42.853 { 00:24:42.853 "dma_device_id": "system", 00:24:42.853 "dma_device_type": 1 00:24:42.853 }, 00:24:42.853 { 00:24:42.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:42.853 "dma_device_type": 2 00:24:42.853 } 00:24:42.853 ], 00:24:42.853 "driver_specific": { 00:24:42.853 "passthru": { 00:24:42.853 "name": "pt4", 00:24:42.853 "base_bdev_name": "malloc4" 00:24:42.853 } 00:24:42.853 } 00:24:42.853 }' 00:24:42.853 00:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:42.853 00:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:42.853 00:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:42.853 00:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:42.853 00:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:43.112 00:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:43.112 00:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:43.112 00:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:43.112 00:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:43.112 00:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:43.112 00:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:43.112 00:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:43.112 00:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:24:43.112 00:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:43.372 [2024-07-25 00:51:05.933699] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:43.372 00:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 5b26e049-ba62-4b15-ad32-4d1541d81551 '!=' 5b26e049-ba62-4b15-ad32-4d1541d81551 ']' 00:24:43.372 00:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:24:43.372 00:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:43.372 00:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:24:43.372 00:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 137102 00:24:43.372 00:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 137102 ']' 00:24:43.372 00:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 137102 00:24:43.372 00:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:24:43.372 00:51:05 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:43.372 00:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 137102 00:24:43.372 00:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:43.372 00:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:43.372 00:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 137102' 00:24:43.372 killing process with pid 137102 00:24:43.372 00:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 137102 00:24:43.372 [2024-07-25 00:51:05.983639] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:43.372 00:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 137102 00:24:43.372 [2024-07-25 00:51:05.983823] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:43.372 [2024-07-25 00:51:05.983888] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:43.372 [2024-07-25 00:51:05.983897] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:24:43.940 [2024-07-25 00:51:06.383700] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:45.320 ************************************ 00:24:45.320 END TEST raid_superblock_test 00:24:45.320 ************************************ 00:24:45.320 00:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:24:45.320 00:24:45.320 real 0m15.777s 00:24:45.320 user 0m27.188s 00:24:45.320 sys 0m2.444s 00:24:45.320 00:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:45.320 00:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:45.320 00:51:07 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:24:45.320 00:51:07 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:24:45.320 00:51:07 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:45.320 00:51:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:45.320 ************************************ 00:24:45.320 START TEST raid_read_error_test 00:24:45.320 ************************************ 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 read 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 
00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.9cRm9jWPkp 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=137645 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 137645 /var/tmp/spdk-raid.sock 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 137645 ']' 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:45.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:45.320 00:51:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:45.320 [2024-07-25 00:51:07.858715] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:24:45.320 [2024-07-25 00:51:07.859683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137645 ] 00:24:45.580 [2024-07-25 00:51:08.049094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.580 [2024-07-25 00:51:08.223237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.839 [2024-07-25 00:51:08.412205] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:46.408 00:51:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:46.408 00:51:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:24:46.408 00:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:46.408 00:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:46.408 BaseBdev1_malloc 00:24:46.408 00:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:24:46.668 true 00:24:46.668 00:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:46.668 [2024-07-25 00:51:09.279018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:46.668 [2024-07-25 00:51:09.279110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:46.668 [2024-07-25 00:51:09.279144] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:24:46.668 [2024-07-25 00:51:09.279162] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:46.668 [2024-07-25 00:51:09.281337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:46.668 [2024-07-25 00:51:09.281380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:46.668 BaseBdev1 00:24:46.668 00:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:46.668 00:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:46.928 BaseBdev2_malloc 00:24:46.928 00:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:24:47.187 true 00:24:47.188 00:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:47.188 [2024-07-25 00:51:09.838738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:47.188 [2024-07-25 00:51:09.838833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:47.188 [2024-07-25 00:51:09.838870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:47.188 [2024-07-25 00:51:09.838888] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:47.447 [2024-07-25 00:51:09.840971] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:47.447 [2024-07-25 00:51:09.841016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:47.447 BaseBdev2 00:24:47.447 00:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:47.447 00:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:47.447 BaseBdev3_malloc 00:24:47.447 00:51:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:24:48.016 true 00:24:48.016 00:51:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:24:48.016 [2024-07-25 00:51:10.558474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:24:48.016 [2024-07-25 00:51:10.558554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:48.016 [2024-07-25 00:51:10.558586] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:24:48.016 [2024-07-25 00:51:10.558615] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:48.016 [2024-07-25 00:51:10.560718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:48.016 [2024-07-25 00:51:10.560769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:48.016 BaseBdev3 00:24:48.016 00:51:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:48.016 00:51:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:48.276 BaseBdev4_malloc 00:24:48.276 00:51:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:24:48.534 true 00:24:48.534 00:51:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:24:48.534 [2024-07-25 00:51:11.104744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:24:48.534 [2024-07-25 00:51:11.104824] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:48.534 [2024-07-25 00:51:11.104873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:24:48.534 [2024-07-25 00:51:11.104897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:48.534 [2024-07-25 00:51:11.106980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:48.534 [2024-07-25 00:51:11.107032] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:48.534 BaseBdev4 00:24:48.534 00:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:24:48.793 [2024-07-25 00:51:11.328875] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:48.793 [2024-07-25 00:51:11.330932] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:48.793 [2024-07-25 00:51:11.331008] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:48.793 [2024-07-25 00:51:11.331057] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:48.793 [2024-07-25 00:51:11.331321] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a280 00:24:48.793 [2024-07-25 00:51:11.331349] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:48.793 [2024-07-25 00:51:11.331474] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:48.793 [2024-07-25 00:51:11.331815] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a280 00:24:48.793 [2024-07-25 00:51:11.331833] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a280 00:24:48.793 [2024-07-25 00:51:11.331989] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:48.793 00:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:24:48.793 00:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:48.793 00:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:48.793 00:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:48.793 00:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:48.793 00:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:48.793 00:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:48.793 00:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:48.793 00:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:48.793 00:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:48.793 00:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.793 00:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:49.052 00:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:49.052 "name": "raid_bdev1", 00:24:49.052 "uuid": "59611008-a34f-4ec8-be2e-f51c4c8531f9", 00:24:49.052 "strip_size_kb": 64, 00:24:49.052 "state": "online", 00:24:49.052 "raid_level": "raid0", 00:24:49.052 "superblock": true, 00:24:49.052 "num_base_bdevs": 4, 00:24:49.052 "num_base_bdevs_discovered": 4, 00:24:49.052 "num_base_bdevs_operational": 4, 00:24:49.052 "base_bdevs_list": [ 00:24:49.052 { 00:24:49.052 "name": "BaseBdev1", 00:24:49.052 "uuid": "332c3ed4-0ad9-58f1-ba60-838f18ffbeaf", 00:24:49.052 "is_configured": true, 00:24:49.052 "data_offset": 2048, 00:24:49.052 "data_size": 63488 00:24:49.052 }, 00:24:49.052 { 00:24:49.052 "name": "BaseBdev2", 
00:24:49.052 "uuid": "3651b4f2-9c17-5afa-bebf-0c48624d2aa3", 00:24:49.052 "is_configured": true, 00:24:49.052 "data_offset": 2048, 00:24:49.052 "data_size": 63488 00:24:49.052 }, 00:24:49.052 { 00:24:49.052 "name": "BaseBdev3", 00:24:49.052 "uuid": "f96a9346-5c07-579d-a9ec-dd798d9a5986", 00:24:49.052 "is_configured": true, 00:24:49.052 "data_offset": 2048, 00:24:49.052 "data_size": 63488 00:24:49.052 }, 00:24:49.052 { 00:24:49.052 "name": "BaseBdev4", 00:24:49.052 "uuid": "c6757363-cd76-5ab4-be51-c99c01c5fe6b", 00:24:49.052 "is_configured": true, 00:24:49.052 "data_offset": 2048, 00:24:49.052 "data_size": 63488 00:24:49.052 } 00:24:49.052 ] 00:24:49.052 }' 00:24:49.052 00:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:49.052 00:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.622 00:51:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:24:49.622 00:51:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:49.622 [2024-07-25 00:51:12.178226] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:50.560 00:51:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:24:50.833 00:51:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:24:50.833 00:51:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:24:50.833 00:51:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:24:50.833 00:51:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:24:50.833 00:51:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:50.833 00:51:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:50.833 00:51:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:50.833 00:51:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:50.833 00:51:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:50.833 00:51:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:50.833 00:51:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:50.833 00:51:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:50.833 00:51:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:50.833 00:51:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:50.833 00:51:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.097 00:51:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:51.097 "name": "raid_bdev1", 00:24:51.098 "uuid": "59611008-a34f-4ec8-be2e-f51c4c8531f9", 00:24:51.098 "strip_size_kb": 64, 00:24:51.098 "state": "online", 00:24:51.098 "raid_level": "raid0", 00:24:51.098 "superblock": true, 
00:24:51.098 "num_base_bdevs": 4, 00:24:51.098 "num_base_bdevs_discovered": 4, 00:24:51.098 "num_base_bdevs_operational": 4, 00:24:51.098 "base_bdevs_list": [ 00:24:51.098 { 00:24:51.098 "name": "BaseBdev1", 00:24:51.098 "uuid": "332c3ed4-0ad9-58f1-ba60-838f18ffbeaf", 00:24:51.098 "is_configured": true, 00:24:51.098 "data_offset": 2048, 00:24:51.098 "data_size": 63488 00:24:51.098 }, 00:24:51.098 { 00:24:51.098 "name": "BaseBdev2", 00:24:51.098 "uuid": "3651b4f2-9c17-5afa-bebf-0c48624d2aa3", 00:24:51.098 "is_configured": true, 00:24:51.098 "data_offset": 2048, 00:24:51.098 "data_size": 63488 00:24:51.098 }, 00:24:51.098 { 00:24:51.098 "name": "BaseBdev3", 00:24:51.098 "uuid": "f96a9346-5c07-579d-a9ec-dd798d9a5986", 00:24:51.098 "is_configured": true, 00:24:51.098 "data_offset": 2048, 00:24:51.098 "data_size": 63488 00:24:51.098 }, 00:24:51.098 { 00:24:51.098 "name": "BaseBdev4", 00:24:51.098 "uuid": "c6757363-cd76-5ab4-be51-c99c01c5fe6b", 00:24:51.098 "is_configured": true, 00:24:51.098 "data_offset": 2048, 00:24:51.098 "data_size": 63488 00:24:51.098 } 00:24:51.098 ] 00:24:51.098 }' 00:24:51.098 00:51:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:51.098 00:51:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:51.699 00:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:51.975 [2024-07-25 00:51:14.422296] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:51.975 [2024-07-25 00:51:14.422358] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:51.975 [2024-07-25 00:51:14.424653] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:51.975 [2024-07-25 00:51:14.424704] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:51.975 [2024-07-25 00:51:14.424741] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:51.975 [2024-07-25 00:51:14.424749] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state offline 00:24:51.975 0 00:24:51.975 00:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 137645 00:24:51.975 00:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 137645 ']' 00:24:51.975 00:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 137645 00:24:51.975 00:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:24:51.975 00:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:51.975 00:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 137645 00:24:51.975 00:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:51.975 00:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:51.975 00:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 137645' 00:24:51.975 killing process with pid 137645 00:24:51.975 00:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 137645 00:24:51.975 [2024-07-25 00:51:14.467752] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:24:51.975 00:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 137645 00:24:52.234 [2024-07-25 00:51:14.756585] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:53.612 00:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.9cRm9jWPkp 00:24:53.612 00:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:24:53.612 00:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:24:53.612 00:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:24:53.612 00:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:24:53.612 00:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:53.612 00:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:24:53.612 00:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:24:53.612 00:24:53.612 real 0m8.246s 00:24:53.612 user 0m12.255s 00:24:53.612 sys 0m1.091s 00:24:53.612 00:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:53.612 ************************************ 00:24:53.612 END TEST raid_read_error_test 00:24:53.612 ************************************ 00:24:53.612 00:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.612 00:51:16 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:24:53.612 00:51:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:24:53.612 00:51:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:53.612 00:51:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:53.612 ************************************ 00:24:53.612 START TEST raid_write_error_test 00:24:53.612 ************************************ 00:24:53.612 00:51:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 write 00:24:53.612 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:24:53.612 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:24:53.612 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:24:53.612 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:24:53.612 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:53.612 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:24:53.612 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:53.612 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:53.612 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:24:53.612 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:53.612 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:53.612 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:24:53.612 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:53.612 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 
00:24:53.613 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:24:53.613 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:53.613 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:53.613 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:53.613 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:24:53.613 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:24:53.613 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:24:53.613 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:24:53.613 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:24:53.613 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:24:53.613 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:24:53.613 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:24:53.613 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:24:53.613 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:24:53.613 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.XtCTD1nN3f 00:24:53.613 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=137854 00:24:53.613 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 137854 /var/tmp/spdk-raid.sock 00:24:53.613 00:51:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 137854 ']' 00:24:53.613 00:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:53.613 00:51:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:53.613 00:51:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:53.613 00:51:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:53.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:53.613 00:51:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:53.613 00:51:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.613 [2024-07-25 00:51:16.179413] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:24:53.613 [2024-07-25 00:51:16.179644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137854 ] 00:24:53.872 [2024-07-25 00:51:16.364665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.132 [2024-07-25 00:51:16.551040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.132 [2024-07-25 00:51:16.733023] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:54.702 00:51:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:54.702 00:51:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:24:54.702 00:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:54.702 00:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:54.962 BaseBdev1_malloc 00:24:54.962 00:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:24:54.962 true 00:24:54.962 00:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:55.222 [2024-07-25 00:51:17.785491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:55.222 [2024-07-25 00:51:17.785573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:55.222 [2024-07-25 00:51:17.785612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:24:55.222 [2024-07-25 00:51:17.785633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:55.222 [2024-07-25 00:51:17.787796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:55.222 [2024-07-25 00:51:17.787839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:55.222 BaseBdev1 00:24:55.222 00:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:55.222 00:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:55.482 BaseBdev2_malloc 00:24:55.482 00:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:24:55.740 true 00:24:55.740 00:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:55.740 [2024-07-25 00:51:18.360252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:55.740 [2024-07-25 00:51:18.360344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:55.740 [2024-07-25 00:51:18.360379] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:55.740 [2024-07-25 
00:51:18.360398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:55.740 [2024-07-25 00:51:18.362510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:55.740 [2024-07-25 00:51:18.362555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:55.740 BaseBdev2 00:24:55.740 00:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:55.740 00:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:55.999 BaseBdev3_malloc 00:24:55.999 00:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:24:56.258 true 00:24:56.258 00:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:24:56.517 [2024-07-25 00:51:18.971506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:24:56.517 [2024-07-25 00:51:18.971588] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:56.517 [2024-07-25 00:51:18.971622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:24:56.517 [2024-07-25 00:51:18.971647] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:56.517 [2024-07-25 00:51:18.973810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:56.517 [2024-07-25 00:51:18.973859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:56.517 BaseBdev3 00:24:56.517 00:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:56.517 00:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:56.777 BaseBdev4_malloc 00:24:56.777 00:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:24:56.777 true 00:24:56.777 00:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:24:57.037 [2024-07-25 00:51:19.520028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:24:57.037 [2024-07-25 00:51:19.520102] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:57.037 [2024-07-25 00:51:19.520150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:24:57.037 [2024-07-25 00:51:19.520173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:57.037 [2024-07-25 00:51:19.522285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:57.037 [2024-07-25 00:51:19.522331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:57.037 BaseBdev4 00:24:57.037 00:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:24:57.296 [2024-07-25 00:51:19.700106] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:57.296 [2024-07-25 00:51:19.702084] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:57.296 [2024-07-25 00:51:19.702162] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:57.297 [2024-07-25 00:51:19.702211] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:57.297 [2024-07-25 00:51:19.702468] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a280 00:24:57.297 [2024-07-25 00:51:19.702479] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:57.297 [2024-07-25 00:51:19.702582] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:57.297 [2024-07-25 00:51:19.702932] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a280 00:24:57.297 [2024-07-25 00:51:19.702951] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a280 00:24:57.297 [2024-07-25 00:51:19.703093] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:57.297 00:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:24:57.297 00:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:57.297 00:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:57.297 00:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:57.297 00:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:57.297 00:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:57.297 00:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:57.297 00:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:57.297 00:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:57.297 00:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:57.297 00:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.297 00:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:57.297 00:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:57.297 "name": "raid_bdev1", 00:24:57.297 "uuid": "86df5e5d-32b0-4bbc-ac49-e5cd03d3c71f", 00:24:57.297 "strip_size_kb": 64, 00:24:57.297 "state": "online", 00:24:57.297 "raid_level": "raid0", 00:24:57.297 "superblock": true, 00:24:57.297 "num_base_bdevs": 4, 00:24:57.297 "num_base_bdevs_discovered": 4, 00:24:57.297 "num_base_bdevs_operational": 4, 00:24:57.297 "base_bdevs_list": [ 00:24:57.297 { 00:24:57.297 "name": "BaseBdev1", 00:24:57.297 "uuid": "89d63878-93ee-5816-9690-22b4c1cbd4fe", 00:24:57.297 "is_configured": true, 00:24:57.297 "data_offset": 2048, 00:24:57.297 "data_size": 63488 00:24:57.297 }, 00:24:57.297 { 
00:24:57.297 "name": "BaseBdev2", 00:24:57.297 "uuid": "28f9c098-6844-5920-8053-c32d56fb58f5", 00:24:57.297 "is_configured": true, 00:24:57.297 "data_offset": 2048, 00:24:57.297 "data_size": 63488 00:24:57.297 }, 00:24:57.297 { 00:24:57.297 "name": "BaseBdev3", 00:24:57.297 "uuid": "34d2b64a-43ec-55e0-90cf-625341a45917", 00:24:57.297 "is_configured": true, 00:24:57.297 "data_offset": 2048, 00:24:57.297 "data_size": 63488 00:24:57.297 }, 00:24:57.297 { 00:24:57.297 "name": "BaseBdev4", 00:24:57.297 "uuid": "417c9663-9f0d-5614-a266-d46f18348323", 00:24:57.297 "is_configured": true, 00:24:57.297 "data_offset": 2048, 00:24:57.297 "data_size": 63488 00:24:57.297 } 00:24:57.297 ] 00:24:57.297 }' 00:24:57.297 00:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:57.297 00:51:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:57.865 00:51:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:24:57.865 00:51:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:57.865 [2024-07-25 00:51:20.517471] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:58.803 00:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:24:59.063 00:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:24:59.063 00:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:24:59.063 00:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:24:59.063 00:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:24:59.063 00:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:59.063 00:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:59.063 00:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:59.063 00:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:59.063 00:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:59.063 00:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:59.063 00:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:59.063 00:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:59.063 00:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:59.063 00:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.063 00:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.321 00:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:59.321 "name": "raid_bdev1", 00:24:59.321 "uuid": "86df5e5d-32b0-4bbc-ac49-e5cd03d3c71f", 00:24:59.321 "strip_size_kb": 64, 00:24:59.321 "state": "online", 00:24:59.321 
"raid_level": "raid0", 00:24:59.321 "superblock": true, 00:24:59.321 "num_base_bdevs": 4, 00:24:59.321 "num_base_bdevs_discovered": 4, 00:24:59.321 "num_base_bdevs_operational": 4, 00:24:59.321 "base_bdevs_list": [ 00:24:59.321 { 00:24:59.321 "name": "BaseBdev1", 00:24:59.321 "uuid": "89d63878-93ee-5816-9690-22b4c1cbd4fe", 00:24:59.321 "is_configured": true, 00:24:59.321 "data_offset": 2048, 00:24:59.321 "data_size": 63488 00:24:59.321 }, 00:24:59.321 { 00:24:59.321 "name": "BaseBdev2", 00:24:59.321 "uuid": "28f9c098-6844-5920-8053-c32d56fb58f5", 00:24:59.321 "is_configured": true, 00:24:59.321 "data_offset": 2048, 00:24:59.321 "data_size": 63488 00:24:59.321 }, 00:24:59.321 { 00:24:59.321 "name": "BaseBdev3", 00:24:59.321 "uuid": "34d2b64a-43ec-55e0-90cf-625341a45917", 00:24:59.321 "is_configured": true, 00:24:59.321 "data_offset": 2048, 00:24:59.321 "data_size": 63488 00:24:59.321 }, 00:24:59.321 { 00:24:59.321 "name": "BaseBdev4", 00:24:59.321 "uuid": "417c9663-9f0d-5614-a266-d46f18348323", 00:24:59.321 "is_configured": true, 00:24:59.321 "data_offset": 2048, 00:24:59.321 "data_size": 63488 00:24:59.321 } 00:24:59.321 ] 00:24:59.321 }' 00:24:59.321 00:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:59.321 00:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.259 00:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:00.259 [2024-07-25 00:51:22.781932] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:00.259 [2024-07-25 00:51:22.781981] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:00.259 [2024-07-25 00:51:22.784406] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:00.259 [2024-07-25 00:51:22.784451] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:00.259 [2024-07-25 00:51:22.784488] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:00.259 [2024-07-25 00:51:22.784495] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state offline 00:25:00.259 0 00:25:00.259 00:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 137854 00:25:00.259 00:51:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 137854 ']' 00:25:00.259 00:51:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 137854 00:25:00.259 00:51:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:25:00.259 00:51:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:00.259 00:51:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 137854 00:25:00.259 00:51:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:00.259 00:51:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:00.259 00:51:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 137854' 00:25:00.259 killing process with pid 137854 00:25:00.259 00:51:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 137854 00:25:00.259 [2024-07-25 00:51:22.824381] 
bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:00.259 00:51:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 137854 00:25:00.519 [2024-07-25 00:51:23.123530] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:01.897 00:51:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.XtCTD1nN3f 00:25:01.897 00:51:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:25:01.897 00:51:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:25:01.897 ************************************ 00:25:01.897 END TEST raid_write_error_test 00:25:01.897 ************************************ 00:25:01.897 00:51:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.44 00:25:01.897 00:51:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:25:01.897 00:51:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:01.897 00:51:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:25:01.897 00:51:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.44 != \0\.\0\0 ]] 00:25:01.897 00:25:01.897 real 0m8.288s 00:25:01.897 user 0m12.225s 00:25:01.897 sys 0m1.199s 00:25:01.897 00:51:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:01.897 00:51:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.897 00:51:24 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:25:01.897 00:51:24 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:25:01.897 00:51:24 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:25:01.897 00:51:24 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:01.897 00:51:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:01.897 ************************************ 00:25:01.897 START TEST raid_state_function_test 00:25:01.897 ************************************ 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 false 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= 
num_base_bdevs )) 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=138057 00:25:01.897 Process raid pid: 138057 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 138057' 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 138057 /var/tmp/spdk-raid.sock 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 138057 ']' 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:01.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.897 00:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:02.156 [2024-07-25 00:51:24.548654] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:25:02.156 [2024-07-25 00:51:24.549883] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.156 [2024-07-25 00:51:24.737345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.415 [2024-07-25 00:51:24.919656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.674 [2024-07-25 00:51:25.125909] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:02.933 00:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:02.933 00:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:25:02.933 00:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:03.192 [2024-07-25 00:51:25.636296] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:03.192 [2024-07-25 00:51:25.636369] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:03.192 [2024-07-25 00:51:25.636380] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:03.192 [2024-07-25 00:51:25.636401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:03.192 [2024-07-25 00:51:25.636408] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:03.192 [2024-07-25 00:51:25.636423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:03.192 [2024-07-25 00:51:25.636429] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:03.192 [2024-07-25 00:51:25.636449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:03.192 00:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:03.192 00:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:03.192 00:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:03.192 00:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:03.192 00:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:03.192 00:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:03.192 00:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:03.192 00:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:03.192 00:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:03.192 00:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:03.192 00:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.192 00:51:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:03.451 00:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:03.451 "name": "Existed_Raid", 00:25:03.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.451 "strip_size_kb": 64, 00:25:03.451 "state": "configuring", 00:25:03.451 "raid_level": "concat", 00:25:03.451 "superblock": false, 00:25:03.451 "num_base_bdevs": 4, 00:25:03.451 "num_base_bdevs_discovered": 0, 00:25:03.451 "num_base_bdevs_operational": 4, 00:25:03.451 "base_bdevs_list": [ 00:25:03.451 { 00:25:03.451 "name": "BaseBdev1", 00:25:03.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.451 "is_configured": false, 00:25:03.451 "data_offset": 0, 00:25:03.451 "data_size": 0 00:25:03.451 }, 00:25:03.451 { 00:25:03.451 "name": "BaseBdev2", 00:25:03.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.451 "is_configured": false, 00:25:03.451 "data_offset": 0, 00:25:03.451 "data_size": 0 00:25:03.451 }, 00:25:03.451 { 00:25:03.451 "name": "BaseBdev3", 00:25:03.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.451 "is_configured": false, 00:25:03.451 "data_offset": 0, 00:25:03.451 "data_size": 0 00:25:03.451 }, 00:25:03.451 { 00:25:03.451 "name": "BaseBdev4", 00:25:03.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.451 "is_configured": false, 00:25:03.451 "data_offset": 0, 00:25:03.451 "data_size": 0 00:25:03.451 } 00:25:03.451 ] 00:25:03.451 }' 00:25:03.451 00:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:03.451 00:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.017 00:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:04.018 [2024-07-25 00:51:26.532344] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:04.018 [2024-07-25 00:51:26.532374] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:25:04.018 00:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:04.276 [2024-07-25 00:51:26.776385] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:04.276 [2024-07-25 00:51:26.776430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:04.276 [2024-07-25 00:51:26.776438] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:04.276 [2024-07-25 00:51:26.776474] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:04.276 [2024-07-25 00:51:26.776481] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:04.276 [2024-07-25 00:51:26.776507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:04.276 [2024-07-25 00:51:26.776514] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:04.276 [2024-07-25 00:51:26.776535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:04.276 00:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:04.535 [2024-07-25 00:51:27.000031] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:04.535 BaseBdev1 00:25:04.535 00:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:25:04.535 00:51:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:25:04.535 00:51:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:04.535 00:51:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:04.535 00:51:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:04.535 00:51:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:04.535 00:51:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:04.794 00:51:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:05.052 [ 00:25:05.052 { 00:25:05.052 "name": "BaseBdev1", 00:25:05.052 "aliases": [ 00:25:05.052 "da259b00-3221-4156-8832-f121964b360d" 00:25:05.052 ], 00:25:05.052 "product_name": "Malloc disk", 00:25:05.052 "block_size": 512, 00:25:05.052 "num_blocks": 65536, 00:25:05.052 "uuid": "da259b00-3221-4156-8832-f121964b360d", 00:25:05.052 "assigned_rate_limits": { 00:25:05.052 "rw_ios_per_sec": 0, 00:25:05.052 "rw_mbytes_per_sec": 0, 00:25:05.052 "r_mbytes_per_sec": 0, 00:25:05.053 "w_mbytes_per_sec": 0 00:25:05.053 }, 00:25:05.053 "claimed": true, 00:25:05.053 "claim_type": "exclusive_write", 00:25:05.053 "zoned": false, 00:25:05.053 "supported_io_types": { 00:25:05.053 "read": true, 00:25:05.053 "write": true, 00:25:05.053 "unmap": true, 00:25:05.053 "flush": true, 00:25:05.053 "reset": true, 00:25:05.053 "nvme_admin": false, 00:25:05.053 "nvme_io": false, 00:25:05.053 "nvme_io_md": false, 00:25:05.053 "write_zeroes": true, 00:25:05.053 "zcopy": true, 00:25:05.053 "get_zone_info": false, 00:25:05.053 "zone_management": false, 00:25:05.053 "zone_append": false, 00:25:05.053 "compare": false, 00:25:05.053 "compare_and_write": false, 00:25:05.053 "abort": true, 00:25:05.053 "seek_hole": false, 00:25:05.053 "seek_data": false, 00:25:05.053 "copy": true, 00:25:05.053 "nvme_iov_md": false 00:25:05.053 }, 00:25:05.053 "memory_domains": [ 00:25:05.053 { 00:25:05.053 "dma_device_id": "system", 00:25:05.053 "dma_device_type": 1 00:25:05.053 }, 00:25:05.053 { 00:25:05.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:05.053 "dma_device_type": 2 00:25:05.053 } 00:25:05.053 ], 00:25:05.053 "driver_specific": {} 00:25:05.053 } 00:25:05.053 ] 00:25:05.053 00:51:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:05.053 00:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:05.053 00:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:05.053 00:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:05.053 00:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- 
# local raid_level=concat 00:25:05.053 00:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:05.053 00:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:05.053 00:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:05.053 00:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:05.053 00:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:05.053 00:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:05.053 00:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:05.053 00:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:05.312 00:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:05.312 "name": "Existed_Raid", 00:25:05.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.312 "strip_size_kb": 64, 00:25:05.312 "state": "configuring", 00:25:05.312 "raid_level": "concat", 00:25:05.312 "superblock": false, 00:25:05.312 "num_base_bdevs": 4, 00:25:05.312 "num_base_bdevs_discovered": 1, 00:25:05.312 "num_base_bdevs_operational": 4, 00:25:05.312 "base_bdevs_list": [ 00:25:05.312 { 00:25:05.312 "name": "BaseBdev1", 00:25:05.312 "uuid": "da259b00-3221-4156-8832-f121964b360d", 00:25:05.312 "is_configured": true, 00:25:05.312 "data_offset": 0, 00:25:05.312 "data_size": 65536 00:25:05.312 }, 00:25:05.312 { 00:25:05.312 "name": "BaseBdev2", 00:25:05.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.312 "is_configured": false, 00:25:05.312 "data_offset": 0, 00:25:05.312 "data_size": 0 00:25:05.312 }, 00:25:05.312 { 00:25:05.312 "name": "BaseBdev3", 00:25:05.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.312 "is_configured": false, 00:25:05.312 "data_offset": 0, 00:25:05.312 "data_size": 0 00:25:05.312 }, 00:25:05.312 { 00:25:05.312 "name": "BaseBdev4", 00:25:05.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.312 "is_configured": false, 00:25:05.312 "data_offset": 0, 00:25:05.312 "data_size": 0 00:25:05.312 } 00:25:05.312 ] 00:25:05.312 }' 00:25:05.312 00:51:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:05.312 00:51:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.881 00:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:05.881 [2024-07-25 00:51:28.496410] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:05.881 [2024-07-25 00:51:28.496460] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:25:05.881 00:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:06.140 [2024-07-25 00:51:28.672446] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:06.140 [2024-07-25 00:51:28.674376] bdev.c:8190:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:25:06.140 [2024-07-25 00:51:28.674426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:06.140 [2024-07-25 00:51:28.674435] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:06.140 [2024-07-25 00:51:28.674465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:06.140 [2024-07-25 00:51:28.674473] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:06.140 [2024-07-25 00:51:28.674488] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:06.140 00:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:25:06.140 00:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:06.140 00:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:06.140 00:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:06.140 00:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:06.140 00:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:06.140 00:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:06.140 00:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:06.140 00:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:06.140 00:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:06.140 00:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:06.140 00:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:06.140 00:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.140 00:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:06.399 00:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:06.399 "name": "Existed_Raid", 00:25:06.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.399 "strip_size_kb": 64, 00:25:06.399 "state": "configuring", 00:25:06.399 "raid_level": "concat", 00:25:06.399 "superblock": false, 00:25:06.399 "num_base_bdevs": 4, 00:25:06.399 "num_base_bdevs_discovered": 1, 00:25:06.399 "num_base_bdevs_operational": 4, 00:25:06.399 "base_bdevs_list": [ 00:25:06.399 { 00:25:06.399 "name": "BaseBdev1", 00:25:06.399 "uuid": "da259b00-3221-4156-8832-f121964b360d", 00:25:06.399 "is_configured": true, 00:25:06.399 "data_offset": 0, 00:25:06.399 "data_size": 65536 00:25:06.399 }, 00:25:06.399 { 00:25:06.399 "name": "BaseBdev2", 00:25:06.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.399 "is_configured": false, 00:25:06.399 "data_offset": 0, 00:25:06.399 "data_size": 0 00:25:06.399 }, 00:25:06.399 { 00:25:06.399 "name": "BaseBdev3", 00:25:06.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.399 "is_configured": false, 00:25:06.399 "data_offset": 0, 00:25:06.399 "data_size": 0 
00:25:06.399 }, 00:25:06.399 { 00:25:06.399 "name": "BaseBdev4", 00:25:06.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.399 "is_configured": false, 00:25:06.399 "data_offset": 0, 00:25:06.399 "data_size": 0 00:25:06.399 } 00:25:06.399 ] 00:25:06.399 }' 00:25:06.399 00:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:06.399 00:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:06.966 00:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:07.225 [2024-07-25 00:51:29.765583] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:07.225 BaseBdev2 00:25:07.225 00:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:25:07.226 00:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:25:07.226 00:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:07.226 00:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:07.226 00:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:07.226 00:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:07.226 00:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:07.485 00:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:07.485 [ 00:25:07.485 { 00:25:07.485 "name": "BaseBdev2", 00:25:07.485 "aliases": [ 00:25:07.485 "1688760b-d331-43c6-9ce1-7876e6db2820" 00:25:07.485 ], 00:25:07.485 "product_name": "Malloc disk", 00:25:07.485 "block_size": 512, 00:25:07.485 "num_blocks": 65536, 00:25:07.485 "uuid": "1688760b-d331-43c6-9ce1-7876e6db2820", 00:25:07.485 "assigned_rate_limits": { 00:25:07.485 "rw_ios_per_sec": 0, 00:25:07.485 "rw_mbytes_per_sec": 0, 00:25:07.485 "r_mbytes_per_sec": 0, 00:25:07.485 "w_mbytes_per_sec": 0 00:25:07.485 }, 00:25:07.485 "claimed": true, 00:25:07.485 "claim_type": "exclusive_write", 00:25:07.485 "zoned": false, 00:25:07.485 "supported_io_types": { 00:25:07.485 "read": true, 00:25:07.485 "write": true, 00:25:07.485 "unmap": true, 00:25:07.485 "flush": true, 00:25:07.485 "reset": true, 00:25:07.485 "nvme_admin": false, 00:25:07.485 "nvme_io": false, 00:25:07.485 "nvme_io_md": false, 00:25:07.485 "write_zeroes": true, 00:25:07.485 "zcopy": true, 00:25:07.485 "get_zone_info": false, 00:25:07.485 "zone_management": false, 00:25:07.485 "zone_append": false, 00:25:07.485 "compare": false, 00:25:07.485 "compare_and_write": false, 00:25:07.485 "abort": true, 00:25:07.485 "seek_hole": false, 00:25:07.485 "seek_data": false, 00:25:07.485 "copy": true, 00:25:07.485 "nvme_iov_md": false 00:25:07.485 }, 00:25:07.485 "memory_domains": [ 00:25:07.485 { 00:25:07.485 "dma_device_id": "system", 00:25:07.485 "dma_device_type": 1 00:25:07.485 }, 00:25:07.485 { 00:25:07.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:07.485 "dma_device_type": 2 00:25:07.485 } 00:25:07.485 ], 00:25:07.485 "driver_specific": {} 00:25:07.485 } 00:25:07.485 ] 00:25:07.744 
00:51:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:07.744 00:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:07.744 00:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:07.744 00:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:07.744 00:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:07.744 00:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:07.744 00:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:07.744 00:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:07.744 00:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:07.744 00:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:07.744 00:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:07.744 00:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:07.744 00:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:07.744 00:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.744 00:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:08.002 00:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:08.002 "name": "Existed_Raid", 00:25:08.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.002 "strip_size_kb": 64, 00:25:08.002 "state": "configuring", 00:25:08.002 "raid_level": "concat", 00:25:08.002 "superblock": false, 00:25:08.002 "num_base_bdevs": 4, 00:25:08.002 "num_base_bdevs_discovered": 2, 00:25:08.002 "num_base_bdevs_operational": 4, 00:25:08.002 "base_bdevs_list": [ 00:25:08.002 { 00:25:08.002 "name": "BaseBdev1", 00:25:08.002 "uuid": "da259b00-3221-4156-8832-f121964b360d", 00:25:08.002 "is_configured": true, 00:25:08.002 "data_offset": 0, 00:25:08.002 "data_size": 65536 00:25:08.002 }, 00:25:08.002 { 00:25:08.002 "name": "BaseBdev2", 00:25:08.002 "uuid": "1688760b-d331-43c6-9ce1-7876e6db2820", 00:25:08.002 "is_configured": true, 00:25:08.002 "data_offset": 0, 00:25:08.002 "data_size": 65536 00:25:08.002 }, 00:25:08.002 { 00:25:08.002 "name": "BaseBdev3", 00:25:08.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.002 "is_configured": false, 00:25:08.002 "data_offset": 0, 00:25:08.002 "data_size": 0 00:25:08.002 }, 00:25:08.002 { 00:25:08.002 "name": "BaseBdev4", 00:25:08.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.002 "is_configured": false, 00:25:08.002 "data_offset": 0, 00:25:08.002 "data_size": 0 00:25:08.002 } 00:25:08.002 ] 00:25:08.002 }' 00:25:08.002 00:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:08.002 00:51:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.570 00:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:08.570 [2024-07-25 00:51:31.153378] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:08.570 BaseBdev3 00:25:08.570 00:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:25:08.570 00:51:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:25:08.570 00:51:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:08.570 00:51:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:08.570 00:51:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:08.570 00:51:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:08.570 00:51:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:08.830 00:51:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:09.090 [ 00:25:09.090 { 00:25:09.090 "name": "BaseBdev3", 00:25:09.090 "aliases": [ 00:25:09.090 "d93b3005-737e-4ab7-9b5c-f276cd196b17" 00:25:09.090 ], 00:25:09.090 "product_name": "Malloc disk", 00:25:09.090 "block_size": 512, 00:25:09.090 "num_blocks": 65536, 00:25:09.090 "uuid": "d93b3005-737e-4ab7-9b5c-f276cd196b17", 00:25:09.090 "assigned_rate_limits": { 00:25:09.090 "rw_ios_per_sec": 0, 00:25:09.090 "rw_mbytes_per_sec": 0, 00:25:09.090 "r_mbytes_per_sec": 0, 00:25:09.090 "w_mbytes_per_sec": 0 00:25:09.090 }, 00:25:09.090 "claimed": true, 00:25:09.090 "claim_type": "exclusive_write", 00:25:09.090 "zoned": false, 00:25:09.090 "supported_io_types": { 00:25:09.090 "read": true, 00:25:09.090 "write": true, 00:25:09.090 "unmap": true, 00:25:09.090 "flush": true, 00:25:09.090 "reset": true, 00:25:09.090 "nvme_admin": false, 00:25:09.090 "nvme_io": false, 00:25:09.090 "nvme_io_md": false, 00:25:09.090 "write_zeroes": true, 00:25:09.090 "zcopy": true, 00:25:09.090 "get_zone_info": false, 00:25:09.090 "zone_management": false, 00:25:09.090 "zone_append": false, 00:25:09.090 "compare": false, 00:25:09.090 "compare_and_write": false, 00:25:09.090 "abort": true, 00:25:09.090 "seek_hole": false, 00:25:09.090 "seek_data": false, 00:25:09.090 "copy": true, 00:25:09.090 "nvme_iov_md": false 00:25:09.090 }, 00:25:09.090 "memory_domains": [ 00:25:09.090 { 00:25:09.090 "dma_device_id": "system", 00:25:09.090 "dma_device_type": 1 00:25:09.090 }, 00:25:09.090 { 00:25:09.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:09.090 "dma_device_type": 2 00:25:09.090 } 00:25:09.090 ], 00:25:09.090 "driver_specific": {} 00:25:09.090 } 00:25:09.090 ] 00:25:09.090 00:51:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:09.090 00:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:09.090 00:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:09.090 00:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:09.090 00:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:09.090 00:51:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:09.090 00:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:09.090 00:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:09.090 00:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:09.090 00:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:09.090 00:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:09.090 00:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:09.090 00:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:09.090 00:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.090 00:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:09.350 00:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:09.350 "name": "Existed_Raid", 00:25:09.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.350 "strip_size_kb": 64, 00:25:09.350 "state": "configuring", 00:25:09.350 "raid_level": "concat", 00:25:09.350 "superblock": false, 00:25:09.350 "num_base_bdevs": 4, 00:25:09.350 "num_base_bdevs_discovered": 3, 00:25:09.350 "num_base_bdevs_operational": 4, 00:25:09.350 "base_bdevs_list": [ 00:25:09.350 { 00:25:09.350 "name": "BaseBdev1", 00:25:09.350 "uuid": "da259b00-3221-4156-8832-f121964b360d", 00:25:09.350 "is_configured": true, 00:25:09.350 "data_offset": 0, 00:25:09.350 "data_size": 65536 00:25:09.350 }, 00:25:09.350 { 00:25:09.350 "name": "BaseBdev2", 00:25:09.350 "uuid": "1688760b-d331-43c6-9ce1-7876e6db2820", 00:25:09.350 "is_configured": true, 00:25:09.350 "data_offset": 0, 00:25:09.350 "data_size": 65536 00:25:09.350 }, 00:25:09.350 { 00:25:09.350 "name": "BaseBdev3", 00:25:09.350 "uuid": "d93b3005-737e-4ab7-9b5c-f276cd196b17", 00:25:09.350 "is_configured": true, 00:25:09.350 "data_offset": 0, 00:25:09.350 "data_size": 65536 00:25:09.350 }, 00:25:09.350 { 00:25:09.350 "name": "BaseBdev4", 00:25:09.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.350 "is_configured": false, 00:25:09.350 "data_offset": 0, 00:25:09.350 "data_size": 0 00:25:09.350 } 00:25:09.350 ] 00:25:09.350 }' 00:25:09.350 00:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:09.350 00:51:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.917 00:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:10.176 [2024-07-25 00:51:32.763880] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:10.176 [2024-07-25 00:51:32.763926] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:25:10.176 [2024-07-25 00:51:32.763933] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:25:10.176 [2024-07-25 00:51:32.764023] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:25:10.176 [2024-07-25 
00:51:32.764298] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:25:10.176 [2024-07-25 00:51:32.764308] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:25:10.176 [2024-07-25 00:51:32.764545] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:10.176 BaseBdev4 00:25:10.176 00:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:25:10.176 00:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:25:10.176 00:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:10.176 00:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:10.176 00:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:10.176 00:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:10.176 00:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:10.434 00:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:10.694 [ 00:25:10.694 { 00:25:10.694 "name": "BaseBdev4", 00:25:10.694 "aliases": [ 00:25:10.694 "e6e271c2-0b90-4b05-b6f6-091f4c93f627" 00:25:10.694 ], 00:25:10.694 "product_name": "Malloc disk", 00:25:10.694 "block_size": 512, 00:25:10.694 "num_blocks": 65536, 00:25:10.694 "uuid": "e6e271c2-0b90-4b05-b6f6-091f4c93f627", 00:25:10.694 "assigned_rate_limits": { 00:25:10.694 "rw_ios_per_sec": 0, 00:25:10.694 "rw_mbytes_per_sec": 0, 00:25:10.694 "r_mbytes_per_sec": 0, 00:25:10.694 "w_mbytes_per_sec": 0 00:25:10.694 }, 00:25:10.694 "claimed": true, 00:25:10.694 "claim_type": "exclusive_write", 00:25:10.694 "zoned": false, 00:25:10.694 "supported_io_types": { 00:25:10.694 "read": true, 00:25:10.694 "write": true, 00:25:10.694 "unmap": true, 00:25:10.694 "flush": true, 00:25:10.694 "reset": true, 00:25:10.694 "nvme_admin": false, 00:25:10.694 "nvme_io": false, 00:25:10.694 "nvme_io_md": false, 00:25:10.694 "write_zeroes": true, 00:25:10.694 "zcopy": true, 00:25:10.694 "get_zone_info": false, 00:25:10.694 "zone_management": false, 00:25:10.694 "zone_append": false, 00:25:10.694 "compare": false, 00:25:10.694 "compare_and_write": false, 00:25:10.694 "abort": true, 00:25:10.694 "seek_hole": false, 00:25:10.694 "seek_data": false, 00:25:10.694 "copy": true, 00:25:10.694 "nvme_iov_md": false 00:25:10.694 }, 00:25:10.694 "memory_domains": [ 00:25:10.694 { 00:25:10.694 "dma_device_id": "system", 00:25:10.694 "dma_device_type": 1 00:25:10.694 }, 00:25:10.694 { 00:25:10.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:10.694 "dma_device_type": 2 00:25:10.694 } 00:25:10.694 ], 00:25:10.694 "driver_specific": {} 00:25:10.694 } 00:25:10.694 ] 00:25:10.694 00:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:10.694 00:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:10.694 00:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:10.694 00:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid 
online concat 64 4 00:25:10.694 00:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:10.694 00:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:10.694 00:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:10.694 00:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:10.694 00:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:10.694 00:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:10.694 00:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:10.694 00:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:10.694 00:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:10.694 00:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.694 00:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:10.953 00:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:10.953 "name": "Existed_Raid", 00:25:10.953 "uuid": "163247e1-83b5-4951-ae31-b63085c2b9dc", 00:25:10.953 "strip_size_kb": 64, 00:25:10.953 "state": "online", 00:25:10.953 "raid_level": "concat", 00:25:10.953 "superblock": false, 00:25:10.953 "num_base_bdevs": 4, 00:25:10.953 "num_base_bdevs_discovered": 4, 00:25:10.953 "num_base_bdevs_operational": 4, 00:25:10.953 "base_bdevs_list": [ 00:25:10.953 { 00:25:10.953 "name": "BaseBdev1", 00:25:10.953 "uuid": "da259b00-3221-4156-8832-f121964b360d", 00:25:10.953 "is_configured": true, 00:25:10.953 "data_offset": 0, 00:25:10.953 "data_size": 65536 00:25:10.953 }, 00:25:10.953 { 00:25:10.953 "name": "BaseBdev2", 00:25:10.953 "uuid": "1688760b-d331-43c6-9ce1-7876e6db2820", 00:25:10.953 "is_configured": true, 00:25:10.953 "data_offset": 0, 00:25:10.953 "data_size": 65536 00:25:10.953 }, 00:25:10.953 { 00:25:10.953 "name": "BaseBdev3", 00:25:10.953 "uuid": "d93b3005-737e-4ab7-9b5c-f276cd196b17", 00:25:10.953 "is_configured": true, 00:25:10.953 "data_offset": 0, 00:25:10.953 "data_size": 65536 00:25:10.953 }, 00:25:10.953 { 00:25:10.953 "name": "BaseBdev4", 00:25:10.953 "uuid": "e6e271c2-0b90-4b05-b6f6-091f4c93f627", 00:25:10.953 "is_configured": true, 00:25:10.953 "data_offset": 0, 00:25:10.953 "data_size": 65536 00:25:10.953 } 00:25:10.953 ] 00:25:10.953 }' 00:25:10.953 00:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:10.953 00:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.520 00:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:25:11.520 00:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:11.520 00:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:11.520 00:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:11.520 00:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 
00:25:11.520 00:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:11.520 00:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:11.520 00:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:11.780 [2024-07-25 00:51:34.248397] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:11.780 00:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:11.780 "name": "Existed_Raid", 00:25:11.780 "aliases": [ 00:25:11.780 "163247e1-83b5-4951-ae31-b63085c2b9dc" 00:25:11.780 ], 00:25:11.780 "product_name": "Raid Volume", 00:25:11.780 "block_size": 512, 00:25:11.780 "num_blocks": 262144, 00:25:11.780 "uuid": "163247e1-83b5-4951-ae31-b63085c2b9dc", 00:25:11.780 "assigned_rate_limits": { 00:25:11.780 "rw_ios_per_sec": 0, 00:25:11.780 "rw_mbytes_per_sec": 0, 00:25:11.780 "r_mbytes_per_sec": 0, 00:25:11.780 "w_mbytes_per_sec": 0 00:25:11.780 }, 00:25:11.780 "claimed": false, 00:25:11.780 "zoned": false, 00:25:11.780 "supported_io_types": { 00:25:11.780 "read": true, 00:25:11.780 "write": true, 00:25:11.780 "unmap": true, 00:25:11.780 "flush": true, 00:25:11.780 "reset": true, 00:25:11.780 "nvme_admin": false, 00:25:11.780 "nvme_io": false, 00:25:11.780 "nvme_io_md": false, 00:25:11.780 "write_zeroes": true, 00:25:11.780 "zcopy": false, 00:25:11.780 "get_zone_info": false, 00:25:11.780 "zone_management": false, 00:25:11.780 "zone_append": false, 00:25:11.780 "compare": false, 00:25:11.780 "compare_and_write": false, 00:25:11.780 "abort": false, 00:25:11.780 "seek_hole": false, 00:25:11.780 "seek_data": false, 00:25:11.780 "copy": false, 00:25:11.780 "nvme_iov_md": false 00:25:11.780 }, 00:25:11.780 "memory_domains": [ 00:25:11.780 { 00:25:11.780 "dma_device_id": "system", 00:25:11.780 "dma_device_type": 1 00:25:11.780 }, 00:25:11.780 { 00:25:11.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.780 "dma_device_type": 2 00:25:11.780 }, 00:25:11.780 { 00:25:11.780 "dma_device_id": "system", 00:25:11.780 "dma_device_type": 1 00:25:11.780 }, 00:25:11.780 { 00:25:11.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.780 "dma_device_type": 2 00:25:11.780 }, 00:25:11.780 { 00:25:11.780 "dma_device_id": "system", 00:25:11.780 "dma_device_type": 1 00:25:11.780 }, 00:25:11.780 { 00:25:11.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.780 "dma_device_type": 2 00:25:11.780 }, 00:25:11.780 { 00:25:11.780 "dma_device_id": "system", 00:25:11.780 "dma_device_type": 1 00:25:11.780 }, 00:25:11.780 { 00:25:11.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.780 "dma_device_type": 2 00:25:11.780 } 00:25:11.780 ], 00:25:11.780 "driver_specific": { 00:25:11.780 "raid": { 00:25:11.780 "uuid": "163247e1-83b5-4951-ae31-b63085c2b9dc", 00:25:11.780 "strip_size_kb": 64, 00:25:11.780 "state": "online", 00:25:11.780 "raid_level": "concat", 00:25:11.780 "superblock": false, 00:25:11.780 "num_base_bdevs": 4, 00:25:11.780 "num_base_bdevs_discovered": 4, 00:25:11.780 "num_base_bdevs_operational": 4, 00:25:11.780 "base_bdevs_list": [ 00:25:11.780 { 00:25:11.780 "name": "BaseBdev1", 00:25:11.780 "uuid": "da259b00-3221-4156-8832-f121964b360d", 00:25:11.780 "is_configured": true, 00:25:11.780 "data_offset": 0, 00:25:11.780 "data_size": 65536 00:25:11.780 }, 00:25:11.780 { 00:25:11.780 "name": "BaseBdev2", 00:25:11.780 "uuid": 
"1688760b-d331-43c6-9ce1-7876e6db2820", 00:25:11.780 "is_configured": true, 00:25:11.780 "data_offset": 0, 00:25:11.780 "data_size": 65536 00:25:11.780 }, 00:25:11.780 { 00:25:11.780 "name": "BaseBdev3", 00:25:11.780 "uuid": "d93b3005-737e-4ab7-9b5c-f276cd196b17", 00:25:11.780 "is_configured": true, 00:25:11.780 "data_offset": 0, 00:25:11.780 "data_size": 65536 00:25:11.780 }, 00:25:11.780 { 00:25:11.780 "name": "BaseBdev4", 00:25:11.780 "uuid": "e6e271c2-0b90-4b05-b6f6-091f4c93f627", 00:25:11.780 "is_configured": true, 00:25:11.780 "data_offset": 0, 00:25:11.780 "data_size": 65536 00:25:11.780 } 00:25:11.780 ] 00:25:11.780 } 00:25:11.780 } 00:25:11.780 }' 00:25:11.780 00:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:11.780 00:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:25:11.780 BaseBdev2 00:25:11.780 BaseBdev3 00:25:11.780 BaseBdev4' 00:25:11.780 00:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:11.780 00:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:11.780 00:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:12.040 00:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:12.040 "name": "BaseBdev1", 00:25:12.040 "aliases": [ 00:25:12.040 "da259b00-3221-4156-8832-f121964b360d" 00:25:12.040 ], 00:25:12.040 "product_name": "Malloc disk", 00:25:12.040 "block_size": 512, 00:25:12.040 "num_blocks": 65536, 00:25:12.040 "uuid": "da259b00-3221-4156-8832-f121964b360d", 00:25:12.040 "assigned_rate_limits": { 00:25:12.040 "rw_ios_per_sec": 0, 00:25:12.040 "rw_mbytes_per_sec": 0, 00:25:12.040 "r_mbytes_per_sec": 0, 00:25:12.040 "w_mbytes_per_sec": 0 00:25:12.040 }, 00:25:12.040 "claimed": true, 00:25:12.041 "claim_type": "exclusive_write", 00:25:12.041 "zoned": false, 00:25:12.041 "supported_io_types": { 00:25:12.041 "read": true, 00:25:12.041 "write": true, 00:25:12.041 "unmap": true, 00:25:12.041 "flush": true, 00:25:12.041 "reset": true, 00:25:12.041 "nvme_admin": false, 00:25:12.041 "nvme_io": false, 00:25:12.041 "nvme_io_md": false, 00:25:12.041 "write_zeroes": true, 00:25:12.041 "zcopy": true, 00:25:12.041 "get_zone_info": false, 00:25:12.041 "zone_management": false, 00:25:12.041 "zone_append": false, 00:25:12.041 "compare": false, 00:25:12.041 "compare_and_write": false, 00:25:12.041 "abort": true, 00:25:12.041 "seek_hole": false, 00:25:12.041 "seek_data": false, 00:25:12.041 "copy": true, 00:25:12.041 "nvme_iov_md": false 00:25:12.041 }, 00:25:12.041 "memory_domains": [ 00:25:12.041 { 00:25:12.041 "dma_device_id": "system", 00:25:12.041 "dma_device_type": 1 00:25:12.041 }, 00:25:12.041 { 00:25:12.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.041 "dma_device_type": 2 00:25:12.041 } 00:25:12.041 ], 00:25:12.041 "driver_specific": {} 00:25:12.041 }' 00:25:12.041 00:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:12.041 00:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:12.041 00:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:12.041 00:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:25:12.041 00:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:12.041 00:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:12.041 00:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:12.301 00:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:12.301 00:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:12.301 00:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:12.301 00:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:12.301 00:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:12.301 00:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:12.301 00:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:12.301 00:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:12.560 00:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:12.560 "name": "BaseBdev2", 00:25:12.560 "aliases": [ 00:25:12.560 "1688760b-d331-43c6-9ce1-7876e6db2820" 00:25:12.560 ], 00:25:12.560 "product_name": "Malloc disk", 00:25:12.560 "block_size": 512, 00:25:12.560 "num_blocks": 65536, 00:25:12.560 "uuid": "1688760b-d331-43c6-9ce1-7876e6db2820", 00:25:12.560 "assigned_rate_limits": { 00:25:12.560 "rw_ios_per_sec": 0, 00:25:12.560 "rw_mbytes_per_sec": 0, 00:25:12.560 "r_mbytes_per_sec": 0, 00:25:12.560 "w_mbytes_per_sec": 0 00:25:12.560 }, 00:25:12.560 "claimed": true, 00:25:12.560 "claim_type": "exclusive_write", 00:25:12.560 "zoned": false, 00:25:12.560 "supported_io_types": { 00:25:12.560 "read": true, 00:25:12.560 "write": true, 00:25:12.560 "unmap": true, 00:25:12.560 "flush": true, 00:25:12.560 "reset": true, 00:25:12.560 "nvme_admin": false, 00:25:12.560 "nvme_io": false, 00:25:12.560 "nvme_io_md": false, 00:25:12.560 "write_zeroes": true, 00:25:12.560 "zcopy": true, 00:25:12.560 "get_zone_info": false, 00:25:12.560 "zone_management": false, 00:25:12.560 "zone_append": false, 00:25:12.560 "compare": false, 00:25:12.560 "compare_and_write": false, 00:25:12.560 "abort": true, 00:25:12.560 "seek_hole": false, 00:25:12.560 "seek_data": false, 00:25:12.560 "copy": true, 00:25:12.560 "nvme_iov_md": false 00:25:12.560 }, 00:25:12.560 "memory_domains": [ 00:25:12.560 { 00:25:12.560 "dma_device_id": "system", 00:25:12.560 "dma_device_type": 1 00:25:12.560 }, 00:25:12.560 { 00:25:12.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.560 "dma_device_type": 2 00:25:12.560 } 00:25:12.560 ], 00:25:12.560 "driver_specific": {} 00:25:12.560 }' 00:25:12.560 00:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:12.560 00:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:12.560 00:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:12.560 00:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:12.819 00:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:12.819 00:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == 
null ]] 00:25:12.819 00:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:12.819 00:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:12.819 00:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:12.819 00:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:12.819 00:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:12.819 00:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:12.819 00:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:13.077 00:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:13.077 00:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:13.077 00:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:13.077 "name": "BaseBdev3", 00:25:13.077 "aliases": [ 00:25:13.077 "d93b3005-737e-4ab7-9b5c-f276cd196b17" 00:25:13.077 ], 00:25:13.077 "product_name": "Malloc disk", 00:25:13.077 "block_size": 512, 00:25:13.077 "num_blocks": 65536, 00:25:13.077 "uuid": "d93b3005-737e-4ab7-9b5c-f276cd196b17", 00:25:13.077 "assigned_rate_limits": { 00:25:13.077 "rw_ios_per_sec": 0, 00:25:13.077 "rw_mbytes_per_sec": 0, 00:25:13.077 "r_mbytes_per_sec": 0, 00:25:13.077 "w_mbytes_per_sec": 0 00:25:13.077 }, 00:25:13.077 "claimed": true, 00:25:13.077 "claim_type": "exclusive_write", 00:25:13.077 "zoned": false, 00:25:13.077 "supported_io_types": { 00:25:13.077 "read": true, 00:25:13.077 "write": true, 00:25:13.077 "unmap": true, 00:25:13.077 "flush": true, 00:25:13.077 "reset": true, 00:25:13.077 "nvme_admin": false, 00:25:13.077 "nvme_io": false, 00:25:13.077 "nvme_io_md": false, 00:25:13.077 "write_zeroes": true, 00:25:13.077 "zcopy": true, 00:25:13.077 "get_zone_info": false, 00:25:13.077 "zone_management": false, 00:25:13.077 "zone_append": false, 00:25:13.077 "compare": false, 00:25:13.077 "compare_and_write": false, 00:25:13.077 "abort": true, 00:25:13.077 "seek_hole": false, 00:25:13.077 "seek_data": false, 00:25:13.077 "copy": true, 00:25:13.077 "nvme_iov_md": false 00:25:13.077 }, 00:25:13.077 "memory_domains": [ 00:25:13.077 { 00:25:13.077 "dma_device_id": "system", 00:25:13.077 "dma_device_type": 1 00:25:13.077 }, 00:25:13.077 { 00:25:13.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.077 "dma_device_type": 2 00:25:13.077 } 00:25:13.077 ], 00:25:13.077 "driver_specific": {} 00:25:13.077 }' 00:25:13.077 00:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:13.336 00:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:13.336 00:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:13.336 00:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:13.336 00:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:13.336 00:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:13.336 00:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:13.336 00:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # 
jq .md_interleave 00:25:13.596 00:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:13.596 00:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:13.596 00:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:13.596 00:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:13.596 00:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:13.596 00:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:13.596 00:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:13.855 00:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:13.855 "name": "BaseBdev4", 00:25:13.855 "aliases": [ 00:25:13.855 "e6e271c2-0b90-4b05-b6f6-091f4c93f627" 00:25:13.855 ], 00:25:13.855 "product_name": "Malloc disk", 00:25:13.855 "block_size": 512, 00:25:13.855 "num_blocks": 65536, 00:25:13.855 "uuid": "e6e271c2-0b90-4b05-b6f6-091f4c93f627", 00:25:13.855 "assigned_rate_limits": { 00:25:13.855 "rw_ios_per_sec": 0, 00:25:13.855 "rw_mbytes_per_sec": 0, 00:25:13.856 "r_mbytes_per_sec": 0, 00:25:13.856 "w_mbytes_per_sec": 0 00:25:13.856 }, 00:25:13.856 "claimed": true, 00:25:13.856 "claim_type": "exclusive_write", 00:25:13.856 "zoned": false, 00:25:13.856 "supported_io_types": { 00:25:13.856 "read": true, 00:25:13.856 "write": true, 00:25:13.856 "unmap": true, 00:25:13.856 "flush": true, 00:25:13.856 "reset": true, 00:25:13.856 "nvme_admin": false, 00:25:13.856 "nvme_io": false, 00:25:13.856 "nvme_io_md": false, 00:25:13.856 "write_zeroes": true, 00:25:13.856 "zcopy": true, 00:25:13.856 "get_zone_info": false, 00:25:13.856 "zone_management": false, 00:25:13.856 "zone_append": false, 00:25:13.856 "compare": false, 00:25:13.856 "compare_and_write": false, 00:25:13.856 "abort": true, 00:25:13.856 "seek_hole": false, 00:25:13.856 "seek_data": false, 00:25:13.856 "copy": true, 00:25:13.856 "nvme_iov_md": false 00:25:13.856 }, 00:25:13.856 "memory_domains": [ 00:25:13.856 { 00:25:13.856 "dma_device_id": "system", 00:25:13.856 "dma_device_type": 1 00:25:13.856 }, 00:25:13.856 { 00:25:13.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.856 "dma_device_type": 2 00:25:13.856 } 00:25:13.856 ], 00:25:13.856 "driver_specific": {} 00:25:13.856 }' 00:25:13.856 00:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:13.856 00:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:13.856 00:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:13.856 00:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:14.114 00:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:14.114 00:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:14.114 00:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:14.114 00:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:14.114 00:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:14.114 00:51:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:14.114 00:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:14.114 00:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:14.114 00:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:14.373 [2024-07-25 00:51:36.927002] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:14.373 [2024-07-25 00:51:36.927032] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:14.373 [2024-07-25 00:51:36.927097] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:14.631 00:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:25:14.631 00:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:25:14.631 00:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:14.631 00:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:25:14.631 00:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:25:14.631 00:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:25:14.631 00:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:14.631 00:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:25:14.631 00:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:14.631 00:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:14.631 00:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:14.631 00:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:14.631 00:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:14.631 00:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:14.631 00:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:14.631 00:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.631 00:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:14.890 00:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:14.890 "name": "Existed_Raid", 00:25:14.890 "uuid": "163247e1-83b5-4951-ae31-b63085c2b9dc", 00:25:14.890 "strip_size_kb": 64, 00:25:14.890 "state": "offline", 00:25:14.890 "raid_level": "concat", 00:25:14.890 "superblock": false, 00:25:14.890 "num_base_bdevs": 4, 00:25:14.890 "num_base_bdevs_discovered": 3, 00:25:14.890 "num_base_bdevs_operational": 3, 00:25:14.890 "base_bdevs_list": [ 00:25:14.890 { 00:25:14.890 "name": null, 00:25:14.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:14.890 "is_configured": false, 00:25:14.890 "data_offset": 0, 00:25:14.890 "data_size": 65536 00:25:14.890 }, 00:25:14.890 { 00:25:14.890 "name": "BaseBdev2", 
00:25:14.890 "uuid": "1688760b-d331-43c6-9ce1-7876e6db2820", 00:25:14.890 "is_configured": true, 00:25:14.890 "data_offset": 0, 00:25:14.890 "data_size": 65536 00:25:14.890 }, 00:25:14.890 { 00:25:14.890 "name": "BaseBdev3", 00:25:14.890 "uuid": "d93b3005-737e-4ab7-9b5c-f276cd196b17", 00:25:14.890 "is_configured": true, 00:25:14.890 "data_offset": 0, 00:25:14.890 "data_size": 65536 00:25:14.890 }, 00:25:14.890 { 00:25:14.890 "name": "BaseBdev4", 00:25:14.890 "uuid": "e6e271c2-0b90-4b05-b6f6-091f4c93f627", 00:25:14.890 "is_configured": true, 00:25:14.890 "data_offset": 0, 00:25:14.890 "data_size": 65536 00:25:14.890 } 00:25:14.890 ] 00:25:14.890 }' 00:25:14.890 00:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:14.890 00:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.458 00:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:25:15.458 00:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:15.458 00:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.458 00:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:15.717 00:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:15.717 00:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:15.717 00:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:15.976 [2024-07-25 00:51:38.407678] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:15.976 00:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:15.976 00:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:15.976 00:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.976 00:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:16.235 00:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:16.235 00:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:16.235 00:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:16.494 [2024-07-25 00:51:39.025889] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:16.494 00:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:16.494 00:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:16.753 00:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.753 00:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:17.012 00:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:17.012 00:51:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:17.012 00:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:25:17.272 [2024-07-25 00:51:39.665152] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:17.272 [2024-07-25 00:51:39.665202] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:25:17.272 00:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:17.272 00:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:17.272 00:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.272 00:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:25:17.531 00:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:25:17.531 00:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:25:17.531 00:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:25:17.531 00:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:25:17.531 00:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:17.531 00:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:17.790 BaseBdev2 00:25:17.790 00:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:25:17.790 00:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:25:17.790 00:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:17.790 00:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:17.790 00:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:17.790 00:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:17.790 00:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:17.790 00:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:18.049 [ 00:25:18.049 { 00:25:18.049 "name": "BaseBdev2", 00:25:18.049 "aliases": [ 00:25:18.049 "de55dca9-7deb-46ce-8937-c0c6b215d80d" 00:25:18.049 ], 00:25:18.049 "product_name": "Malloc disk", 00:25:18.049 "block_size": 512, 00:25:18.049 "num_blocks": 65536, 00:25:18.049 "uuid": "de55dca9-7deb-46ce-8937-c0c6b215d80d", 00:25:18.049 "assigned_rate_limits": { 00:25:18.049 "rw_ios_per_sec": 0, 00:25:18.049 "rw_mbytes_per_sec": 0, 00:25:18.049 "r_mbytes_per_sec": 0, 00:25:18.049 "w_mbytes_per_sec": 0 00:25:18.049 }, 00:25:18.049 "claimed": false, 00:25:18.049 "zoned": false, 00:25:18.049 "supported_io_types": { 00:25:18.049 "read": true, 00:25:18.049 "write": true, 00:25:18.049 "unmap": 
true, 00:25:18.049 "flush": true, 00:25:18.049 "reset": true, 00:25:18.049 "nvme_admin": false, 00:25:18.049 "nvme_io": false, 00:25:18.049 "nvme_io_md": false, 00:25:18.049 "write_zeroes": true, 00:25:18.049 "zcopy": true, 00:25:18.049 "get_zone_info": false, 00:25:18.049 "zone_management": false, 00:25:18.049 "zone_append": false, 00:25:18.049 "compare": false, 00:25:18.049 "compare_and_write": false, 00:25:18.049 "abort": true, 00:25:18.049 "seek_hole": false, 00:25:18.049 "seek_data": false, 00:25:18.049 "copy": true, 00:25:18.049 "nvme_iov_md": false 00:25:18.049 }, 00:25:18.049 "memory_domains": [ 00:25:18.049 { 00:25:18.049 "dma_device_id": "system", 00:25:18.049 "dma_device_type": 1 00:25:18.049 }, 00:25:18.049 { 00:25:18.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:18.049 "dma_device_type": 2 00:25:18.049 } 00:25:18.049 ], 00:25:18.049 "driver_specific": {} 00:25:18.049 } 00:25:18.049 ] 00:25:18.049 00:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:18.049 00:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:18.049 00:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:18.049 00:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:18.308 BaseBdev3 00:25:18.308 00:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:25:18.308 00:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:25:18.308 00:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:18.308 00:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:18.308 00:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:18.308 00:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:18.308 00:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:18.567 00:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:18.827 [ 00:25:18.827 { 00:25:18.827 "name": "BaseBdev3", 00:25:18.827 "aliases": [ 00:25:18.827 "11a55391-a187-4087-a898-7f3d2c312ea4" 00:25:18.827 ], 00:25:18.827 "product_name": "Malloc disk", 00:25:18.827 "block_size": 512, 00:25:18.827 "num_blocks": 65536, 00:25:18.827 "uuid": "11a55391-a187-4087-a898-7f3d2c312ea4", 00:25:18.827 "assigned_rate_limits": { 00:25:18.827 "rw_ios_per_sec": 0, 00:25:18.827 "rw_mbytes_per_sec": 0, 00:25:18.827 "r_mbytes_per_sec": 0, 00:25:18.827 "w_mbytes_per_sec": 0 00:25:18.827 }, 00:25:18.827 "claimed": false, 00:25:18.827 "zoned": false, 00:25:18.827 "supported_io_types": { 00:25:18.827 "read": true, 00:25:18.827 "write": true, 00:25:18.827 "unmap": true, 00:25:18.827 "flush": true, 00:25:18.827 "reset": true, 00:25:18.827 "nvme_admin": false, 00:25:18.827 "nvme_io": false, 00:25:18.827 "nvme_io_md": false, 00:25:18.827 "write_zeroes": true, 00:25:18.827 "zcopy": true, 00:25:18.827 "get_zone_info": false, 00:25:18.827 "zone_management": false, 00:25:18.827 "zone_append": false, 00:25:18.827 
"compare": false, 00:25:18.827 "compare_and_write": false, 00:25:18.827 "abort": true, 00:25:18.827 "seek_hole": false, 00:25:18.827 "seek_data": false, 00:25:18.827 "copy": true, 00:25:18.827 "nvme_iov_md": false 00:25:18.827 }, 00:25:18.827 "memory_domains": [ 00:25:18.827 { 00:25:18.827 "dma_device_id": "system", 00:25:18.827 "dma_device_type": 1 00:25:18.827 }, 00:25:18.827 { 00:25:18.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:18.827 "dma_device_type": 2 00:25:18.827 } 00:25:18.827 ], 00:25:18.827 "driver_specific": {} 00:25:18.827 } 00:25:18.827 ] 00:25:18.827 00:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:18.827 00:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:18.827 00:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:18.827 00:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:19.118 BaseBdev4 00:25:19.118 00:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:25:19.118 00:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:25:19.118 00:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:19.118 00:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:19.118 00:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:19.118 00:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:19.118 00:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:19.118 00:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:19.377 [ 00:25:19.377 { 00:25:19.377 "name": "BaseBdev4", 00:25:19.377 "aliases": [ 00:25:19.377 "9d29f162-1a47-4af4-a65e-4d8854b6498e" 00:25:19.377 ], 00:25:19.377 "product_name": "Malloc disk", 00:25:19.377 "block_size": 512, 00:25:19.377 "num_blocks": 65536, 00:25:19.377 "uuid": "9d29f162-1a47-4af4-a65e-4d8854b6498e", 00:25:19.377 "assigned_rate_limits": { 00:25:19.377 "rw_ios_per_sec": 0, 00:25:19.377 "rw_mbytes_per_sec": 0, 00:25:19.377 "r_mbytes_per_sec": 0, 00:25:19.377 "w_mbytes_per_sec": 0 00:25:19.377 }, 00:25:19.377 "claimed": false, 00:25:19.377 "zoned": false, 00:25:19.377 "supported_io_types": { 00:25:19.377 "read": true, 00:25:19.377 "write": true, 00:25:19.377 "unmap": true, 00:25:19.377 "flush": true, 00:25:19.377 "reset": true, 00:25:19.377 "nvme_admin": false, 00:25:19.377 "nvme_io": false, 00:25:19.377 "nvme_io_md": false, 00:25:19.377 "write_zeroes": true, 00:25:19.377 "zcopy": true, 00:25:19.377 "get_zone_info": false, 00:25:19.377 "zone_management": false, 00:25:19.377 "zone_append": false, 00:25:19.377 "compare": false, 00:25:19.377 "compare_and_write": false, 00:25:19.377 "abort": true, 00:25:19.377 "seek_hole": false, 00:25:19.377 "seek_data": false, 00:25:19.377 "copy": true, 00:25:19.377 "nvme_iov_md": false 00:25:19.377 }, 00:25:19.377 "memory_domains": [ 00:25:19.377 { 00:25:19.377 "dma_device_id": "system", 00:25:19.377 
"dma_device_type": 1 00:25:19.377 }, 00:25:19.377 { 00:25:19.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:19.377 "dma_device_type": 2 00:25:19.377 } 00:25:19.377 ], 00:25:19.377 "driver_specific": {} 00:25:19.377 } 00:25:19.377 ] 00:25:19.377 00:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:19.377 00:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:19.377 00:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:19.377 00:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:19.634 [2024-07-25 00:51:42.116837] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:19.634 [2024-07-25 00:51:42.116900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:19.634 [2024-07-25 00:51:42.116920] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:19.634 [2024-07-25 00:51:42.118862] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:19.634 [2024-07-25 00:51:42.118931] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:19.634 00:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:19.634 00:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:19.634 00:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:19.634 00:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:19.634 00:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:19.634 00:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:19.634 00:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:19.634 00:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:19.634 00:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:19.634 00:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:19.634 00:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.634 00:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:19.892 00:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:19.892 "name": "Existed_Raid", 00:25:19.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.892 "strip_size_kb": 64, 00:25:19.892 "state": "configuring", 00:25:19.892 "raid_level": "concat", 00:25:19.892 "superblock": false, 00:25:19.892 "num_base_bdevs": 4, 00:25:19.892 "num_base_bdevs_discovered": 3, 00:25:19.892 "num_base_bdevs_operational": 4, 00:25:19.892 "base_bdevs_list": [ 00:25:19.892 { 00:25:19.892 "name": "BaseBdev1", 00:25:19.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.892 
"is_configured": false, 00:25:19.892 "data_offset": 0, 00:25:19.892 "data_size": 0 00:25:19.892 }, 00:25:19.892 { 00:25:19.892 "name": "BaseBdev2", 00:25:19.892 "uuid": "de55dca9-7deb-46ce-8937-c0c6b215d80d", 00:25:19.892 "is_configured": true, 00:25:19.892 "data_offset": 0, 00:25:19.892 "data_size": 65536 00:25:19.892 }, 00:25:19.892 { 00:25:19.892 "name": "BaseBdev3", 00:25:19.892 "uuid": "11a55391-a187-4087-a898-7f3d2c312ea4", 00:25:19.892 "is_configured": true, 00:25:19.892 "data_offset": 0, 00:25:19.892 "data_size": 65536 00:25:19.892 }, 00:25:19.892 { 00:25:19.892 "name": "BaseBdev4", 00:25:19.892 "uuid": "9d29f162-1a47-4af4-a65e-4d8854b6498e", 00:25:19.892 "is_configured": true, 00:25:19.892 "data_offset": 0, 00:25:19.892 "data_size": 65536 00:25:19.892 } 00:25:19.892 ] 00:25:19.892 }' 00:25:19.892 00:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:19.892 00:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.459 00:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:20.459 [2024-07-25 00:51:43.046878] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:20.459 00:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:20.459 00:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:20.459 00:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:20.459 00:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:20.459 00:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:20.459 00:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:20.459 00:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:20.459 00:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:20.459 00:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:20.459 00:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:20.459 00:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.459 00:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:20.718 00:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:20.718 "name": "Existed_Raid", 00:25:20.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.718 "strip_size_kb": 64, 00:25:20.718 "state": "configuring", 00:25:20.718 "raid_level": "concat", 00:25:20.718 "superblock": false, 00:25:20.718 "num_base_bdevs": 4, 00:25:20.718 "num_base_bdevs_discovered": 2, 00:25:20.718 "num_base_bdevs_operational": 4, 00:25:20.718 "base_bdevs_list": [ 00:25:20.718 { 00:25:20.718 "name": "BaseBdev1", 00:25:20.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.718 "is_configured": false, 00:25:20.718 "data_offset": 0, 00:25:20.718 "data_size": 0 00:25:20.718 }, 00:25:20.718 { 00:25:20.718 "name": null, 
00:25:20.718 "uuid": "de55dca9-7deb-46ce-8937-c0c6b215d80d", 00:25:20.718 "is_configured": false, 00:25:20.718 "data_offset": 0, 00:25:20.718 "data_size": 65536 00:25:20.718 }, 00:25:20.718 { 00:25:20.718 "name": "BaseBdev3", 00:25:20.718 "uuid": "11a55391-a187-4087-a898-7f3d2c312ea4", 00:25:20.718 "is_configured": true, 00:25:20.718 "data_offset": 0, 00:25:20.718 "data_size": 65536 00:25:20.718 }, 00:25:20.718 { 00:25:20.718 "name": "BaseBdev4", 00:25:20.718 "uuid": "9d29f162-1a47-4af4-a65e-4d8854b6498e", 00:25:20.718 "is_configured": true, 00:25:20.718 "data_offset": 0, 00:25:20.718 "data_size": 65536 00:25:20.718 } 00:25:20.718 ] 00:25:20.718 }' 00:25:20.718 00:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:20.718 00:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.286 00:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:21.286 00:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:21.545 00:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:25:21.545 00:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:21.804 [2024-07-25 00:51:44.254779] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:21.804 BaseBdev1 00:25:21.804 00:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:25:21.804 00:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:25:21.804 00:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:21.804 00:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:21.804 00:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:21.804 00:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:21.804 00:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:21.804 00:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:22.062 [ 00:25:22.062 { 00:25:22.062 "name": "BaseBdev1", 00:25:22.062 "aliases": [ 00:25:22.062 "248154c2-68d6-4215-b320-193f2a1be0ec" 00:25:22.062 ], 00:25:22.062 "product_name": "Malloc disk", 00:25:22.062 "block_size": 512, 00:25:22.062 "num_blocks": 65536, 00:25:22.062 "uuid": "248154c2-68d6-4215-b320-193f2a1be0ec", 00:25:22.062 "assigned_rate_limits": { 00:25:22.062 "rw_ios_per_sec": 0, 00:25:22.062 "rw_mbytes_per_sec": 0, 00:25:22.062 "r_mbytes_per_sec": 0, 00:25:22.062 "w_mbytes_per_sec": 0 00:25:22.062 }, 00:25:22.062 "claimed": true, 00:25:22.062 "claim_type": "exclusive_write", 00:25:22.062 "zoned": false, 00:25:22.062 "supported_io_types": { 00:25:22.062 "read": true, 00:25:22.062 "write": true, 00:25:22.062 "unmap": true, 00:25:22.062 "flush": true, 00:25:22.062 "reset": true, 00:25:22.062 "nvme_admin": false, 00:25:22.062 "nvme_io": 
false, 00:25:22.062 "nvme_io_md": false, 00:25:22.062 "write_zeroes": true, 00:25:22.062 "zcopy": true, 00:25:22.062 "get_zone_info": false, 00:25:22.062 "zone_management": false, 00:25:22.062 "zone_append": false, 00:25:22.062 "compare": false, 00:25:22.062 "compare_and_write": false, 00:25:22.062 "abort": true, 00:25:22.062 "seek_hole": false, 00:25:22.062 "seek_data": false, 00:25:22.062 "copy": true, 00:25:22.062 "nvme_iov_md": false 00:25:22.062 }, 00:25:22.062 "memory_domains": [ 00:25:22.062 { 00:25:22.062 "dma_device_id": "system", 00:25:22.062 "dma_device_type": 1 00:25:22.062 }, 00:25:22.062 { 00:25:22.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:22.062 "dma_device_type": 2 00:25:22.062 } 00:25:22.062 ], 00:25:22.062 "driver_specific": {} 00:25:22.062 } 00:25:22.062 ] 00:25:22.062 00:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:22.062 00:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:22.062 00:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:22.062 00:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:22.062 00:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:22.062 00:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:22.062 00:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:22.062 00:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:22.063 00:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:22.063 00:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:22.063 00:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:22.063 00:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.063 00:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:22.322 00:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:22.322 "name": "Existed_Raid", 00:25:22.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.322 "strip_size_kb": 64, 00:25:22.322 "state": "configuring", 00:25:22.322 "raid_level": "concat", 00:25:22.322 "superblock": false, 00:25:22.322 "num_base_bdevs": 4, 00:25:22.322 "num_base_bdevs_discovered": 3, 00:25:22.322 "num_base_bdevs_operational": 4, 00:25:22.322 "base_bdevs_list": [ 00:25:22.322 { 00:25:22.322 "name": "BaseBdev1", 00:25:22.322 "uuid": "248154c2-68d6-4215-b320-193f2a1be0ec", 00:25:22.322 "is_configured": true, 00:25:22.322 "data_offset": 0, 00:25:22.322 "data_size": 65536 00:25:22.322 }, 00:25:22.322 { 00:25:22.322 "name": null, 00:25:22.322 "uuid": "de55dca9-7deb-46ce-8937-c0c6b215d80d", 00:25:22.322 "is_configured": false, 00:25:22.322 "data_offset": 0, 00:25:22.322 "data_size": 65536 00:25:22.322 }, 00:25:22.322 { 00:25:22.322 "name": "BaseBdev3", 00:25:22.322 "uuid": "11a55391-a187-4087-a898-7f3d2c312ea4", 00:25:22.322 "is_configured": true, 00:25:22.322 "data_offset": 0, 00:25:22.322 "data_size": 65536 00:25:22.322 }, 
00:25:22.322 { 00:25:22.322 "name": "BaseBdev4", 00:25:22.322 "uuid": "9d29f162-1a47-4af4-a65e-4d8854b6498e", 00:25:22.322 "is_configured": true, 00:25:22.322 "data_offset": 0, 00:25:22.322 "data_size": 65536 00:25:22.322 } 00:25:22.322 ] 00:25:22.322 }' 00:25:22.322 00:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:22.322 00:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.891 00:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.891 00:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:22.891 00:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:25:22.891 00:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:25:23.150 [2024-07-25 00:51:45.587035] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:23.150 00:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:23.150 00:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:23.150 00:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:23.150 00:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:23.150 00:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:23.150 00:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:23.150 00:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:23.150 00:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:23.150 00:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:23.150 00:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:23.150 00:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:23.150 00:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:23.409 00:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:23.409 "name": "Existed_Raid", 00:25:23.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:23.409 "strip_size_kb": 64, 00:25:23.409 "state": "configuring", 00:25:23.409 "raid_level": "concat", 00:25:23.409 "superblock": false, 00:25:23.409 "num_base_bdevs": 4, 00:25:23.409 "num_base_bdevs_discovered": 2, 00:25:23.409 "num_base_bdevs_operational": 4, 00:25:23.409 "base_bdevs_list": [ 00:25:23.409 { 00:25:23.409 "name": "BaseBdev1", 00:25:23.409 "uuid": "248154c2-68d6-4215-b320-193f2a1be0ec", 00:25:23.409 "is_configured": true, 00:25:23.409 "data_offset": 0, 00:25:23.409 "data_size": 65536 00:25:23.409 }, 00:25:23.409 { 00:25:23.409 "name": null, 00:25:23.409 "uuid": "de55dca9-7deb-46ce-8937-c0c6b215d80d", 00:25:23.409 "is_configured": false, 00:25:23.409 "data_offset": 
0, 00:25:23.409 "data_size": 65536 00:25:23.409 }, 00:25:23.409 { 00:25:23.409 "name": null, 00:25:23.409 "uuid": "11a55391-a187-4087-a898-7f3d2c312ea4", 00:25:23.409 "is_configured": false, 00:25:23.409 "data_offset": 0, 00:25:23.409 "data_size": 65536 00:25:23.409 }, 00:25:23.409 { 00:25:23.409 "name": "BaseBdev4", 00:25:23.409 "uuid": "9d29f162-1a47-4af4-a65e-4d8854b6498e", 00:25:23.409 "is_configured": true, 00:25:23.409 "data_offset": 0, 00:25:23.409 "data_size": 65536 00:25:23.409 } 00:25:23.409 ] 00:25:23.409 }' 00:25:23.409 00:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:23.409 00:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.976 00:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:23.976 00:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:23.976 00:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:25:23.976 00:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:24.235 [2024-07-25 00:51:46.873287] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:24.494 00:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:24.494 00:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:24.494 00:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:24.494 00:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:24.494 00:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:24.494 00:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:24.494 00:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:24.494 00:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:24.494 00:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:24.494 00:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:24.494 00:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:24.494 00:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:24.494 00:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:24.494 "name": "Existed_Raid", 00:25:24.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:24.494 "strip_size_kb": 64, 00:25:24.494 "state": "configuring", 00:25:24.494 "raid_level": "concat", 00:25:24.494 "superblock": false, 00:25:24.494 "num_base_bdevs": 4, 00:25:24.494 "num_base_bdevs_discovered": 3, 00:25:24.494 "num_base_bdevs_operational": 4, 00:25:24.495 "base_bdevs_list": [ 00:25:24.495 { 00:25:24.495 "name": "BaseBdev1", 00:25:24.495 "uuid": 
"248154c2-68d6-4215-b320-193f2a1be0ec", 00:25:24.495 "is_configured": true, 00:25:24.495 "data_offset": 0, 00:25:24.495 "data_size": 65536 00:25:24.495 }, 00:25:24.495 { 00:25:24.495 "name": null, 00:25:24.495 "uuid": "de55dca9-7deb-46ce-8937-c0c6b215d80d", 00:25:24.495 "is_configured": false, 00:25:24.495 "data_offset": 0, 00:25:24.495 "data_size": 65536 00:25:24.495 }, 00:25:24.495 { 00:25:24.495 "name": "BaseBdev3", 00:25:24.495 "uuid": "11a55391-a187-4087-a898-7f3d2c312ea4", 00:25:24.495 "is_configured": true, 00:25:24.495 "data_offset": 0, 00:25:24.495 "data_size": 65536 00:25:24.495 }, 00:25:24.495 { 00:25:24.495 "name": "BaseBdev4", 00:25:24.495 "uuid": "9d29f162-1a47-4af4-a65e-4d8854b6498e", 00:25:24.495 "is_configured": true, 00:25:24.495 "data_offset": 0, 00:25:24.495 "data_size": 65536 00:25:24.495 } 00:25:24.495 ] 00:25:24.495 }' 00:25:24.495 00:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:24.495 00:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.063 00:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.063 00:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:25.321 00:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:25:25.321 00:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:25.578 [2024-07-25 00:51:47.973214] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:25.578 00:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:25.578 00:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:25.578 00:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:25.578 00:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:25.578 00:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:25.578 00:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:25.578 00:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:25.578 00:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:25.578 00:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:25.578 00:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:25.578 00:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.578 00:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:25.837 00:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:25.837 "name": "Existed_Raid", 00:25:25.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.837 "strip_size_kb": 64, 00:25:25.837 "state": "configuring", 00:25:25.837 "raid_level": 
"concat", 00:25:25.837 "superblock": false, 00:25:25.837 "num_base_bdevs": 4, 00:25:25.837 "num_base_bdevs_discovered": 2, 00:25:25.837 "num_base_bdevs_operational": 4, 00:25:25.837 "base_bdevs_list": [ 00:25:25.837 { 00:25:25.837 "name": null, 00:25:25.837 "uuid": "248154c2-68d6-4215-b320-193f2a1be0ec", 00:25:25.837 "is_configured": false, 00:25:25.837 "data_offset": 0, 00:25:25.837 "data_size": 65536 00:25:25.837 }, 00:25:25.837 { 00:25:25.837 "name": null, 00:25:25.837 "uuid": "de55dca9-7deb-46ce-8937-c0c6b215d80d", 00:25:25.837 "is_configured": false, 00:25:25.837 "data_offset": 0, 00:25:25.837 "data_size": 65536 00:25:25.837 }, 00:25:25.837 { 00:25:25.837 "name": "BaseBdev3", 00:25:25.837 "uuid": "11a55391-a187-4087-a898-7f3d2c312ea4", 00:25:25.837 "is_configured": true, 00:25:25.837 "data_offset": 0, 00:25:25.837 "data_size": 65536 00:25:25.837 }, 00:25:25.837 { 00:25:25.837 "name": "BaseBdev4", 00:25:25.837 "uuid": "9d29f162-1a47-4af4-a65e-4d8854b6498e", 00:25:25.837 "is_configured": true, 00:25:25.837 "data_offset": 0, 00:25:25.837 "data_size": 65536 00:25:25.837 } 00:25:25.837 ] 00:25:25.837 }' 00:25:25.837 00:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:25.837 00:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.404 00:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.404 00:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:26.404 00:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:25:26.404 00:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:26.663 [2024-07-25 00:51:49.193066] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:26.663 00:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:26.663 00:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:26.663 00:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:26.663 00:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:26.663 00:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:26.663 00:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:26.663 00:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:26.663 00:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:26.663 00:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:26.663 00:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:26.663 00:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:26.663 00:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:25:26.921 00:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:26.921 "name": "Existed_Raid", 00:25:26.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.921 "strip_size_kb": 64, 00:25:26.921 "state": "configuring", 00:25:26.921 "raid_level": "concat", 00:25:26.921 "superblock": false, 00:25:26.921 "num_base_bdevs": 4, 00:25:26.921 "num_base_bdevs_discovered": 3, 00:25:26.921 "num_base_bdevs_operational": 4, 00:25:26.921 "base_bdevs_list": [ 00:25:26.921 { 00:25:26.921 "name": null, 00:25:26.921 "uuid": "248154c2-68d6-4215-b320-193f2a1be0ec", 00:25:26.921 "is_configured": false, 00:25:26.921 "data_offset": 0, 00:25:26.921 "data_size": 65536 00:25:26.921 }, 00:25:26.921 { 00:25:26.921 "name": "BaseBdev2", 00:25:26.921 "uuid": "de55dca9-7deb-46ce-8937-c0c6b215d80d", 00:25:26.921 "is_configured": true, 00:25:26.921 "data_offset": 0, 00:25:26.921 "data_size": 65536 00:25:26.921 }, 00:25:26.921 { 00:25:26.921 "name": "BaseBdev3", 00:25:26.921 "uuid": "11a55391-a187-4087-a898-7f3d2c312ea4", 00:25:26.921 "is_configured": true, 00:25:26.921 "data_offset": 0, 00:25:26.921 "data_size": 65536 00:25:26.921 }, 00:25:26.921 { 00:25:26.921 "name": "BaseBdev4", 00:25:26.921 "uuid": "9d29f162-1a47-4af4-a65e-4d8854b6498e", 00:25:26.921 "is_configured": true, 00:25:26.921 "data_offset": 0, 00:25:26.922 "data_size": 65536 00:25:26.922 } 00:25:26.922 ] 00:25:26.922 }' 00:25:26.922 00:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:26.922 00:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.489 00:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.489 00:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:27.748 00:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:25:27.748 00:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.748 00:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:28.007 00:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 248154c2-68d6-4215-b320-193f2a1be0ec 00:25:28.266 [2024-07-25 00:51:50.805277] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:28.266 [2024-07-25 00:51:50.805347] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:25:28.266 [2024-07-25 00:51:50.805355] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:25:28.266 [2024-07-25 00:51:50.805493] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:28.266 [2024-07-25 00:51:50.805773] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:25:28.266 [2024-07-25 00:51:50.805792] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009380 00:25:28.266 [2024-07-25 00:51:50.806003] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:28.266 NewBaseBdev 00:25:28.266 00:51:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:25:28.266 00:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:25:28.266 00:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:28.266 00:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:28.266 00:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:28.266 00:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:28.266 00:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:28.525 00:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:28.784 [ 00:25:28.784 { 00:25:28.784 "name": "NewBaseBdev", 00:25:28.784 "aliases": [ 00:25:28.784 "248154c2-68d6-4215-b320-193f2a1be0ec" 00:25:28.784 ], 00:25:28.784 "product_name": "Malloc disk", 00:25:28.784 "block_size": 512, 00:25:28.784 "num_blocks": 65536, 00:25:28.784 "uuid": "248154c2-68d6-4215-b320-193f2a1be0ec", 00:25:28.784 "assigned_rate_limits": { 00:25:28.784 "rw_ios_per_sec": 0, 00:25:28.784 "rw_mbytes_per_sec": 0, 00:25:28.784 "r_mbytes_per_sec": 0, 00:25:28.784 "w_mbytes_per_sec": 0 00:25:28.784 }, 00:25:28.784 "claimed": true, 00:25:28.784 "claim_type": "exclusive_write", 00:25:28.784 "zoned": false, 00:25:28.784 "supported_io_types": { 00:25:28.784 "read": true, 00:25:28.784 "write": true, 00:25:28.784 "unmap": true, 00:25:28.784 "flush": true, 00:25:28.784 "reset": true, 00:25:28.784 "nvme_admin": false, 00:25:28.784 "nvme_io": false, 00:25:28.784 "nvme_io_md": false, 00:25:28.784 "write_zeroes": true, 00:25:28.784 "zcopy": true, 00:25:28.784 "get_zone_info": false, 00:25:28.784 "zone_management": false, 00:25:28.784 "zone_append": false, 00:25:28.784 "compare": false, 00:25:28.784 "compare_and_write": false, 00:25:28.784 "abort": true, 00:25:28.784 "seek_hole": false, 00:25:28.784 "seek_data": false, 00:25:28.784 "copy": true, 00:25:28.784 "nvme_iov_md": false 00:25:28.784 }, 00:25:28.784 "memory_domains": [ 00:25:28.784 { 00:25:28.784 "dma_device_id": "system", 00:25:28.784 "dma_device_type": 1 00:25:28.784 }, 00:25:28.784 { 00:25:28.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:28.784 "dma_device_type": 2 00:25:28.784 } 00:25:28.784 ], 00:25:28.784 "driver_specific": {} 00:25:28.784 } 00:25:28.784 ] 00:25:28.784 00:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:28.784 00:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:25:28.784 00:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:28.784 00:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:28.784 00:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:28.784 00:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:28.784 00:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:28.784 00:51:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:28.784 00:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:28.784 00:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:28.784 00:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:28.784 00:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.784 00:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:29.043 00:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:29.043 "name": "Existed_Raid", 00:25:29.043 "uuid": "7da37bbd-c49b-44fb-89dd-592b3b78f18f", 00:25:29.043 "strip_size_kb": 64, 00:25:29.043 "state": "online", 00:25:29.043 "raid_level": "concat", 00:25:29.043 "superblock": false, 00:25:29.043 "num_base_bdevs": 4, 00:25:29.043 "num_base_bdevs_discovered": 4, 00:25:29.043 "num_base_bdevs_operational": 4, 00:25:29.043 "base_bdevs_list": [ 00:25:29.043 { 00:25:29.043 "name": "NewBaseBdev", 00:25:29.043 "uuid": "248154c2-68d6-4215-b320-193f2a1be0ec", 00:25:29.043 "is_configured": true, 00:25:29.043 "data_offset": 0, 00:25:29.043 "data_size": 65536 00:25:29.043 }, 00:25:29.043 { 00:25:29.043 "name": "BaseBdev2", 00:25:29.043 "uuid": "de55dca9-7deb-46ce-8937-c0c6b215d80d", 00:25:29.043 "is_configured": true, 00:25:29.043 "data_offset": 0, 00:25:29.043 "data_size": 65536 00:25:29.043 }, 00:25:29.043 { 00:25:29.043 "name": "BaseBdev3", 00:25:29.043 "uuid": "11a55391-a187-4087-a898-7f3d2c312ea4", 00:25:29.043 "is_configured": true, 00:25:29.043 "data_offset": 0, 00:25:29.043 "data_size": 65536 00:25:29.043 }, 00:25:29.043 { 00:25:29.043 "name": "BaseBdev4", 00:25:29.043 "uuid": "9d29f162-1a47-4af4-a65e-4d8854b6498e", 00:25:29.043 "is_configured": true, 00:25:29.043 "data_offset": 0, 00:25:29.043 "data_size": 65536 00:25:29.043 } 00:25:29.043 ] 00:25:29.043 }' 00:25:29.043 00:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:29.043 00:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:29.610 00:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:25:29.610 00:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:29.610 00:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:29.610 00:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:29.610 00:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:29.610 00:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:29.610 00:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:29.610 00:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:29.869 [2024-07-25 00:51:52.289843] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:29.869 00:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:29.869 
"name": "Existed_Raid", 00:25:29.869 "aliases": [ 00:25:29.869 "7da37bbd-c49b-44fb-89dd-592b3b78f18f" 00:25:29.869 ], 00:25:29.869 "product_name": "Raid Volume", 00:25:29.869 "block_size": 512, 00:25:29.869 "num_blocks": 262144, 00:25:29.869 "uuid": "7da37bbd-c49b-44fb-89dd-592b3b78f18f", 00:25:29.869 "assigned_rate_limits": { 00:25:29.869 "rw_ios_per_sec": 0, 00:25:29.869 "rw_mbytes_per_sec": 0, 00:25:29.869 "r_mbytes_per_sec": 0, 00:25:29.869 "w_mbytes_per_sec": 0 00:25:29.869 }, 00:25:29.869 "claimed": false, 00:25:29.869 "zoned": false, 00:25:29.869 "supported_io_types": { 00:25:29.869 "read": true, 00:25:29.869 "write": true, 00:25:29.869 "unmap": true, 00:25:29.869 "flush": true, 00:25:29.869 "reset": true, 00:25:29.869 "nvme_admin": false, 00:25:29.869 "nvme_io": false, 00:25:29.869 "nvme_io_md": false, 00:25:29.869 "write_zeroes": true, 00:25:29.869 "zcopy": false, 00:25:29.869 "get_zone_info": false, 00:25:29.869 "zone_management": false, 00:25:29.869 "zone_append": false, 00:25:29.869 "compare": false, 00:25:29.869 "compare_and_write": false, 00:25:29.869 "abort": false, 00:25:29.869 "seek_hole": false, 00:25:29.869 "seek_data": false, 00:25:29.869 "copy": false, 00:25:29.869 "nvme_iov_md": false 00:25:29.869 }, 00:25:29.869 "memory_domains": [ 00:25:29.869 { 00:25:29.869 "dma_device_id": "system", 00:25:29.869 "dma_device_type": 1 00:25:29.869 }, 00:25:29.869 { 00:25:29.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:29.869 "dma_device_type": 2 00:25:29.869 }, 00:25:29.869 { 00:25:29.869 "dma_device_id": "system", 00:25:29.869 "dma_device_type": 1 00:25:29.869 }, 00:25:29.869 { 00:25:29.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:29.869 "dma_device_type": 2 00:25:29.869 }, 00:25:29.869 { 00:25:29.869 "dma_device_id": "system", 00:25:29.869 "dma_device_type": 1 00:25:29.869 }, 00:25:29.869 { 00:25:29.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:29.869 "dma_device_type": 2 00:25:29.869 }, 00:25:29.869 { 00:25:29.869 "dma_device_id": "system", 00:25:29.869 "dma_device_type": 1 00:25:29.869 }, 00:25:29.869 { 00:25:29.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:29.869 "dma_device_type": 2 00:25:29.869 } 00:25:29.869 ], 00:25:29.869 "driver_specific": { 00:25:29.869 "raid": { 00:25:29.869 "uuid": "7da37bbd-c49b-44fb-89dd-592b3b78f18f", 00:25:29.869 "strip_size_kb": 64, 00:25:29.869 "state": "online", 00:25:29.869 "raid_level": "concat", 00:25:29.869 "superblock": false, 00:25:29.869 "num_base_bdevs": 4, 00:25:29.869 "num_base_bdevs_discovered": 4, 00:25:29.869 "num_base_bdevs_operational": 4, 00:25:29.869 "base_bdevs_list": [ 00:25:29.869 { 00:25:29.869 "name": "NewBaseBdev", 00:25:29.869 "uuid": "248154c2-68d6-4215-b320-193f2a1be0ec", 00:25:29.869 "is_configured": true, 00:25:29.869 "data_offset": 0, 00:25:29.869 "data_size": 65536 00:25:29.869 }, 00:25:29.869 { 00:25:29.869 "name": "BaseBdev2", 00:25:29.869 "uuid": "de55dca9-7deb-46ce-8937-c0c6b215d80d", 00:25:29.869 "is_configured": true, 00:25:29.869 "data_offset": 0, 00:25:29.869 "data_size": 65536 00:25:29.869 }, 00:25:29.869 { 00:25:29.869 "name": "BaseBdev3", 00:25:29.869 "uuid": "11a55391-a187-4087-a898-7f3d2c312ea4", 00:25:29.869 "is_configured": true, 00:25:29.869 "data_offset": 0, 00:25:29.869 "data_size": 65536 00:25:29.869 }, 00:25:29.869 { 00:25:29.869 "name": "BaseBdev4", 00:25:29.869 "uuid": "9d29f162-1a47-4af4-a65e-4d8854b6498e", 00:25:29.869 "is_configured": true, 00:25:29.869 "data_offset": 0, 00:25:29.869 "data_size": 65536 00:25:29.869 } 00:25:29.869 ] 00:25:29.869 } 00:25:29.869 } 
00:25:29.869 }' 00:25:29.869 00:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:29.869 00:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:25:29.869 BaseBdev2 00:25:29.869 BaseBdev3 00:25:29.869 BaseBdev4' 00:25:29.869 00:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:29.869 00:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:25:29.869 00:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:30.129 00:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:30.129 "name": "NewBaseBdev", 00:25:30.129 "aliases": [ 00:25:30.129 "248154c2-68d6-4215-b320-193f2a1be0ec" 00:25:30.129 ], 00:25:30.129 "product_name": "Malloc disk", 00:25:30.129 "block_size": 512, 00:25:30.129 "num_blocks": 65536, 00:25:30.129 "uuid": "248154c2-68d6-4215-b320-193f2a1be0ec", 00:25:30.129 "assigned_rate_limits": { 00:25:30.129 "rw_ios_per_sec": 0, 00:25:30.129 "rw_mbytes_per_sec": 0, 00:25:30.129 "r_mbytes_per_sec": 0, 00:25:30.129 "w_mbytes_per_sec": 0 00:25:30.129 }, 00:25:30.129 "claimed": true, 00:25:30.129 "claim_type": "exclusive_write", 00:25:30.129 "zoned": false, 00:25:30.129 "supported_io_types": { 00:25:30.129 "read": true, 00:25:30.129 "write": true, 00:25:30.129 "unmap": true, 00:25:30.129 "flush": true, 00:25:30.129 "reset": true, 00:25:30.129 "nvme_admin": false, 00:25:30.129 "nvme_io": false, 00:25:30.129 "nvme_io_md": false, 00:25:30.129 "write_zeroes": true, 00:25:30.129 "zcopy": true, 00:25:30.129 "get_zone_info": false, 00:25:30.129 "zone_management": false, 00:25:30.129 "zone_append": false, 00:25:30.129 "compare": false, 00:25:30.129 "compare_and_write": false, 00:25:30.129 "abort": true, 00:25:30.129 "seek_hole": false, 00:25:30.129 "seek_data": false, 00:25:30.129 "copy": true, 00:25:30.129 "nvme_iov_md": false 00:25:30.129 }, 00:25:30.129 "memory_domains": [ 00:25:30.129 { 00:25:30.129 "dma_device_id": "system", 00:25:30.129 "dma_device_type": 1 00:25:30.129 }, 00:25:30.129 { 00:25:30.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:30.129 "dma_device_type": 2 00:25:30.129 } 00:25:30.129 ], 00:25:30.129 "driver_specific": {} 00:25:30.129 }' 00:25:30.129 00:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:30.129 00:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:30.129 00:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:30.130 00:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:30.130 00:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:30.130 00:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:30.130 00:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:30.130 00:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:30.388 00:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:30.388 00:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:30.388 00:51:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:30.388 00:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:30.388 00:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:30.388 00:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:30.388 00:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:30.647 00:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:30.647 "name": "BaseBdev2", 00:25:30.647 "aliases": [ 00:25:30.647 "de55dca9-7deb-46ce-8937-c0c6b215d80d" 00:25:30.647 ], 00:25:30.647 "product_name": "Malloc disk", 00:25:30.647 "block_size": 512, 00:25:30.647 "num_blocks": 65536, 00:25:30.647 "uuid": "de55dca9-7deb-46ce-8937-c0c6b215d80d", 00:25:30.647 "assigned_rate_limits": { 00:25:30.647 "rw_ios_per_sec": 0, 00:25:30.647 "rw_mbytes_per_sec": 0, 00:25:30.647 "r_mbytes_per_sec": 0, 00:25:30.647 "w_mbytes_per_sec": 0 00:25:30.647 }, 00:25:30.647 "claimed": true, 00:25:30.647 "claim_type": "exclusive_write", 00:25:30.647 "zoned": false, 00:25:30.647 "supported_io_types": { 00:25:30.647 "read": true, 00:25:30.647 "write": true, 00:25:30.647 "unmap": true, 00:25:30.647 "flush": true, 00:25:30.647 "reset": true, 00:25:30.647 "nvme_admin": false, 00:25:30.647 "nvme_io": false, 00:25:30.647 "nvme_io_md": false, 00:25:30.647 "write_zeroes": true, 00:25:30.647 "zcopy": true, 00:25:30.647 "get_zone_info": false, 00:25:30.647 "zone_management": false, 00:25:30.647 "zone_append": false, 00:25:30.647 "compare": false, 00:25:30.647 "compare_and_write": false, 00:25:30.647 "abort": true, 00:25:30.647 "seek_hole": false, 00:25:30.647 "seek_data": false, 00:25:30.647 "copy": true, 00:25:30.647 "nvme_iov_md": false 00:25:30.647 }, 00:25:30.647 "memory_domains": [ 00:25:30.647 { 00:25:30.647 "dma_device_id": "system", 00:25:30.647 "dma_device_type": 1 00:25:30.647 }, 00:25:30.647 { 00:25:30.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:30.647 "dma_device_type": 2 00:25:30.647 } 00:25:30.647 ], 00:25:30.647 "driver_specific": {} 00:25:30.647 }' 00:25:30.647 00:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:30.647 00:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:30.647 00:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:30.647 00:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:30.647 00:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:30.926 00:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:30.926 00:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:30.926 00:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:30.926 00:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:30.926 00:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:30.926 00:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:30.926 00:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:30.926 
00:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:30.926 00:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:30.926 00:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:31.187 00:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:31.187 "name": "BaseBdev3", 00:25:31.187 "aliases": [ 00:25:31.187 "11a55391-a187-4087-a898-7f3d2c312ea4" 00:25:31.187 ], 00:25:31.187 "product_name": "Malloc disk", 00:25:31.187 "block_size": 512, 00:25:31.187 "num_blocks": 65536, 00:25:31.187 "uuid": "11a55391-a187-4087-a898-7f3d2c312ea4", 00:25:31.187 "assigned_rate_limits": { 00:25:31.187 "rw_ios_per_sec": 0, 00:25:31.187 "rw_mbytes_per_sec": 0, 00:25:31.187 "r_mbytes_per_sec": 0, 00:25:31.187 "w_mbytes_per_sec": 0 00:25:31.187 }, 00:25:31.187 "claimed": true, 00:25:31.187 "claim_type": "exclusive_write", 00:25:31.187 "zoned": false, 00:25:31.187 "supported_io_types": { 00:25:31.187 "read": true, 00:25:31.187 "write": true, 00:25:31.187 "unmap": true, 00:25:31.187 "flush": true, 00:25:31.187 "reset": true, 00:25:31.187 "nvme_admin": false, 00:25:31.187 "nvme_io": false, 00:25:31.187 "nvme_io_md": false, 00:25:31.187 "write_zeroes": true, 00:25:31.187 "zcopy": true, 00:25:31.187 "get_zone_info": false, 00:25:31.187 "zone_management": false, 00:25:31.187 "zone_append": false, 00:25:31.187 "compare": false, 00:25:31.187 "compare_and_write": false, 00:25:31.187 "abort": true, 00:25:31.187 "seek_hole": false, 00:25:31.187 "seek_data": false, 00:25:31.187 "copy": true, 00:25:31.187 "nvme_iov_md": false 00:25:31.187 }, 00:25:31.187 "memory_domains": [ 00:25:31.187 { 00:25:31.187 "dma_device_id": "system", 00:25:31.187 "dma_device_type": 1 00:25:31.187 }, 00:25:31.187 { 00:25:31.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:31.187 "dma_device_type": 2 00:25:31.187 } 00:25:31.187 ], 00:25:31.187 "driver_specific": {} 00:25:31.187 }' 00:25:31.187 00:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:31.187 00:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:31.187 00:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:31.187 00:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:31.187 00:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:31.187 00:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:31.187 00:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:31.447 00:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:31.447 00:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:31.447 00:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:31.447 00:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:31.447 00:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:31.447 00:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:31.447 00:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:31.447 00:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:31.706 00:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:31.706 "name": "BaseBdev4", 00:25:31.706 "aliases": [ 00:25:31.706 "9d29f162-1a47-4af4-a65e-4d8854b6498e" 00:25:31.706 ], 00:25:31.706 "product_name": "Malloc disk", 00:25:31.706 "block_size": 512, 00:25:31.706 "num_blocks": 65536, 00:25:31.706 "uuid": "9d29f162-1a47-4af4-a65e-4d8854b6498e", 00:25:31.706 "assigned_rate_limits": { 00:25:31.706 "rw_ios_per_sec": 0, 00:25:31.706 "rw_mbytes_per_sec": 0, 00:25:31.706 "r_mbytes_per_sec": 0, 00:25:31.706 "w_mbytes_per_sec": 0 00:25:31.706 }, 00:25:31.706 "claimed": true, 00:25:31.706 "claim_type": "exclusive_write", 00:25:31.706 "zoned": false, 00:25:31.706 "supported_io_types": { 00:25:31.706 "read": true, 00:25:31.706 "write": true, 00:25:31.706 "unmap": true, 00:25:31.706 "flush": true, 00:25:31.706 "reset": true, 00:25:31.706 "nvme_admin": false, 00:25:31.706 "nvme_io": false, 00:25:31.706 "nvme_io_md": false, 00:25:31.706 "write_zeroes": true, 00:25:31.706 "zcopy": true, 00:25:31.706 "get_zone_info": false, 00:25:31.706 "zone_management": false, 00:25:31.706 "zone_append": false, 00:25:31.706 "compare": false, 00:25:31.706 "compare_and_write": false, 00:25:31.706 "abort": true, 00:25:31.706 "seek_hole": false, 00:25:31.706 "seek_data": false, 00:25:31.706 "copy": true, 00:25:31.706 "nvme_iov_md": false 00:25:31.706 }, 00:25:31.706 "memory_domains": [ 00:25:31.706 { 00:25:31.706 "dma_device_id": "system", 00:25:31.706 "dma_device_type": 1 00:25:31.706 }, 00:25:31.706 { 00:25:31.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:31.706 "dma_device_type": 2 00:25:31.706 } 00:25:31.706 ], 00:25:31.706 "driver_specific": {} 00:25:31.706 }' 00:25:31.706 00:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:31.706 00:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:31.965 00:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:31.965 00:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:31.965 00:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:31.965 00:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:31.965 00:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:31.965 00:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:31.965 00:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:31.965 00:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:31.965 00:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:32.223 00:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:32.223 00:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:32.481 [2024-07-25 00:51:54.876622] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:32.481 [2024-07-25 00:51:54.876650] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:25:32.481 [2024-07-25 00:51:54.876711] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:32.481 [2024-07-25 00:51:54.876773] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:32.481 [2024-07-25 00:51:54.876782] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name Existed_Raid, state offline 00:25:32.481 00:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 138057 00:25:32.481 00:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 138057 ']' 00:25:32.481 00:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 138057 00:25:32.481 00:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:25:32.481 00:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:32.481 00:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 138057 00:25:32.481 00:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:32.481 00:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:32.481 00:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 138057' 00:25:32.481 killing process with pid 138057 00:25:32.481 00:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 138057 00:25:32.481 [2024-07-25 00:51:54.918872] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:32.481 00:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 138057 00:25:32.739 [2024-07-25 00:51:55.326266] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:34.116 ************************************ 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:25:34.116 00:25:34.116 real 0m32.200s 00:25:34.116 user 0m57.934s 00:25:34.116 sys 0m4.820s 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:34.116 END TEST raid_state_function_test 00:25:34.116 ************************************ 00:25:34.116 00:51:56 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:25:34.116 00:51:56 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:25:34.116 00:51:56 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:34.116 00:51:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:34.116 ************************************ 00:25:34.116 START TEST raid_state_function_test_sb 00:25:34.116 ************************************ 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 true 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 
00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=139139 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 139139' 00:25:34.116 Process raid pid: 139139 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 139139 /var/tmp/spdk-raid.sock 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@829 -- # '[' -z 139139 ']' 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:34.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:34.116 00:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:34.117 00:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:34.376 [2024-07-25 00:51:56.809888] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:25:34.376 [2024-07-25 00:51:56.811023] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:34.376 [2024-07-25 00:51:56.997905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.635 [2024-07-25 00:51:57.258537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.894 [2024-07-25 00:51:57.454612] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:35.153 00:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:35.153 00:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:25:35.153 00:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:35.412 [2024-07-25 00:51:57.905571] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:35.412 [2024-07-25 00:51:57.905829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:35.412 [2024-07-25 00:51:57.905930] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:35.412 [2024-07-25 00:51:57.905984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:35.412 [2024-07-25 00:51:57.906156] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:35.412 [2024-07-25 00:51:57.906200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:35.412 [2024-07-25 00:51:57.906226] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:35.412 [2024-07-25 00:51:57.906284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:35.412 00:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:35.412 00:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:35.412 00:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:35.412 00:51:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:35.412 00:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:35.412 00:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:35.412 00:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:35.412 00:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:35.412 00:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:35.412 00:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:35.412 00:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.412 00:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:35.672 00:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:35.672 "name": "Existed_Raid", 00:25:35.672 "uuid": "f854e2f8-d9a1-412d-a76f-80f43ff431b2", 00:25:35.672 "strip_size_kb": 64, 00:25:35.672 "state": "configuring", 00:25:35.672 "raid_level": "concat", 00:25:35.672 "superblock": true, 00:25:35.672 "num_base_bdevs": 4, 00:25:35.672 "num_base_bdevs_discovered": 0, 00:25:35.672 "num_base_bdevs_operational": 4, 00:25:35.672 "base_bdevs_list": [ 00:25:35.672 { 00:25:35.672 "name": "BaseBdev1", 00:25:35.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.672 "is_configured": false, 00:25:35.672 "data_offset": 0, 00:25:35.672 "data_size": 0 00:25:35.672 }, 00:25:35.672 { 00:25:35.672 "name": "BaseBdev2", 00:25:35.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.672 "is_configured": false, 00:25:35.672 "data_offset": 0, 00:25:35.672 "data_size": 0 00:25:35.672 }, 00:25:35.672 { 00:25:35.672 "name": "BaseBdev3", 00:25:35.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.672 "is_configured": false, 00:25:35.672 "data_offset": 0, 00:25:35.672 "data_size": 0 00:25:35.672 }, 00:25:35.672 { 00:25:35.672 "name": "BaseBdev4", 00:25:35.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.672 "is_configured": false, 00:25:35.672 "data_offset": 0, 00:25:35.672 "data_size": 0 00:25:35.672 } 00:25:35.672 ] 00:25:35.672 }' 00:25:35.672 00:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:35.672 00:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:36.240 00:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:36.240 [2024-07-25 00:51:58.829636] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:36.240 [2024-07-25 00:51:58.829809] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:25:36.240 00:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:36.499 [2024-07-25 00:51:59.009816] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:36.499 
[2024-07-25 00:51:59.010060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:36.499 [2024-07-25 00:51:59.010189] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:36.499 [2024-07-25 00:51:59.010334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:36.499 [2024-07-25 00:51:59.010582] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:36.499 [2024-07-25 00:51:59.010681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:36.499 [2024-07-25 00:51:59.010909] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:36.499 [2024-07-25 00:51:59.011047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:36.499 00:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:36.757 [2024-07-25 00:51:59.227530] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:36.757 BaseBdev1 00:25:36.757 00:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:25:36.757 00:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:25:36.757 00:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:36.757 00:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:36.757 00:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:36.757 00:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:36.757 00:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:37.016 00:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:37.016 [ 00:25:37.016 { 00:25:37.016 "name": "BaseBdev1", 00:25:37.016 "aliases": [ 00:25:37.016 "e52e6932-7427-48ef-bb7f-ceeccb467054" 00:25:37.016 ], 00:25:37.016 "product_name": "Malloc disk", 00:25:37.016 "block_size": 512, 00:25:37.016 "num_blocks": 65536, 00:25:37.016 "uuid": "e52e6932-7427-48ef-bb7f-ceeccb467054", 00:25:37.016 "assigned_rate_limits": { 00:25:37.016 "rw_ios_per_sec": 0, 00:25:37.016 "rw_mbytes_per_sec": 0, 00:25:37.016 "r_mbytes_per_sec": 0, 00:25:37.016 "w_mbytes_per_sec": 0 00:25:37.016 }, 00:25:37.016 "claimed": true, 00:25:37.016 "claim_type": "exclusive_write", 00:25:37.016 "zoned": false, 00:25:37.016 "supported_io_types": { 00:25:37.016 "read": true, 00:25:37.016 "write": true, 00:25:37.016 "unmap": true, 00:25:37.016 "flush": true, 00:25:37.016 "reset": true, 00:25:37.016 "nvme_admin": false, 00:25:37.016 "nvme_io": false, 00:25:37.016 "nvme_io_md": false, 00:25:37.016 "write_zeroes": true, 00:25:37.016 "zcopy": true, 00:25:37.016 "get_zone_info": false, 00:25:37.016 "zone_management": false, 00:25:37.016 "zone_append": false, 00:25:37.016 "compare": false, 00:25:37.016 "compare_and_write": false, 00:25:37.016 "abort": true, 00:25:37.016 "seek_hole": false, 
00:25:37.016 "seek_data": false, 00:25:37.016 "copy": true, 00:25:37.016 "nvme_iov_md": false 00:25:37.016 }, 00:25:37.016 "memory_domains": [ 00:25:37.016 { 00:25:37.016 "dma_device_id": "system", 00:25:37.016 "dma_device_type": 1 00:25:37.016 }, 00:25:37.016 { 00:25:37.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:37.016 "dma_device_type": 2 00:25:37.016 } 00:25:37.016 ], 00:25:37.016 "driver_specific": {} 00:25:37.016 } 00:25:37.016 ] 00:25:37.016 00:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:37.016 00:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:37.016 00:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:37.016 00:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:37.016 00:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:37.016 00:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:37.016 00:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:37.016 00:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:37.016 00:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:37.016 00:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:37.016 00:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:37.016 00:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.016 00:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:37.275 00:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:37.275 "name": "Existed_Raid", 00:25:37.275 "uuid": "ab46f42b-a451-477c-9e91-7826db1ba8b9", 00:25:37.275 "strip_size_kb": 64, 00:25:37.275 "state": "configuring", 00:25:37.275 "raid_level": "concat", 00:25:37.275 "superblock": true, 00:25:37.275 "num_base_bdevs": 4, 00:25:37.275 "num_base_bdevs_discovered": 1, 00:25:37.275 "num_base_bdevs_operational": 4, 00:25:37.275 "base_bdevs_list": [ 00:25:37.275 { 00:25:37.275 "name": "BaseBdev1", 00:25:37.275 "uuid": "e52e6932-7427-48ef-bb7f-ceeccb467054", 00:25:37.275 "is_configured": true, 00:25:37.275 "data_offset": 2048, 00:25:37.275 "data_size": 63488 00:25:37.275 }, 00:25:37.275 { 00:25:37.275 "name": "BaseBdev2", 00:25:37.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.275 "is_configured": false, 00:25:37.275 "data_offset": 0, 00:25:37.275 "data_size": 0 00:25:37.275 }, 00:25:37.275 { 00:25:37.275 "name": "BaseBdev3", 00:25:37.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.275 "is_configured": false, 00:25:37.275 "data_offset": 0, 00:25:37.275 "data_size": 0 00:25:37.275 }, 00:25:37.275 { 00:25:37.275 "name": "BaseBdev4", 00:25:37.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.275 "is_configured": false, 00:25:37.275 "data_offset": 0, 00:25:37.275 "data_size": 0 00:25:37.275 } 00:25:37.275 ] 00:25:37.275 }' 00:25:37.275 00:51:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:37.275 00:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:37.843 00:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:37.843 [2024-07-25 00:52:00.415783] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:37.843 [2024-07-25 00:52:00.415999] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:25:37.843 00:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:38.102 [2024-07-25 00:52:00.575840] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:38.102 [2024-07-25 00:52:00.577911] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:38.102 [2024-07-25 00:52:00.578080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:38.102 [2024-07-25 00:52:00.578173] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:38.102 [2024-07-25 00:52:00.578240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:38.102 [2024-07-25 00:52:00.578397] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:38.102 [2024-07-25 00:52:00.578444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:38.102 00:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:25:38.102 00:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:38.102 00:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:38.102 00:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:38.102 00:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:38.102 00:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:38.102 00:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:38.102 00:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:38.102 00:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:38.102 00:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:38.102 00:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:38.102 00:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:38.102 00:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.102 00:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:25:38.361 00:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:38.361 "name": "Existed_Raid", 00:25:38.361 "uuid": "da15cb93-dfd3-4cfc-b97b-1870dbedb99e", 00:25:38.361 "strip_size_kb": 64, 00:25:38.361 "state": "configuring", 00:25:38.361 "raid_level": "concat", 00:25:38.361 "superblock": true, 00:25:38.361 "num_base_bdevs": 4, 00:25:38.361 "num_base_bdevs_discovered": 1, 00:25:38.361 "num_base_bdevs_operational": 4, 00:25:38.361 "base_bdevs_list": [ 00:25:38.361 { 00:25:38.361 "name": "BaseBdev1", 00:25:38.361 "uuid": "e52e6932-7427-48ef-bb7f-ceeccb467054", 00:25:38.361 "is_configured": true, 00:25:38.361 "data_offset": 2048, 00:25:38.361 "data_size": 63488 00:25:38.361 }, 00:25:38.361 { 00:25:38.361 "name": "BaseBdev2", 00:25:38.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.361 "is_configured": false, 00:25:38.361 "data_offset": 0, 00:25:38.361 "data_size": 0 00:25:38.361 }, 00:25:38.361 { 00:25:38.361 "name": "BaseBdev3", 00:25:38.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.361 "is_configured": false, 00:25:38.361 "data_offset": 0, 00:25:38.361 "data_size": 0 00:25:38.362 }, 00:25:38.362 { 00:25:38.362 "name": "BaseBdev4", 00:25:38.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.362 "is_configured": false, 00:25:38.362 "data_offset": 0, 00:25:38.362 "data_size": 0 00:25:38.362 } 00:25:38.362 ] 00:25:38.362 }' 00:25:38.362 00:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:38.362 00:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:38.929 00:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:38.929 [2024-07-25 00:52:01.490488] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:38.929 BaseBdev2 00:25:38.929 00:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:25:38.929 00:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:25:38.929 00:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:38.929 00:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:38.929 00:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:38.929 00:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:38.929 00:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:39.187 00:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:39.446 [ 00:25:39.446 { 00:25:39.446 "name": "BaseBdev2", 00:25:39.446 "aliases": [ 00:25:39.446 "76ebc72a-d442-4e43-a6d3-504ea9ca3cb8" 00:25:39.446 ], 00:25:39.446 "product_name": "Malloc disk", 00:25:39.446 "block_size": 512, 00:25:39.446 "num_blocks": 65536, 00:25:39.446 "uuid": "76ebc72a-d442-4e43-a6d3-504ea9ca3cb8", 00:25:39.446 "assigned_rate_limits": { 00:25:39.446 "rw_ios_per_sec": 0, 00:25:39.446 "rw_mbytes_per_sec": 0, 00:25:39.446 "r_mbytes_per_sec": 0, 
00:25:39.446 "w_mbytes_per_sec": 0 00:25:39.446 }, 00:25:39.446 "claimed": true, 00:25:39.446 "claim_type": "exclusive_write", 00:25:39.446 "zoned": false, 00:25:39.446 "supported_io_types": { 00:25:39.446 "read": true, 00:25:39.446 "write": true, 00:25:39.446 "unmap": true, 00:25:39.446 "flush": true, 00:25:39.446 "reset": true, 00:25:39.446 "nvme_admin": false, 00:25:39.446 "nvme_io": false, 00:25:39.446 "nvme_io_md": false, 00:25:39.446 "write_zeroes": true, 00:25:39.446 "zcopy": true, 00:25:39.446 "get_zone_info": false, 00:25:39.446 "zone_management": false, 00:25:39.446 "zone_append": false, 00:25:39.446 "compare": false, 00:25:39.446 "compare_and_write": false, 00:25:39.446 "abort": true, 00:25:39.446 "seek_hole": false, 00:25:39.446 "seek_data": false, 00:25:39.446 "copy": true, 00:25:39.446 "nvme_iov_md": false 00:25:39.446 }, 00:25:39.446 "memory_domains": [ 00:25:39.446 { 00:25:39.446 "dma_device_id": "system", 00:25:39.446 "dma_device_type": 1 00:25:39.446 }, 00:25:39.446 { 00:25:39.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:39.446 "dma_device_type": 2 00:25:39.446 } 00:25:39.446 ], 00:25:39.446 "driver_specific": {} 00:25:39.446 } 00:25:39.446 ] 00:25:39.446 00:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:39.446 00:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:39.446 00:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:39.446 00:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:39.446 00:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:39.446 00:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:39.446 00:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:39.446 00:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:39.446 00:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:39.446 00:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:39.446 00:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:39.446 00:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:39.446 00:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:39.446 00:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:39.446 00:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.705 00:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:39.706 "name": "Existed_Raid", 00:25:39.706 "uuid": "da15cb93-dfd3-4cfc-b97b-1870dbedb99e", 00:25:39.706 "strip_size_kb": 64, 00:25:39.706 "state": "configuring", 00:25:39.706 "raid_level": "concat", 00:25:39.706 "superblock": true, 00:25:39.706 "num_base_bdevs": 4, 00:25:39.706 "num_base_bdevs_discovered": 2, 00:25:39.706 "num_base_bdevs_operational": 4, 00:25:39.706 "base_bdevs_list": [ 00:25:39.706 { 00:25:39.706 
"name": "BaseBdev1", 00:25:39.706 "uuid": "e52e6932-7427-48ef-bb7f-ceeccb467054", 00:25:39.706 "is_configured": true, 00:25:39.706 "data_offset": 2048, 00:25:39.706 "data_size": 63488 00:25:39.706 }, 00:25:39.706 { 00:25:39.706 "name": "BaseBdev2", 00:25:39.706 "uuid": "76ebc72a-d442-4e43-a6d3-504ea9ca3cb8", 00:25:39.706 "is_configured": true, 00:25:39.706 "data_offset": 2048, 00:25:39.706 "data_size": 63488 00:25:39.706 }, 00:25:39.706 { 00:25:39.706 "name": "BaseBdev3", 00:25:39.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:39.706 "is_configured": false, 00:25:39.706 "data_offset": 0, 00:25:39.706 "data_size": 0 00:25:39.706 }, 00:25:39.706 { 00:25:39.706 "name": "BaseBdev4", 00:25:39.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:39.706 "is_configured": false, 00:25:39.706 "data_offset": 0, 00:25:39.706 "data_size": 0 00:25:39.706 } 00:25:39.706 ] 00:25:39.706 }' 00:25:39.706 00:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:39.706 00:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:40.272 00:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:40.272 [2024-07-25 00:52:02.921501] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:40.272 BaseBdev3 00:25:40.530 00:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:25:40.530 00:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:25:40.530 00:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:40.530 00:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:40.530 00:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:40.530 00:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:40.530 00:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:40.530 00:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:40.799 [ 00:25:40.799 { 00:25:40.799 "name": "BaseBdev3", 00:25:40.799 "aliases": [ 00:25:40.799 "44c4f37e-0eb7-4b03-8047-c0df4438e96b" 00:25:40.799 ], 00:25:40.799 "product_name": "Malloc disk", 00:25:40.799 "block_size": 512, 00:25:40.799 "num_blocks": 65536, 00:25:40.799 "uuid": "44c4f37e-0eb7-4b03-8047-c0df4438e96b", 00:25:40.799 "assigned_rate_limits": { 00:25:40.799 "rw_ios_per_sec": 0, 00:25:40.799 "rw_mbytes_per_sec": 0, 00:25:40.799 "r_mbytes_per_sec": 0, 00:25:40.799 "w_mbytes_per_sec": 0 00:25:40.799 }, 00:25:40.799 "claimed": true, 00:25:40.799 "claim_type": "exclusive_write", 00:25:40.799 "zoned": false, 00:25:40.799 "supported_io_types": { 00:25:40.799 "read": true, 00:25:40.799 "write": true, 00:25:40.799 "unmap": true, 00:25:40.799 "flush": true, 00:25:40.799 "reset": true, 00:25:40.799 "nvme_admin": false, 00:25:40.799 "nvme_io": false, 00:25:40.799 "nvme_io_md": false, 00:25:40.799 "write_zeroes": true, 00:25:40.799 "zcopy": true, 00:25:40.799 "get_zone_info": false, 
00:25:40.799 "zone_management": false, 00:25:40.799 "zone_append": false, 00:25:40.799 "compare": false, 00:25:40.799 "compare_and_write": false, 00:25:40.799 "abort": true, 00:25:40.799 "seek_hole": false, 00:25:40.799 "seek_data": false, 00:25:40.799 "copy": true, 00:25:40.799 "nvme_iov_md": false 00:25:40.799 }, 00:25:40.799 "memory_domains": [ 00:25:40.799 { 00:25:40.799 "dma_device_id": "system", 00:25:40.799 "dma_device_type": 1 00:25:40.799 }, 00:25:40.799 { 00:25:40.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:40.799 "dma_device_type": 2 00:25:40.799 } 00:25:40.799 ], 00:25:40.799 "driver_specific": {} 00:25:40.799 } 00:25:40.799 ] 00:25:40.799 00:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:40.799 00:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:40.799 00:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:40.799 00:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:40.799 00:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:40.799 00:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:40.799 00:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:40.799 00:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:40.799 00:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:40.799 00:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:40.799 00:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:40.799 00:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:40.799 00:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:40.799 00:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:40.799 00:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:40.799 00:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:40.799 "name": "Existed_Raid", 00:25:40.799 "uuid": "da15cb93-dfd3-4cfc-b97b-1870dbedb99e", 00:25:40.799 "strip_size_kb": 64, 00:25:40.799 "state": "configuring", 00:25:40.799 "raid_level": "concat", 00:25:40.799 "superblock": true, 00:25:40.799 "num_base_bdevs": 4, 00:25:40.799 "num_base_bdevs_discovered": 3, 00:25:40.799 "num_base_bdevs_operational": 4, 00:25:40.799 "base_bdevs_list": [ 00:25:40.799 { 00:25:40.799 "name": "BaseBdev1", 00:25:40.799 "uuid": "e52e6932-7427-48ef-bb7f-ceeccb467054", 00:25:40.799 "is_configured": true, 00:25:40.799 "data_offset": 2048, 00:25:40.799 "data_size": 63488 00:25:40.799 }, 00:25:40.799 { 00:25:40.799 "name": "BaseBdev2", 00:25:40.799 "uuid": "76ebc72a-d442-4e43-a6d3-504ea9ca3cb8", 00:25:40.799 "is_configured": true, 00:25:40.799 "data_offset": 2048, 00:25:40.799 "data_size": 63488 00:25:40.799 }, 00:25:40.799 { 00:25:40.799 "name": "BaseBdev3", 00:25:40.799 "uuid": 
"44c4f37e-0eb7-4b03-8047-c0df4438e96b", 00:25:40.799 "is_configured": true, 00:25:40.799 "data_offset": 2048, 00:25:40.799 "data_size": 63488 00:25:40.799 }, 00:25:40.799 { 00:25:40.799 "name": "BaseBdev4", 00:25:40.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.799 "is_configured": false, 00:25:40.799 "data_offset": 0, 00:25:40.799 "data_size": 0 00:25:40.799 } 00:25:40.799 ] 00:25:40.799 }' 00:25:40.799 00:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:41.057 00:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:41.624 00:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:41.624 [2024-07-25 00:52:04.260288] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:41.624 [2024-07-25 00:52:04.260741] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:25:41.624 [2024-07-25 00:52:04.260879] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:41.624 [2024-07-25 00:52:04.261067] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:25:41.624 BaseBdev4 00:25:41.624 [2024-07-25 00:52:04.261442] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:25:41.624 [2024-07-25 00:52:04.261458] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:25:41.624 [2024-07-25 00:52:04.261580] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:41.624 00:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:25:41.624 00:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:25:41.883 00:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:41.883 00:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:41.883 00:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:41.883 00:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:41.883 00:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:42.142 00:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:42.142 [ 00:25:42.142 { 00:25:42.142 "name": "BaseBdev4", 00:25:42.142 "aliases": [ 00:25:42.142 "935cbed7-1128-4019-bf81-4736f254e02a" 00:25:42.142 ], 00:25:42.142 "product_name": "Malloc disk", 00:25:42.142 "block_size": 512, 00:25:42.142 "num_blocks": 65536, 00:25:42.142 "uuid": "935cbed7-1128-4019-bf81-4736f254e02a", 00:25:42.142 "assigned_rate_limits": { 00:25:42.142 "rw_ios_per_sec": 0, 00:25:42.142 "rw_mbytes_per_sec": 0, 00:25:42.142 "r_mbytes_per_sec": 0, 00:25:42.142 "w_mbytes_per_sec": 0 00:25:42.142 }, 00:25:42.142 "claimed": true, 00:25:42.142 "claim_type": "exclusive_write", 00:25:42.142 "zoned": false, 00:25:42.142 "supported_io_types": { 00:25:42.142 "read": true, 00:25:42.142 "write": true, 
00:25:42.142 "unmap": true, 00:25:42.142 "flush": true, 00:25:42.142 "reset": true, 00:25:42.142 "nvme_admin": false, 00:25:42.142 "nvme_io": false, 00:25:42.142 "nvme_io_md": false, 00:25:42.142 "write_zeroes": true, 00:25:42.142 "zcopy": true, 00:25:42.142 "get_zone_info": false, 00:25:42.142 "zone_management": false, 00:25:42.142 "zone_append": false, 00:25:42.142 "compare": false, 00:25:42.142 "compare_and_write": false, 00:25:42.142 "abort": true, 00:25:42.142 "seek_hole": false, 00:25:42.142 "seek_data": false, 00:25:42.142 "copy": true, 00:25:42.142 "nvme_iov_md": false 00:25:42.142 }, 00:25:42.142 "memory_domains": [ 00:25:42.142 { 00:25:42.142 "dma_device_id": "system", 00:25:42.142 "dma_device_type": 1 00:25:42.142 }, 00:25:42.142 { 00:25:42.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:42.142 "dma_device_type": 2 00:25:42.142 } 00:25:42.142 ], 00:25:42.142 "driver_specific": {} 00:25:42.142 } 00:25:42.142 ] 00:25:42.142 00:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:42.142 00:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:42.142 00:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:42.142 00:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:25:42.142 00:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:42.142 00:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:42.143 00:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:42.143 00:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:42.143 00:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:42.143 00:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:42.143 00:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:42.143 00:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:42.143 00:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:42.143 00:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.143 00:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:42.402 00:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:42.402 "name": "Existed_Raid", 00:25:42.402 "uuid": "da15cb93-dfd3-4cfc-b97b-1870dbedb99e", 00:25:42.402 "strip_size_kb": 64, 00:25:42.402 "state": "online", 00:25:42.402 "raid_level": "concat", 00:25:42.402 "superblock": true, 00:25:42.402 "num_base_bdevs": 4, 00:25:42.402 "num_base_bdevs_discovered": 4, 00:25:42.402 "num_base_bdevs_operational": 4, 00:25:42.402 "base_bdevs_list": [ 00:25:42.402 { 00:25:42.402 "name": "BaseBdev1", 00:25:42.402 "uuid": "e52e6932-7427-48ef-bb7f-ceeccb467054", 00:25:42.402 "is_configured": true, 00:25:42.402 "data_offset": 2048, 00:25:42.402 "data_size": 63488 00:25:42.402 }, 00:25:42.402 { 00:25:42.402 "name": "BaseBdev2", 00:25:42.402 
"uuid": "76ebc72a-d442-4e43-a6d3-504ea9ca3cb8", 00:25:42.402 "is_configured": true, 00:25:42.402 "data_offset": 2048, 00:25:42.402 "data_size": 63488 00:25:42.402 }, 00:25:42.402 { 00:25:42.402 "name": "BaseBdev3", 00:25:42.402 "uuid": "44c4f37e-0eb7-4b03-8047-c0df4438e96b", 00:25:42.402 "is_configured": true, 00:25:42.402 "data_offset": 2048, 00:25:42.402 "data_size": 63488 00:25:42.402 }, 00:25:42.402 { 00:25:42.402 "name": "BaseBdev4", 00:25:42.402 "uuid": "935cbed7-1128-4019-bf81-4736f254e02a", 00:25:42.402 "is_configured": true, 00:25:42.402 "data_offset": 2048, 00:25:42.402 "data_size": 63488 00:25:42.402 } 00:25:42.402 ] 00:25:42.402 }' 00:25:42.402 00:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:42.402 00:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:42.970 00:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:25:42.970 00:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:42.970 00:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:42.970 00:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:42.970 00:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:42.970 00:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:25:42.970 00:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:42.970 00:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:43.229 [2024-07-25 00:52:05.692841] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:43.229 00:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:43.229 "name": "Existed_Raid", 00:25:43.229 "aliases": [ 00:25:43.229 "da15cb93-dfd3-4cfc-b97b-1870dbedb99e" 00:25:43.229 ], 00:25:43.229 "product_name": "Raid Volume", 00:25:43.229 "block_size": 512, 00:25:43.229 "num_blocks": 253952, 00:25:43.229 "uuid": "da15cb93-dfd3-4cfc-b97b-1870dbedb99e", 00:25:43.229 "assigned_rate_limits": { 00:25:43.229 "rw_ios_per_sec": 0, 00:25:43.229 "rw_mbytes_per_sec": 0, 00:25:43.229 "r_mbytes_per_sec": 0, 00:25:43.229 "w_mbytes_per_sec": 0 00:25:43.229 }, 00:25:43.229 "claimed": false, 00:25:43.229 "zoned": false, 00:25:43.229 "supported_io_types": { 00:25:43.229 "read": true, 00:25:43.229 "write": true, 00:25:43.229 "unmap": true, 00:25:43.229 "flush": true, 00:25:43.229 "reset": true, 00:25:43.230 "nvme_admin": false, 00:25:43.230 "nvme_io": false, 00:25:43.230 "nvme_io_md": false, 00:25:43.230 "write_zeroes": true, 00:25:43.230 "zcopy": false, 00:25:43.230 "get_zone_info": false, 00:25:43.230 "zone_management": false, 00:25:43.230 "zone_append": false, 00:25:43.230 "compare": false, 00:25:43.230 "compare_and_write": false, 00:25:43.230 "abort": false, 00:25:43.230 "seek_hole": false, 00:25:43.230 "seek_data": false, 00:25:43.230 "copy": false, 00:25:43.230 "nvme_iov_md": false 00:25:43.230 }, 00:25:43.230 "memory_domains": [ 00:25:43.230 { 00:25:43.230 "dma_device_id": "system", 00:25:43.230 "dma_device_type": 1 00:25:43.230 }, 00:25:43.230 { 00:25:43.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:43.230 
"dma_device_type": 2 00:25:43.230 }, 00:25:43.230 { 00:25:43.230 "dma_device_id": "system", 00:25:43.230 "dma_device_type": 1 00:25:43.230 }, 00:25:43.230 { 00:25:43.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:43.230 "dma_device_type": 2 00:25:43.230 }, 00:25:43.230 { 00:25:43.230 "dma_device_id": "system", 00:25:43.230 "dma_device_type": 1 00:25:43.230 }, 00:25:43.230 { 00:25:43.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:43.230 "dma_device_type": 2 00:25:43.230 }, 00:25:43.230 { 00:25:43.230 "dma_device_id": "system", 00:25:43.230 "dma_device_type": 1 00:25:43.230 }, 00:25:43.230 { 00:25:43.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:43.230 "dma_device_type": 2 00:25:43.230 } 00:25:43.230 ], 00:25:43.230 "driver_specific": { 00:25:43.230 "raid": { 00:25:43.230 "uuid": "da15cb93-dfd3-4cfc-b97b-1870dbedb99e", 00:25:43.230 "strip_size_kb": 64, 00:25:43.230 "state": "online", 00:25:43.230 "raid_level": "concat", 00:25:43.230 "superblock": true, 00:25:43.230 "num_base_bdevs": 4, 00:25:43.230 "num_base_bdevs_discovered": 4, 00:25:43.230 "num_base_bdevs_operational": 4, 00:25:43.230 "base_bdevs_list": [ 00:25:43.230 { 00:25:43.230 "name": "BaseBdev1", 00:25:43.230 "uuid": "e52e6932-7427-48ef-bb7f-ceeccb467054", 00:25:43.230 "is_configured": true, 00:25:43.230 "data_offset": 2048, 00:25:43.230 "data_size": 63488 00:25:43.230 }, 00:25:43.230 { 00:25:43.230 "name": "BaseBdev2", 00:25:43.230 "uuid": "76ebc72a-d442-4e43-a6d3-504ea9ca3cb8", 00:25:43.230 "is_configured": true, 00:25:43.230 "data_offset": 2048, 00:25:43.230 "data_size": 63488 00:25:43.230 }, 00:25:43.230 { 00:25:43.230 "name": "BaseBdev3", 00:25:43.230 "uuid": "44c4f37e-0eb7-4b03-8047-c0df4438e96b", 00:25:43.230 "is_configured": true, 00:25:43.230 "data_offset": 2048, 00:25:43.230 "data_size": 63488 00:25:43.230 }, 00:25:43.230 { 00:25:43.230 "name": "BaseBdev4", 00:25:43.230 "uuid": "935cbed7-1128-4019-bf81-4736f254e02a", 00:25:43.230 "is_configured": true, 00:25:43.230 "data_offset": 2048, 00:25:43.230 "data_size": 63488 00:25:43.230 } 00:25:43.230 ] 00:25:43.230 } 00:25:43.230 } 00:25:43.230 }' 00:25:43.230 00:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:43.230 00:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:25:43.230 BaseBdev2 00:25:43.230 BaseBdev3 00:25:43.230 BaseBdev4' 00:25:43.230 00:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:43.230 00:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:43.230 00:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:43.489 00:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:43.489 "name": "BaseBdev1", 00:25:43.489 "aliases": [ 00:25:43.489 "e52e6932-7427-48ef-bb7f-ceeccb467054" 00:25:43.489 ], 00:25:43.489 "product_name": "Malloc disk", 00:25:43.489 "block_size": 512, 00:25:43.489 "num_blocks": 65536, 00:25:43.489 "uuid": "e52e6932-7427-48ef-bb7f-ceeccb467054", 00:25:43.490 "assigned_rate_limits": { 00:25:43.490 "rw_ios_per_sec": 0, 00:25:43.490 "rw_mbytes_per_sec": 0, 00:25:43.490 "r_mbytes_per_sec": 0, 00:25:43.490 "w_mbytes_per_sec": 0 00:25:43.490 }, 00:25:43.490 "claimed": true, 00:25:43.490 "claim_type": 
"exclusive_write", 00:25:43.490 "zoned": false, 00:25:43.490 "supported_io_types": { 00:25:43.490 "read": true, 00:25:43.490 "write": true, 00:25:43.490 "unmap": true, 00:25:43.490 "flush": true, 00:25:43.490 "reset": true, 00:25:43.490 "nvme_admin": false, 00:25:43.490 "nvme_io": false, 00:25:43.490 "nvme_io_md": false, 00:25:43.490 "write_zeroes": true, 00:25:43.490 "zcopy": true, 00:25:43.490 "get_zone_info": false, 00:25:43.490 "zone_management": false, 00:25:43.490 "zone_append": false, 00:25:43.490 "compare": false, 00:25:43.490 "compare_and_write": false, 00:25:43.490 "abort": true, 00:25:43.490 "seek_hole": false, 00:25:43.490 "seek_data": false, 00:25:43.490 "copy": true, 00:25:43.490 "nvme_iov_md": false 00:25:43.490 }, 00:25:43.490 "memory_domains": [ 00:25:43.490 { 00:25:43.490 "dma_device_id": "system", 00:25:43.490 "dma_device_type": 1 00:25:43.490 }, 00:25:43.490 { 00:25:43.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:43.490 "dma_device_type": 2 00:25:43.490 } 00:25:43.490 ], 00:25:43.490 "driver_specific": {} 00:25:43.490 }' 00:25:43.490 00:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:43.490 00:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:43.490 00:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:43.490 00:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:43.490 00:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:43.749 00:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:43.749 00:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:43.749 00:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:43.749 00:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:43.749 00:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:43.749 00:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:43.749 00:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:43.749 00:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:43.749 00:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:43.749 00:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:44.008 00:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:44.008 "name": "BaseBdev2", 00:25:44.008 "aliases": [ 00:25:44.008 "76ebc72a-d442-4e43-a6d3-504ea9ca3cb8" 00:25:44.008 ], 00:25:44.008 "product_name": "Malloc disk", 00:25:44.008 "block_size": 512, 00:25:44.008 "num_blocks": 65536, 00:25:44.008 "uuid": "76ebc72a-d442-4e43-a6d3-504ea9ca3cb8", 00:25:44.008 "assigned_rate_limits": { 00:25:44.008 "rw_ios_per_sec": 0, 00:25:44.008 "rw_mbytes_per_sec": 0, 00:25:44.008 "r_mbytes_per_sec": 0, 00:25:44.008 "w_mbytes_per_sec": 0 00:25:44.008 }, 00:25:44.008 "claimed": true, 00:25:44.008 "claim_type": "exclusive_write", 00:25:44.008 "zoned": false, 00:25:44.008 "supported_io_types": { 00:25:44.008 "read": true, 00:25:44.008 "write": true, 00:25:44.008 
"unmap": true, 00:25:44.008 "flush": true, 00:25:44.008 "reset": true, 00:25:44.008 "nvme_admin": false, 00:25:44.008 "nvme_io": false, 00:25:44.008 "nvme_io_md": false, 00:25:44.008 "write_zeroes": true, 00:25:44.008 "zcopy": true, 00:25:44.008 "get_zone_info": false, 00:25:44.008 "zone_management": false, 00:25:44.008 "zone_append": false, 00:25:44.008 "compare": false, 00:25:44.008 "compare_and_write": false, 00:25:44.008 "abort": true, 00:25:44.008 "seek_hole": false, 00:25:44.008 "seek_data": false, 00:25:44.008 "copy": true, 00:25:44.008 "nvme_iov_md": false 00:25:44.008 }, 00:25:44.009 "memory_domains": [ 00:25:44.009 { 00:25:44.009 "dma_device_id": "system", 00:25:44.009 "dma_device_type": 1 00:25:44.009 }, 00:25:44.009 { 00:25:44.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:44.009 "dma_device_type": 2 00:25:44.009 } 00:25:44.009 ], 00:25:44.009 "driver_specific": {} 00:25:44.009 }' 00:25:44.009 00:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:44.009 00:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:44.009 00:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:44.009 00:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:44.009 00:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:44.267 00:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:44.267 00:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:44.267 00:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:44.267 00:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:44.267 00:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:44.267 00:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:44.267 00:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:44.267 00:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:44.267 00:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:44.267 00:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:44.526 00:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:44.526 "name": "BaseBdev3", 00:25:44.526 "aliases": [ 00:25:44.526 "44c4f37e-0eb7-4b03-8047-c0df4438e96b" 00:25:44.526 ], 00:25:44.526 "product_name": "Malloc disk", 00:25:44.526 "block_size": 512, 00:25:44.526 "num_blocks": 65536, 00:25:44.526 "uuid": "44c4f37e-0eb7-4b03-8047-c0df4438e96b", 00:25:44.526 "assigned_rate_limits": { 00:25:44.526 "rw_ios_per_sec": 0, 00:25:44.526 "rw_mbytes_per_sec": 0, 00:25:44.526 "r_mbytes_per_sec": 0, 00:25:44.527 "w_mbytes_per_sec": 0 00:25:44.527 }, 00:25:44.527 "claimed": true, 00:25:44.527 "claim_type": "exclusive_write", 00:25:44.527 "zoned": false, 00:25:44.527 "supported_io_types": { 00:25:44.527 "read": true, 00:25:44.527 "write": true, 00:25:44.527 "unmap": true, 00:25:44.527 "flush": true, 00:25:44.527 "reset": true, 00:25:44.527 "nvme_admin": false, 00:25:44.527 "nvme_io": false, 00:25:44.527 
"nvme_io_md": false, 00:25:44.527 "write_zeroes": true, 00:25:44.527 "zcopy": true, 00:25:44.527 "get_zone_info": false, 00:25:44.527 "zone_management": false, 00:25:44.527 "zone_append": false, 00:25:44.527 "compare": false, 00:25:44.527 "compare_and_write": false, 00:25:44.527 "abort": true, 00:25:44.527 "seek_hole": false, 00:25:44.527 "seek_data": false, 00:25:44.527 "copy": true, 00:25:44.527 "nvme_iov_md": false 00:25:44.527 }, 00:25:44.527 "memory_domains": [ 00:25:44.527 { 00:25:44.527 "dma_device_id": "system", 00:25:44.527 "dma_device_type": 1 00:25:44.527 }, 00:25:44.527 { 00:25:44.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:44.527 "dma_device_type": 2 00:25:44.527 } 00:25:44.527 ], 00:25:44.527 "driver_specific": {} 00:25:44.527 }' 00:25:44.527 00:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:44.786 00:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:44.786 00:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:44.786 00:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:44.786 00:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:44.786 00:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:44.786 00:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:44.786 00:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:45.044 00:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:45.044 00:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:45.044 00:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:45.044 00:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:45.044 00:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:45.044 00:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:45.044 00:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:45.327 00:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:45.327 "name": "BaseBdev4", 00:25:45.327 "aliases": [ 00:25:45.327 "935cbed7-1128-4019-bf81-4736f254e02a" 00:25:45.327 ], 00:25:45.327 "product_name": "Malloc disk", 00:25:45.327 "block_size": 512, 00:25:45.327 "num_blocks": 65536, 00:25:45.327 "uuid": "935cbed7-1128-4019-bf81-4736f254e02a", 00:25:45.327 "assigned_rate_limits": { 00:25:45.327 "rw_ios_per_sec": 0, 00:25:45.327 "rw_mbytes_per_sec": 0, 00:25:45.327 "r_mbytes_per_sec": 0, 00:25:45.327 "w_mbytes_per_sec": 0 00:25:45.327 }, 00:25:45.327 "claimed": true, 00:25:45.327 "claim_type": "exclusive_write", 00:25:45.327 "zoned": false, 00:25:45.327 "supported_io_types": { 00:25:45.327 "read": true, 00:25:45.327 "write": true, 00:25:45.327 "unmap": true, 00:25:45.327 "flush": true, 00:25:45.327 "reset": true, 00:25:45.327 "nvme_admin": false, 00:25:45.327 "nvme_io": false, 00:25:45.327 "nvme_io_md": false, 00:25:45.327 "write_zeroes": true, 00:25:45.327 "zcopy": true, 00:25:45.327 "get_zone_info": false, 00:25:45.327 "zone_management": 
false, 00:25:45.327 "zone_append": false, 00:25:45.327 "compare": false, 00:25:45.327 "compare_and_write": false, 00:25:45.327 "abort": true, 00:25:45.327 "seek_hole": false, 00:25:45.327 "seek_data": false, 00:25:45.327 "copy": true, 00:25:45.327 "nvme_iov_md": false 00:25:45.327 }, 00:25:45.327 "memory_domains": [ 00:25:45.327 { 00:25:45.327 "dma_device_id": "system", 00:25:45.327 "dma_device_type": 1 00:25:45.327 }, 00:25:45.327 { 00:25:45.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:45.327 "dma_device_type": 2 00:25:45.327 } 00:25:45.327 ], 00:25:45.327 "driver_specific": {} 00:25:45.327 }' 00:25:45.327 00:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:45.327 00:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:45.327 00:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:45.327 00:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:45.327 00:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:45.587 00:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:45.587 00:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:45.587 00:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:45.587 00:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:45.587 00:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:45.587 00:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:45.587 00:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:45.587 00:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:45.846 [2024-07-25 00:52:08.437110] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:45.846 [2024-07-25 00:52:08.437294] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:45.846 [2024-07-25 00:52:08.437522] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:46.105 00:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:25:46.105 00:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:25:46.105 00:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:46.105 00:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:25:46.105 00:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:25:46.105 00:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:25:46.105 00:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:46.105 00:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:25:46.105 00:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:46.105 00:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # 
local strip_size=64 00:25:46.105 00:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:46.105 00:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:46.105 00:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:46.105 00:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:46.105 00:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:46.105 00:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:46.105 00:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:46.105 00:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:46.105 "name": "Existed_Raid", 00:25:46.105 "uuid": "da15cb93-dfd3-4cfc-b97b-1870dbedb99e", 00:25:46.105 "strip_size_kb": 64, 00:25:46.105 "state": "offline", 00:25:46.105 "raid_level": "concat", 00:25:46.105 "superblock": true, 00:25:46.105 "num_base_bdevs": 4, 00:25:46.105 "num_base_bdevs_discovered": 3, 00:25:46.105 "num_base_bdevs_operational": 3, 00:25:46.105 "base_bdevs_list": [ 00:25:46.105 { 00:25:46.105 "name": null, 00:25:46.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:46.105 "is_configured": false, 00:25:46.105 "data_offset": 2048, 00:25:46.105 "data_size": 63488 00:25:46.105 }, 00:25:46.105 { 00:25:46.105 "name": "BaseBdev2", 00:25:46.105 "uuid": "76ebc72a-d442-4e43-a6d3-504ea9ca3cb8", 00:25:46.105 "is_configured": true, 00:25:46.105 "data_offset": 2048, 00:25:46.105 "data_size": 63488 00:25:46.105 }, 00:25:46.105 { 00:25:46.105 "name": "BaseBdev3", 00:25:46.105 "uuid": "44c4f37e-0eb7-4b03-8047-c0df4438e96b", 00:25:46.105 "is_configured": true, 00:25:46.105 "data_offset": 2048, 00:25:46.105 "data_size": 63488 00:25:46.105 }, 00:25:46.105 { 00:25:46.105 "name": "BaseBdev4", 00:25:46.105 "uuid": "935cbed7-1128-4019-bf81-4736f254e02a", 00:25:46.105 "is_configured": true, 00:25:46.105 "data_offset": 2048, 00:25:46.105 "data_size": 63488 00:25:46.105 } 00:25:46.105 ] 00:25:46.105 }' 00:25:46.105 00:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:46.105 00:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:47.041 00:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:25:47.041 00:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:47.041 00:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.041 00:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:47.041 00:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:47.041 00:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:47.041 00:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:47.300 [2024-07-25 00:52:09.769622] 
bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:47.300 00:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:47.300 00:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:47.301 00:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.301 00:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:47.560 00:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:47.560 00:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:47.560 00:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:47.817 [2024-07-25 00:52:10.383377] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:48.075 00:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:48.075 00:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:48.075 00:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.075 00:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:48.075 00:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:48.075 00:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:48.075 00:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:25:48.334 [2024-07-25 00:52:10.840139] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:48.334 [2024-07-25 00:52:10.840317] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:25:48.334 00:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:48.334 00:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:48.334 00:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.334 00:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:25:48.593 00:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:25:48.593 00:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:25:48.593 00:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:25:48.593 00:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:25:48.593 00:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:48.593 00:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 
512 -b BaseBdev2 00:25:48.852 BaseBdev2 00:25:48.853 00:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:25:48.853 00:52:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:25:48.853 00:52:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:48.853 00:52:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:48.853 00:52:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:48.853 00:52:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:48.853 00:52:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:49.112 00:52:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:49.378 [ 00:25:49.379 { 00:25:49.379 "name": "BaseBdev2", 00:25:49.379 "aliases": [ 00:25:49.379 "a98640d4-49ab-4df8-a1e0-d39585519422" 00:25:49.379 ], 00:25:49.379 "product_name": "Malloc disk", 00:25:49.379 "block_size": 512, 00:25:49.379 "num_blocks": 65536, 00:25:49.379 "uuid": "a98640d4-49ab-4df8-a1e0-d39585519422", 00:25:49.379 "assigned_rate_limits": { 00:25:49.379 "rw_ios_per_sec": 0, 00:25:49.379 "rw_mbytes_per_sec": 0, 00:25:49.379 "r_mbytes_per_sec": 0, 00:25:49.379 "w_mbytes_per_sec": 0 00:25:49.379 }, 00:25:49.379 "claimed": false, 00:25:49.379 "zoned": false, 00:25:49.379 "supported_io_types": { 00:25:49.379 "read": true, 00:25:49.379 "write": true, 00:25:49.379 "unmap": true, 00:25:49.379 "flush": true, 00:25:49.379 "reset": true, 00:25:49.379 "nvme_admin": false, 00:25:49.379 "nvme_io": false, 00:25:49.379 "nvme_io_md": false, 00:25:49.379 "write_zeroes": true, 00:25:49.379 "zcopy": true, 00:25:49.379 "get_zone_info": false, 00:25:49.379 "zone_management": false, 00:25:49.379 "zone_append": false, 00:25:49.379 "compare": false, 00:25:49.379 "compare_and_write": false, 00:25:49.379 "abort": true, 00:25:49.379 "seek_hole": false, 00:25:49.379 "seek_data": false, 00:25:49.379 "copy": true, 00:25:49.379 "nvme_iov_md": false 00:25:49.379 }, 00:25:49.379 "memory_domains": [ 00:25:49.379 { 00:25:49.379 "dma_device_id": "system", 00:25:49.379 "dma_device_type": 1 00:25:49.379 }, 00:25:49.379 { 00:25:49.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.379 "dma_device_type": 2 00:25:49.379 } 00:25:49.379 ], 00:25:49.379 "driver_specific": {} 00:25:49.379 } 00:25:49.379 ] 00:25:49.379 00:52:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:49.379 00:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:49.379 00:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:49.379 00:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:49.638 BaseBdev3 00:25:49.638 00:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:25:49.638 00:52:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:25:49.638 00:52:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:49.638 00:52:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:49.638 00:52:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:49.638 00:52:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:49.638 00:52:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:49.639 00:52:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:49.896 [ 00:25:49.896 { 00:25:49.896 "name": "BaseBdev3", 00:25:49.896 "aliases": [ 00:25:49.896 "e3fbc144-e804-4bf5-8aa9-0d2e08c1fa01" 00:25:49.896 ], 00:25:49.896 "product_name": "Malloc disk", 00:25:49.896 "block_size": 512, 00:25:49.896 "num_blocks": 65536, 00:25:49.896 "uuid": "e3fbc144-e804-4bf5-8aa9-0d2e08c1fa01", 00:25:49.896 "assigned_rate_limits": { 00:25:49.896 "rw_ios_per_sec": 0, 00:25:49.896 "rw_mbytes_per_sec": 0, 00:25:49.896 "r_mbytes_per_sec": 0, 00:25:49.896 "w_mbytes_per_sec": 0 00:25:49.896 }, 00:25:49.896 "claimed": false, 00:25:49.896 "zoned": false, 00:25:49.896 "supported_io_types": { 00:25:49.896 "read": true, 00:25:49.896 "write": true, 00:25:49.896 "unmap": true, 00:25:49.896 "flush": true, 00:25:49.896 "reset": true, 00:25:49.896 "nvme_admin": false, 00:25:49.896 "nvme_io": false, 00:25:49.896 "nvme_io_md": false, 00:25:49.896 "write_zeroes": true, 00:25:49.896 "zcopy": true, 00:25:49.896 "get_zone_info": false, 00:25:49.896 "zone_management": false, 00:25:49.896 "zone_append": false, 00:25:49.896 "compare": false, 00:25:49.896 "compare_and_write": false, 00:25:49.896 "abort": true, 00:25:49.896 "seek_hole": false, 00:25:49.896 "seek_data": false, 00:25:49.896 "copy": true, 00:25:49.896 "nvme_iov_md": false 00:25:49.896 }, 00:25:49.896 "memory_domains": [ 00:25:49.896 { 00:25:49.896 "dma_device_id": "system", 00:25:49.896 "dma_device_type": 1 00:25:49.896 }, 00:25:49.896 { 00:25:49.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.896 "dma_device_type": 2 00:25:49.896 } 00:25:49.896 ], 00:25:49.896 "driver_specific": {} 00:25:49.896 } 00:25:49.896 ] 00:25:49.896 00:52:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:49.896 00:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:49.896 00:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:49.896 00:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:50.155 BaseBdev4 00:25:50.155 00:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:25:50.155 00:52:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:25:50.155 00:52:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:50.155 00:52:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:50.155 00:52:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 
00:25:50.155 00:52:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:50.155 00:52:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:50.414 00:52:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:50.673 [ 00:25:50.673 { 00:25:50.673 "name": "BaseBdev4", 00:25:50.673 "aliases": [ 00:25:50.673 "7bdbc9c0-130a-437d-bc43-d16930c87908" 00:25:50.673 ], 00:25:50.673 "product_name": "Malloc disk", 00:25:50.673 "block_size": 512, 00:25:50.673 "num_blocks": 65536, 00:25:50.673 "uuid": "7bdbc9c0-130a-437d-bc43-d16930c87908", 00:25:50.673 "assigned_rate_limits": { 00:25:50.673 "rw_ios_per_sec": 0, 00:25:50.673 "rw_mbytes_per_sec": 0, 00:25:50.673 "r_mbytes_per_sec": 0, 00:25:50.673 "w_mbytes_per_sec": 0 00:25:50.673 }, 00:25:50.673 "claimed": false, 00:25:50.673 "zoned": false, 00:25:50.673 "supported_io_types": { 00:25:50.673 "read": true, 00:25:50.673 "write": true, 00:25:50.673 "unmap": true, 00:25:50.673 "flush": true, 00:25:50.673 "reset": true, 00:25:50.673 "nvme_admin": false, 00:25:50.673 "nvme_io": false, 00:25:50.673 "nvme_io_md": false, 00:25:50.673 "write_zeroes": true, 00:25:50.673 "zcopy": true, 00:25:50.673 "get_zone_info": false, 00:25:50.673 "zone_management": false, 00:25:50.673 "zone_append": false, 00:25:50.673 "compare": false, 00:25:50.673 "compare_and_write": false, 00:25:50.673 "abort": true, 00:25:50.673 "seek_hole": false, 00:25:50.673 "seek_data": false, 00:25:50.673 "copy": true, 00:25:50.673 "nvme_iov_md": false 00:25:50.673 }, 00:25:50.673 "memory_domains": [ 00:25:50.673 { 00:25:50.673 "dma_device_id": "system", 00:25:50.673 "dma_device_type": 1 00:25:50.673 }, 00:25:50.673 { 00:25:50.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:50.673 "dma_device_type": 2 00:25:50.673 } 00:25:50.673 ], 00:25:50.673 "driver_specific": {} 00:25:50.673 } 00:25:50.673 ] 00:25:50.673 00:52:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:50.673 00:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:50.673 00:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:50.673 00:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:50.673 [2024-07-25 00:52:13.253214] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:50.673 [2024-07-25 00:52:13.253459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:50.673 [2024-07-25 00:52:13.253575] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:50.673 [2024-07-25 00:52:13.255523] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:50.673 [2024-07-25 00:52:13.255696] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:50.673 00:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:50.673 00:52:13 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:50.673 00:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:50.673 00:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:50.673 00:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:50.673 00:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:50.673 00:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:50.673 00:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:50.673 00:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:50.673 00:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:50.673 00:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.673 00:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:50.932 00:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:50.932 "name": "Existed_Raid", 00:25:50.932 "uuid": "d2b8e0a4-9acf-47d3-84ae-333de237e3fc", 00:25:50.932 "strip_size_kb": 64, 00:25:50.932 "state": "configuring", 00:25:50.932 "raid_level": "concat", 00:25:50.932 "superblock": true, 00:25:50.932 "num_base_bdevs": 4, 00:25:50.932 "num_base_bdevs_discovered": 3, 00:25:50.932 "num_base_bdevs_operational": 4, 00:25:50.932 "base_bdevs_list": [ 00:25:50.932 { 00:25:50.932 "name": "BaseBdev1", 00:25:50.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.932 "is_configured": false, 00:25:50.932 "data_offset": 0, 00:25:50.932 "data_size": 0 00:25:50.932 }, 00:25:50.932 { 00:25:50.932 "name": "BaseBdev2", 00:25:50.932 "uuid": "a98640d4-49ab-4df8-a1e0-d39585519422", 00:25:50.932 "is_configured": true, 00:25:50.932 "data_offset": 2048, 00:25:50.932 "data_size": 63488 00:25:50.932 }, 00:25:50.932 { 00:25:50.932 "name": "BaseBdev3", 00:25:50.932 "uuid": "e3fbc144-e804-4bf5-8aa9-0d2e08c1fa01", 00:25:50.932 "is_configured": true, 00:25:50.932 "data_offset": 2048, 00:25:50.932 "data_size": 63488 00:25:50.932 }, 00:25:50.932 { 00:25:50.932 "name": "BaseBdev4", 00:25:50.932 "uuid": "7bdbc9c0-130a-437d-bc43-d16930c87908", 00:25:50.932 "is_configured": true, 00:25:50.932 "data_offset": 2048, 00:25:50.932 "data_size": 63488 00:25:50.932 } 00:25:50.932 ] 00:25:50.932 }' 00:25:50.932 00:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:50.932 00:52:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.500 00:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:51.759 [2024-07-25 00:52:14.181350] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:51.759 00:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:51.759 00:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:51.759 00:52:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:51.759 00:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:51.759 00:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:51.759 00:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:51.759 00:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:51.759 00:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:51.759 00:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:51.759 00:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:51.759 00:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:51.759 00:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:52.018 00:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:52.018 "name": "Existed_Raid", 00:25:52.018 "uuid": "d2b8e0a4-9acf-47d3-84ae-333de237e3fc", 00:25:52.018 "strip_size_kb": 64, 00:25:52.018 "state": "configuring", 00:25:52.018 "raid_level": "concat", 00:25:52.018 "superblock": true, 00:25:52.018 "num_base_bdevs": 4, 00:25:52.018 "num_base_bdevs_discovered": 2, 00:25:52.018 "num_base_bdevs_operational": 4, 00:25:52.018 "base_bdevs_list": [ 00:25:52.018 { 00:25:52.018 "name": "BaseBdev1", 00:25:52.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:52.018 "is_configured": false, 00:25:52.018 "data_offset": 0, 00:25:52.018 "data_size": 0 00:25:52.018 }, 00:25:52.018 { 00:25:52.018 "name": null, 00:25:52.018 "uuid": "a98640d4-49ab-4df8-a1e0-d39585519422", 00:25:52.018 "is_configured": false, 00:25:52.018 "data_offset": 2048, 00:25:52.018 "data_size": 63488 00:25:52.018 }, 00:25:52.018 { 00:25:52.018 "name": "BaseBdev3", 00:25:52.018 "uuid": "e3fbc144-e804-4bf5-8aa9-0d2e08c1fa01", 00:25:52.018 "is_configured": true, 00:25:52.018 "data_offset": 2048, 00:25:52.018 "data_size": 63488 00:25:52.018 }, 00:25:52.018 { 00:25:52.018 "name": "BaseBdev4", 00:25:52.018 "uuid": "7bdbc9c0-130a-437d-bc43-d16930c87908", 00:25:52.018 "is_configured": true, 00:25:52.018 "data_offset": 2048, 00:25:52.018 "data_size": 63488 00:25:52.018 } 00:25:52.018 ] 00:25:52.018 }' 00:25:52.018 00:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:52.018 00:52:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:52.585 00:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:52.585 00:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.585 00:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:25:52.585 00:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:52.843 [2024-07-25 00:52:15.460070] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:52.843 BaseBdev1 00:25:52.843 00:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:25:52.843 00:52:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:25:52.843 00:52:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:52.843 00:52:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:52.843 00:52:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:52.843 00:52:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:52.843 00:52:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:53.101 00:52:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:53.360 [ 00:25:53.360 { 00:25:53.360 "name": "BaseBdev1", 00:25:53.360 "aliases": [ 00:25:53.360 "9778e48d-1105-4480-9db9-4fe9db8d182a" 00:25:53.360 ], 00:25:53.360 "product_name": "Malloc disk", 00:25:53.360 "block_size": 512, 00:25:53.360 "num_blocks": 65536, 00:25:53.360 "uuid": "9778e48d-1105-4480-9db9-4fe9db8d182a", 00:25:53.360 "assigned_rate_limits": { 00:25:53.360 "rw_ios_per_sec": 0, 00:25:53.360 "rw_mbytes_per_sec": 0, 00:25:53.360 "r_mbytes_per_sec": 0, 00:25:53.360 "w_mbytes_per_sec": 0 00:25:53.360 }, 00:25:53.360 "claimed": true, 00:25:53.360 "claim_type": "exclusive_write", 00:25:53.360 "zoned": false, 00:25:53.360 "supported_io_types": { 00:25:53.360 "read": true, 00:25:53.360 "write": true, 00:25:53.360 "unmap": true, 00:25:53.360 "flush": true, 00:25:53.360 "reset": true, 00:25:53.360 "nvme_admin": false, 00:25:53.360 "nvme_io": false, 00:25:53.360 "nvme_io_md": false, 00:25:53.360 "write_zeroes": true, 00:25:53.360 "zcopy": true, 00:25:53.360 "get_zone_info": false, 00:25:53.360 "zone_management": false, 00:25:53.360 "zone_append": false, 00:25:53.360 "compare": false, 00:25:53.360 "compare_and_write": false, 00:25:53.360 "abort": true, 00:25:53.360 "seek_hole": false, 00:25:53.360 "seek_data": false, 00:25:53.360 "copy": true, 00:25:53.360 "nvme_iov_md": false 00:25:53.360 }, 00:25:53.360 "memory_domains": [ 00:25:53.360 { 00:25:53.360 "dma_device_id": "system", 00:25:53.360 "dma_device_type": 1 00:25:53.360 }, 00:25:53.360 { 00:25:53.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:53.360 "dma_device_type": 2 00:25:53.360 } 00:25:53.360 ], 00:25:53.360 "driver_specific": {} 00:25:53.360 } 00:25:53.360 ] 00:25:53.360 00:52:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:53.360 00:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:53.360 00:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:53.360 00:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:53.360 00:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:53.360 00:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- 
# local strip_size=64 00:25:53.360 00:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:53.360 00:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:53.360 00:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:53.360 00:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:53.360 00:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:53.360 00:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.360 00:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:53.620 00:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:53.620 "name": "Existed_Raid", 00:25:53.620 "uuid": "d2b8e0a4-9acf-47d3-84ae-333de237e3fc", 00:25:53.620 "strip_size_kb": 64, 00:25:53.620 "state": "configuring", 00:25:53.620 "raid_level": "concat", 00:25:53.620 "superblock": true, 00:25:53.620 "num_base_bdevs": 4, 00:25:53.620 "num_base_bdevs_discovered": 3, 00:25:53.620 "num_base_bdevs_operational": 4, 00:25:53.620 "base_bdevs_list": [ 00:25:53.620 { 00:25:53.620 "name": "BaseBdev1", 00:25:53.620 "uuid": "9778e48d-1105-4480-9db9-4fe9db8d182a", 00:25:53.620 "is_configured": true, 00:25:53.620 "data_offset": 2048, 00:25:53.620 "data_size": 63488 00:25:53.620 }, 00:25:53.620 { 00:25:53.620 "name": null, 00:25:53.620 "uuid": "a98640d4-49ab-4df8-a1e0-d39585519422", 00:25:53.620 "is_configured": false, 00:25:53.620 "data_offset": 2048, 00:25:53.620 "data_size": 63488 00:25:53.620 }, 00:25:53.620 { 00:25:53.620 "name": "BaseBdev3", 00:25:53.620 "uuid": "e3fbc144-e804-4bf5-8aa9-0d2e08c1fa01", 00:25:53.620 "is_configured": true, 00:25:53.620 "data_offset": 2048, 00:25:53.620 "data_size": 63488 00:25:53.620 }, 00:25:53.620 { 00:25:53.620 "name": "BaseBdev4", 00:25:53.620 "uuid": "7bdbc9c0-130a-437d-bc43-d16930c87908", 00:25:53.620 "is_configured": true, 00:25:53.620 "data_offset": 2048, 00:25:53.620 "data_size": 63488 00:25:53.620 } 00:25:53.620 ] 00:25:53.620 }' 00:25:53.620 00:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:53.620 00:52:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.187 00:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.187 00:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:54.445 00:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:25:54.445 00:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:25:54.704 [2024-07-25 00:52:17.188409] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:54.704 00:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:54.704 00:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:25:54.704 00:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:54.704 00:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:54.704 00:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:54.704 00:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:54.704 00:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:54.704 00:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:54.704 00:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:54.704 00:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:54.704 00:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.704 00:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:54.962 00:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:54.962 "name": "Existed_Raid", 00:25:54.962 "uuid": "d2b8e0a4-9acf-47d3-84ae-333de237e3fc", 00:25:54.962 "strip_size_kb": 64, 00:25:54.962 "state": "configuring", 00:25:54.962 "raid_level": "concat", 00:25:54.962 "superblock": true, 00:25:54.962 "num_base_bdevs": 4, 00:25:54.962 "num_base_bdevs_discovered": 2, 00:25:54.962 "num_base_bdevs_operational": 4, 00:25:54.962 "base_bdevs_list": [ 00:25:54.962 { 00:25:54.962 "name": "BaseBdev1", 00:25:54.962 "uuid": "9778e48d-1105-4480-9db9-4fe9db8d182a", 00:25:54.962 "is_configured": true, 00:25:54.962 "data_offset": 2048, 00:25:54.962 "data_size": 63488 00:25:54.962 }, 00:25:54.962 { 00:25:54.962 "name": null, 00:25:54.962 "uuid": "a98640d4-49ab-4df8-a1e0-d39585519422", 00:25:54.962 "is_configured": false, 00:25:54.962 "data_offset": 2048, 00:25:54.962 "data_size": 63488 00:25:54.962 }, 00:25:54.962 { 00:25:54.962 "name": null, 00:25:54.962 "uuid": "e3fbc144-e804-4bf5-8aa9-0d2e08c1fa01", 00:25:54.962 "is_configured": false, 00:25:54.962 "data_offset": 2048, 00:25:54.962 "data_size": 63488 00:25:54.962 }, 00:25:54.962 { 00:25:54.962 "name": "BaseBdev4", 00:25:54.962 "uuid": "7bdbc9c0-130a-437d-bc43-d16930c87908", 00:25:54.962 "is_configured": true, 00:25:54.962 "data_offset": 2048, 00:25:54.962 "data_size": 63488 00:25:54.962 } 00:25:54.962 ] 00:25:54.962 }' 00:25:54.962 00:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:54.962 00:52:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:55.529 00:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:55.529 00:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.787 00:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:25:55.787 00:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 
00:25:56.045 [2024-07-25 00:52:18.484692] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:56.045 00:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:56.045 00:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:56.045 00:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:56.045 00:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:56.045 00:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:56.045 00:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:56.045 00:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:56.045 00:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:56.045 00:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:56.045 00:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:56.045 00:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.045 00:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:56.304 00:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:56.304 "name": "Existed_Raid", 00:25:56.304 "uuid": "d2b8e0a4-9acf-47d3-84ae-333de237e3fc", 00:25:56.304 "strip_size_kb": 64, 00:25:56.304 "state": "configuring", 00:25:56.304 "raid_level": "concat", 00:25:56.304 "superblock": true, 00:25:56.304 "num_base_bdevs": 4, 00:25:56.304 "num_base_bdevs_discovered": 3, 00:25:56.304 "num_base_bdevs_operational": 4, 00:25:56.304 "base_bdevs_list": [ 00:25:56.304 { 00:25:56.304 "name": "BaseBdev1", 00:25:56.304 "uuid": "9778e48d-1105-4480-9db9-4fe9db8d182a", 00:25:56.304 "is_configured": true, 00:25:56.304 "data_offset": 2048, 00:25:56.304 "data_size": 63488 00:25:56.304 }, 00:25:56.304 { 00:25:56.304 "name": null, 00:25:56.304 "uuid": "a98640d4-49ab-4df8-a1e0-d39585519422", 00:25:56.304 "is_configured": false, 00:25:56.304 "data_offset": 2048, 00:25:56.304 "data_size": 63488 00:25:56.304 }, 00:25:56.304 { 00:25:56.304 "name": "BaseBdev3", 00:25:56.304 "uuid": "e3fbc144-e804-4bf5-8aa9-0d2e08c1fa01", 00:25:56.304 "is_configured": true, 00:25:56.304 "data_offset": 2048, 00:25:56.304 "data_size": 63488 00:25:56.304 }, 00:25:56.304 { 00:25:56.304 "name": "BaseBdev4", 00:25:56.304 "uuid": "7bdbc9c0-130a-437d-bc43-d16930c87908", 00:25:56.304 "is_configured": true, 00:25:56.304 "data_offset": 2048, 00:25:56.304 "data_size": 63488 00:25:56.304 } 00:25:56.304 ] 00:25:56.304 }' 00:25:56.304 00:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:56.304 00:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:56.564 00:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.564 00:52:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:56.822 00:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:25:56.822 00:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:57.082 [2024-07-25 00:52:19.672913] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:57.341 00:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:57.341 00:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:57.341 00:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:57.341 00:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:57.341 00:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:57.341 00:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:57.341 00:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:57.341 00:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:57.341 00:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:57.341 00:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:57.341 00:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:57.341 00:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.341 00:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:57.341 "name": "Existed_Raid", 00:25:57.341 "uuid": "d2b8e0a4-9acf-47d3-84ae-333de237e3fc", 00:25:57.341 "strip_size_kb": 64, 00:25:57.341 "state": "configuring", 00:25:57.341 "raid_level": "concat", 00:25:57.341 "superblock": true, 00:25:57.341 "num_base_bdevs": 4, 00:25:57.341 "num_base_bdevs_discovered": 2, 00:25:57.341 "num_base_bdevs_operational": 4, 00:25:57.341 "base_bdevs_list": [ 00:25:57.341 { 00:25:57.341 "name": null, 00:25:57.341 "uuid": "9778e48d-1105-4480-9db9-4fe9db8d182a", 00:25:57.341 "is_configured": false, 00:25:57.341 "data_offset": 2048, 00:25:57.341 "data_size": 63488 00:25:57.341 }, 00:25:57.341 { 00:25:57.341 "name": null, 00:25:57.341 "uuid": "a98640d4-49ab-4df8-a1e0-d39585519422", 00:25:57.341 "is_configured": false, 00:25:57.341 "data_offset": 2048, 00:25:57.341 "data_size": 63488 00:25:57.341 }, 00:25:57.341 { 00:25:57.341 "name": "BaseBdev3", 00:25:57.341 "uuid": "e3fbc144-e804-4bf5-8aa9-0d2e08c1fa01", 00:25:57.341 "is_configured": true, 00:25:57.341 "data_offset": 2048, 00:25:57.341 "data_size": 63488 00:25:57.341 }, 00:25:57.341 { 00:25:57.341 "name": "BaseBdev4", 00:25:57.341 "uuid": "7bdbc9c0-130a-437d-bc43-d16930c87908", 00:25:57.341 "is_configured": true, 00:25:57.341 "data_offset": 2048, 00:25:57.341 "data_size": 63488 00:25:57.341 } 00:25:57.341 ] 00:25:57.341 }' 00:25:57.341 00:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:57.341 00:52:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:57.910 00:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.910 00:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:58.169 00:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:25:58.169 00:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:58.169 [2024-07-25 00:52:20.752339] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:58.169 00:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:58.169 00:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:58.169 00:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:58.169 00:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:58.169 00:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:58.169 00:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:58.169 00:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:58.169 00:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:58.169 00:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:58.169 00:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:58.170 00:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:58.170 00:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.430 00:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:58.430 "name": "Existed_Raid", 00:25:58.431 "uuid": "d2b8e0a4-9acf-47d3-84ae-333de237e3fc", 00:25:58.431 "strip_size_kb": 64, 00:25:58.431 "state": "configuring", 00:25:58.431 "raid_level": "concat", 00:25:58.431 "superblock": true, 00:25:58.431 "num_base_bdevs": 4, 00:25:58.431 "num_base_bdevs_discovered": 3, 00:25:58.431 "num_base_bdevs_operational": 4, 00:25:58.431 "base_bdevs_list": [ 00:25:58.431 { 00:25:58.431 "name": null, 00:25:58.431 "uuid": "9778e48d-1105-4480-9db9-4fe9db8d182a", 00:25:58.431 "is_configured": false, 00:25:58.431 "data_offset": 2048, 00:25:58.431 "data_size": 63488 00:25:58.431 }, 00:25:58.431 { 00:25:58.431 "name": "BaseBdev2", 00:25:58.431 "uuid": "a98640d4-49ab-4df8-a1e0-d39585519422", 00:25:58.431 "is_configured": true, 00:25:58.431 "data_offset": 2048, 00:25:58.431 "data_size": 63488 00:25:58.431 }, 00:25:58.431 { 00:25:58.431 "name": "BaseBdev3", 00:25:58.431 "uuid": "e3fbc144-e804-4bf5-8aa9-0d2e08c1fa01", 00:25:58.431 "is_configured": true, 00:25:58.431 "data_offset": 2048, 00:25:58.431 "data_size": 63488 00:25:58.431 }, 
00:25:58.431 { 00:25:58.431 "name": "BaseBdev4", 00:25:58.431 "uuid": "7bdbc9c0-130a-437d-bc43-d16930c87908", 00:25:58.431 "is_configured": true, 00:25:58.431 "data_offset": 2048, 00:25:58.431 "data_size": 63488 00:25:58.431 } 00:25:58.431 ] 00:25:58.431 }' 00:25:58.431 00:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:58.431 00:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:59.002 00:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:59.002 00:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.002 00:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:25:59.002 00:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.002 00:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:59.261 00:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 9778e48d-1105-4480-9db9-4fe9db8d182a 00:25:59.520 [2024-07-25 00:52:22.083234] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:59.520 [2024-07-25 00:52:22.083705] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:25:59.520 [2024-07-25 00:52:22.083819] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:59.520 [2024-07-25 00:52:22.083964] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:59.520 [2024-07-25 00:52:22.084335] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:25:59.520 [2024-07-25 00:52:22.084377] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009380 00:25:59.520 NewBaseBdev 00:25:59.520 [2024-07-25 00:52:22.084602] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:59.520 00:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:25:59.520 00:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:25:59.520 00:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:59.520 00:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:59.520 00:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:59.520 00:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:59.520 00:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:59.780 00:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:00.039 [ 00:26:00.039 { 00:26:00.039 "name": "NewBaseBdev", 00:26:00.039 
"aliases": [ 00:26:00.039 "9778e48d-1105-4480-9db9-4fe9db8d182a" 00:26:00.039 ], 00:26:00.039 "product_name": "Malloc disk", 00:26:00.039 "block_size": 512, 00:26:00.039 "num_blocks": 65536, 00:26:00.039 "uuid": "9778e48d-1105-4480-9db9-4fe9db8d182a", 00:26:00.039 "assigned_rate_limits": { 00:26:00.039 "rw_ios_per_sec": 0, 00:26:00.039 "rw_mbytes_per_sec": 0, 00:26:00.039 "r_mbytes_per_sec": 0, 00:26:00.039 "w_mbytes_per_sec": 0 00:26:00.039 }, 00:26:00.039 "claimed": true, 00:26:00.039 "claim_type": "exclusive_write", 00:26:00.039 "zoned": false, 00:26:00.039 "supported_io_types": { 00:26:00.039 "read": true, 00:26:00.039 "write": true, 00:26:00.039 "unmap": true, 00:26:00.039 "flush": true, 00:26:00.039 "reset": true, 00:26:00.039 "nvme_admin": false, 00:26:00.039 "nvme_io": false, 00:26:00.039 "nvme_io_md": false, 00:26:00.039 "write_zeroes": true, 00:26:00.039 "zcopy": true, 00:26:00.039 "get_zone_info": false, 00:26:00.039 "zone_management": false, 00:26:00.039 "zone_append": false, 00:26:00.039 "compare": false, 00:26:00.039 "compare_and_write": false, 00:26:00.039 "abort": true, 00:26:00.039 "seek_hole": false, 00:26:00.039 "seek_data": false, 00:26:00.039 "copy": true, 00:26:00.039 "nvme_iov_md": false 00:26:00.039 }, 00:26:00.039 "memory_domains": [ 00:26:00.039 { 00:26:00.039 "dma_device_id": "system", 00:26:00.039 "dma_device_type": 1 00:26:00.039 }, 00:26:00.039 { 00:26:00.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:00.039 "dma_device_type": 2 00:26:00.039 } 00:26:00.039 ], 00:26:00.039 "driver_specific": {} 00:26:00.039 } 00:26:00.039 ] 00:26:00.039 00:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:00.039 00:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:26:00.039 00:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:00.039 00:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:00.039 00:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:00.039 00:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:00.039 00:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:00.039 00:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:00.039 00:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:00.039 00:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:00.039 00:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:00.039 00:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.039 00:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:00.299 00:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:00.299 "name": "Existed_Raid", 00:26:00.299 "uuid": "d2b8e0a4-9acf-47d3-84ae-333de237e3fc", 00:26:00.299 "strip_size_kb": 64, 00:26:00.299 "state": "online", 00:26:00.299 "raid_level": "concat", 00:26:00.299 "superblock": true, 00:26:00.299 
"num_base_bdevs": 4, 00:26:00.299 "num_base_bdevs_discovered": 4, 00:26:00.299 "num_base_bdevs_operational": 4, 00:26:00.299 "base_bdevs_list": [ 00:26:00.299 { 00:26:00.299 "name": "NewBaseBdev", 00:26:00.299 "uuid": "9778e48d-1105-4480-9db9-4fe9db8d182a", 00:26:00.299 "is_configured": true, 00:26:00.299 "data_offset": 2048, 00:26:00.299 "data_size": 63488 00:26:00.299 }, 00:26:00.299 { 00:26:00.299 "name": "BaseBdev2", 00:26:00.299 "uuid": "a98640d4-49ab-4df8-a1e0-d39585519422", 00:26:00.299 "is_configured": true, 00:26:00.299 "data_offset": 2048, 00:26:00.299 "data_size": 63488 00:26:00.299 }, 00:26:00.299 { 00:26:00.299 "name": "BaseBdev3", 00:26:00.299 "uuid": "e3fbc144-e804-4bf5-8aa9-0d2e08c1fa01", 00:26:00.299 "is_configured": true, 00:26:00.299 "data_offset": 2048, 00:26:00.299 "data_size": 63488 00:26:00.299 }, 00:26:00.299 { 00:26:00.299 "name": "BaseBdev4", 00:26:00.299 "uuid": "7bdbc9c0-130a-437d-bc43-d16930c87908", 00:26:00.299 "is_configured": true, 00:26:00.299 "data_offset": 2048, 00:26:00.299 "data_size": 63488 00:26:00.299 } 00:26:00.299 ] 00:26:00.299 }' 00:26:00.299 00:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:00.299 00:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:00.867 00:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:26:00.867 00:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:00.867 00:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:00.867 00:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:00.867 00:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:00.867 00:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:26:00.867 00:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:00.867 00:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:00.867 [2024-07-25 00:52:23.443816] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:00.867 00:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:00.867 "name": "Existed_Raid", 00:26:00.867 "aliases": [ 00:26:00.867 "d2b8e0a4-9acf-47d3-84ae-333de237e3fc" 00:26:00.867 ], 00:26:00.867 "product_name": "Raid Volume", 00:26:00.867 "block_size": 512, 00:26:00.867 "num_blocks": 253952, 00:26:00.867 "uuid": "d2b8e0a4-9acf-47d3-84ae-333de237e3fc", 00:26:00.867 "assigned_rate_limits": { 00:26:00.867 "rw_ios_per_sec": 0, 00:26:00.867 "rw_mbytes_per_sec": 0, 00:26:00.867 "r_mbytes_per_sec": 0, 00:26:00.867 "w_mbytes_per_sec": 0 00:26:00.867 }, 00:26:00.867 "claimed": false, 00:26:00.867 "zoned": false, 00:26:00.867 "supported_io_types": { 00:26:00.867 "read": true, 00:26:00.867 "write": true, 00:26:00.867 "unmap": true, 00:26:00.867 "flush": true, 00:26:00.867 "reset": true, 00:26:00.867 "nvme_admin": false, 00:26:00.867 "nvme_io": false, 00:26:00.867 "nvme_io_md": false, 00:26:00.867 "write_zeroes": true, 00:26:00.867 "zcopy": false, 00:26:00.867 "get_zone_info": false, 00:26:00.867 "zone_management": false, 00:26:00.867 "zone_append": false, 00:26:00.868 "compare": false, 
00:26:00.868 "compare_and_write": false, 00:26:00.868 "abort": false, 00:26:00.868 "seek_hole": false, 00:26:00.868 "seek_data": false, 00:26:00.868 "copy": false, 00:26:00.868 "nvme_iov_md": false 00:26:00.868 }, 00:26:00.868 "memory_domains": [ 00:26:00.868 { 00:26:00.868 "dma_device_id": "system", 00:26:00.868 "dma_device_type": 1 00:26:00.868 }, 00:26:00.868 { 00:26:00.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:00.868 "dma_device_type": 2 00:26:00.868 }, 00:26:00.868 { 00:26:00.868 "dma_device_id": "system", 00:26:00.868 "dma_device_type": 1 00:26:00.868 }, 00:26:00.868 { 00:26:00.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:00.868 "dma_device_type": 2 00:26:00.868 }, 00:26:00.868 { 00:26:00.868 "dma_device_id": "system", 00:26:00.868 "dma_device_type": 1 00:26:00.868 }, 00:26:00.868 { 00:26:00.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:00.868 "dma_device_type": 2 00:26:00.868 }, 00:26:00.868 { 00:26:00.868 "dma_device_id": "system", 00:26:00.868 "dma_device_type": 1 00:26:00.868 }, 00:26:00.868 { 00:26:00.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:00.868 "dma_device_type": 2 00:26:00.868 } 00:26:00.868 ], 00:26:00.868 "driver_specific": { 00:26:00.868 "raid": { 00:26:00.868 "uuid": "d2b8e0a4-9acf-47d3-84ae-333de237e3fc", 00:26:00.868 "strip_size_kb": 64, 00:26:00.868 "state": "online", 00:26:00.868 "raid_level": "concat", 00:26:00.868 "superblock": true, 00:26:00.868 "num_base_bdevs": 4, 00:26:00.868 "num_base_bdevs_discovered": 4, 00:26:00.868 "num_base_bdevs_operational": 4, 00:26:00.868 "base_bdevs_list": [ 00:26:00.868 { 00:26:00.868 "name": "NewBaseBdev", 00:26:00.868 "uuid": "9778e48d-1105-4480-9db9-4fe9db8d182a", 00:26:00.868 "is_configured": true, 00:26:00.868 "data_offset": 2048, 00:26:00.868 "data_size": 63488 00:26:00.868 }, 00:26:00.868 { 00:26:00.868 "name": "BaseBdev2", 00:26:00.868 "uuid": "a98640d4-49ab-4df8-a1e0-d39585519422", 00:26:00.868 "is_configured": true, 00:26:00.868 "data_offset": 2048, 00:26:00.868 "data_size": 63488 00:26:00.868 }, 00:26:00.868 { 00:26:00.868 "name": "BaseBdev3", 00:26:00.868 "uuid": "e3fbc144-e804-4bf5-8aa9-0d2e08c1fa01", 00:26:00.868 "is_configured": true, 00:26:00.868 "data_offset": 2048, 00:26:00.868 "data_size": 63488 00:26:00.868 }, 00:26:00.868 { 00:26:00.868 "name": "BaseBdev4", 00:26:00.868 "uuid": "7bdbc9c0-130a-437d-bc43-d16930c87908", 00:26:00.868 "is_configured": true, 00:26:00.868 "data_offset": 2048, 00:26:00.868 "data_size": 63488 00:26:00.868 } 00:26:00.868 ] 00:26:00.868 } 00:26:00.868 } 00:26:00.868 }' 00:26:00.868 00:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:00.868 00:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:26:00.868 BaseBdev2 00:26:00.868 BaseBdev3 00:26:00.868 BaseBdev4' 00:26:00.868 00:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:00.868 00:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:00.868 00:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:26:01.127 00:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:01.127 "name": "NewBaseBdev", 00:26:01.127 "aliases": [ 00:26:01.127 "9778e48d-1105-4480-9db9-4fe9db8d182a" 
00:26:01.127 ], 00:26:01.127 "product_name": "Malloc disk", 00:26:01.127 "block_size": 512, 00:26:01.127 "num_blocks": 65536, 00:26:01.127 "uuid": "9778e48d-1105-4480-9db9-4fe9db8d182a", 00:26:01.127 "assigned_rate_limits": { 00:26:01.127 "rw_ios_per_sec": 0, 00:26:01.127 "rw_mbytes_per_sec": 0, 00:26:01.127 "r_mbytes_per_sec": 0, 00:26:01.127 "w_mbytes_per_sec": 0 00:26:01.127 }, 00:26:01.127 "claimed": true, 00:26:01.127 "claim_type": "exclusive_write", 00:26:01.127 "zoned": false, 00:26:01.127 "supported_io_types": { 00:26:01.127 "read": true, 00:26:01.127 "write": true, 00:26:01.127 "unmap": true, 00:26:01.127 "flush": true, 00:26:01.127 "reset": true, 00:26:01.127 "nvme_admin": false, 00:26:01.127 "nvme_io": false, 00:26:01.127 "nvme_io_md": false, 00:26:01.127 "write_zeroes": true, 00:26:01.127 "zcopy": true, 00:26:01.127 "get_zone_info": false, 00:26:01.127 "zone_management": false, 00:26:01.127 "zone_append": false, 00:26:01.127 "compare": false, 00:26:01.127 "compare_and_write": false, 00:26:01.127 "abort": true, 00:26:01.127 "seek_hole": false, 00:26:01.127 "seek_data": false, 00:26:01.127 "copy": true, 00:26:01.127 "nvme_iov_md": false 00:26:01.127 }, 00:26:01.127 "memory_domains": [ 00:26:01.127 { 00:26:01.127 "dma_device_id": "system", 00:26:01.127 "dma_device_type": 1 00:26:01.127 }, 00:26:01.127 { 00:26:01.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:01.127 "dma_device_type": 2 00:26:01.127 } 00:26:01.127 ], 00:26:01.127 "driver_specific": {} 00:26:01.127 }' 00:26:01.127 00:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:01.127 00:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:01.128 00:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:01.128 00:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:01.386 00:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:01.386 00:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:01.386 00:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:01.386 00:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:01.386 00:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:01.386 00:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:01.386 00:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:01.386 00:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:01.386 00:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:01.386 00:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:01.386 00:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:01.675 00:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:01.675 "name": "BaseBdev2", 00:26:01.675 "aliases": [ 00:26:01.675 "a98640d4-49ab-4df8-a1e0-d39585519422" 00:26:01.675 ], 00:26:01.675 "product_name": "Malloc disk", 00:26:01.675 "block_size": 512, 00:26:01.675 "num_blocks": 65536, 00:26:01.675 "uuid": 
"a98640d4-49ab-4df8-a1e0-d39585519422", 00:26:01.675 "assigned_rate_limits": { 00:26:01.675 "rw_ios_per_sec": 0, 00:26:01.675 "rw_mbytes_per_sec": 0, 00:26:01.675 "r_mbytes_per_sec": 0, 00:26:01.675 "w_mbytes_per_sec": 0 00:26:01.675 }, 00:26:01.675 "claimed": true, 00:26:01.675 "claim_type": "exclusive_write", 00:26:01.675 "zoned": false, 00:26:01.675 "supported_io_types": { 00:26:01.675 "read": true, 00:26:01.675 "write": true, 00:26:01.675 "unmap": true, 00:26:01.675 "flush": true, 00:26:01.675 "reset": true, 00:26:01.675 "nvme_admin": false, 00:26:01.675 "nvme_io": false, 00:26:01.675 "nvme_io_md": false, 00:26:01.675 "write_zeroes": true, 00:26:01.675 "zcopy": true, 00:26:01.675 "get_zone_info": false, 00:26:01.675 "zone_management": false, 00:26:01.675 "zone_append": false, 00:26:01.675 "compare": false, 00:26:01.675 "compare_and_write": false, 00:26:01.675 "abort": true, 00:26:01.675 "seek_hole": false, 00:26:01.675 "seek_data": false, 00:26:01.675 "copy": true, 00:26:01.675 "nvme_iov_md": false 00:26:01.675 }, 00:26:01.675 "memory_domains": [ 00:26:01.675 { 00:26:01.675 "dma_device_id": "system", 00:26:01.675 "dma_device_type": 1 00:26:01.675 }, 00:26:01.675 { 00:26:01.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:01.675 "dma_device_type": 2 00:26:01.675 } 00:26:01.675 ], 00:26:01.675 "driver_specific": {} 00:26:01.675 }' 00:26:01.675 00:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:01.675 00:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:01.675 00:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:01.675 00:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:01.934 00:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:01.934 00:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:01.934 00:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:01.934 00:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:01.934 00:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:01.934 00:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:01.934 00:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:01.934 00:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:01.934 00:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:01.934 00:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:01.934 00:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:02.194 00:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:02.194 "name": "BaseBdev3", 00:26:02.194 "aliases": [ 00:26:02.194 "e3fbc144-e804-4bf5-8aa9-0d2e08c1fa01" 00:26:02.194 ], 00:26:02.194 "product_name": "Malloc disk", 00:26:02.194 "block_size": 512, 00:26:02.194 "num_blocks": 65536, 00:26:02.194 "uuid": "e3fbc144-e804-4bf5-8aa9-0d2e08c1fa01", 00:26:02.194 "assigned_rate_limits": { 00:26:02.194 "rw_ios_per_sec": 0, 00:26:02.194 "rw_mbytes_per_sec": 0, 
00:26:02.194 "r_mbytes_per_sec": 0, 00:26:02.194 "w_mbytes_per_sec": 0 00:26:02.194 }, 00:26:02.194 "claimed": true, 00:26:02.194 "claim_type": "exclusive_write", 00:26:02.194 "zoned": false, 00:26:02.194 "supported_io_types": { 00:26:02.194 "read": true, 00:26:02.194 "write": true, 00:26:02.194 "unmap": true, 00:26:02.194 "flush": true, 00:26:02.194 "reset": true, 00:26:02.194 "nvme_admin": false, 00:26:02.194 "nvme_io": false, 00:26:02.194 "nvme_io_md": false, 00:26:02.194 "write_zeroes": true, 00:26:02.194 "zcopy": true, 00:26:02.194 "get_zone_info": false, 00:26:02.194 "zone_management": false, 00:26:02.194 "zone_append": false, 00:26:02.194 "compare": false, 00:26:02.194 "compare_and_write": false, 00:26:02.194 "abort": true, 00:26:02.194 "seek_hole": false, 00:26:02.194 "seek_data": false, 00:26:02.194 "copy": true, 00:26:02.194 "nvme_iov_md": false 00:26:02.194 }, 00:26:02.194 "memory_domains": [ 00:26:02.194 { 00:26:02.194 "dma_device_id": "system", 00:26:02.194 "dma_device_type": 1 00:26:02.194 }, 00:26:02.194 { 00:26:02.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:02.194 "dma_device_type": 2 00:26:02.194 } 00:26:02.194 ], 00:26:02.194 "driver_specific": {} 00:26:02.194 }' 00:26:02.194 00:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:02.194 00:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:02.194 00:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:02.453 00:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:02.453 00:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:02.453 00:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:02.453 00:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:02.453 00:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:02.453 00:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:02.453 00:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:02.453 00:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:02.712 00:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:02.712 00:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:02.712 00:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:02.712 00:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:02.971 00:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:02.971 "name": "BaseBdev4", 00:26:02.971 "aliases": [ 00:26:02.971 "7bdbc9c0-130a-437d-bc43-d16930c87908" 00:26:02.971 ], 00:26:02.971 "product_name": "Malloc disk", 00:26:02.971 "block_size": 512, 00:26:02.971 "num_blocks": 65536, 00:26:02.971 "uuid": "7bdbc9c0-130a-437d-bc43-d16930c87908", 00:26:02.971 "assigned_rate_limits": { 00:26:02.971 "rw_ios_per_sec": 0, 00:26:02.971 "rw_mbytes_per_sec": 0, 00:26:02.971 "r_mbytes_per_sec": 0, 00:26:02.971 "w_mbytes_per_sec": 0 00:26:02.971 }, 00:26:02.971 "claimed": true, 00:26:02.971 "claim_type": 
"exclusive_write", 00:26:02.971 "zoned": false, 00:26:02.971 "supported_io_types": { 00:26:02.971 "read": true, 00:26:02.971 "write": true, 00:26:02.971 "unmap": true, 00:26:02.971 "flush": true, 00:26:02.971 "reset": true, 00:26:02.971 "nvme_admin": false, 00:26:02.971 "nvme_io": false, 00:26:02.971 "nvme_io_md": false, 00:26:02.971 "write_zeroes": true, 00:26:02.971 "zcopy": true, 00:26:02.971 "get_zone_info": false, 00:26:02.971 "zone_management": false, 00:26:02.971 "zone_append": false, 00:26:02.971 "compare": false, 00:26:02.971 "compare_and_write": false, 00:26:02.971 "abort": true, 00:26:02.971 "seek_hole": false, 00:26:02.971 "seek_data": false, 00:26:02.971 "copy": true, 00:26:02.971 "nvme_iov_md": false 00:26:02.971 }, 00:26:02.971 "memory_domains": [ 00:26:02.971 { 00:26:02.971 "dma_device_id": "system", 00:26:02.971 "dma_device_type": 1 00:26:02.971 }, 00:26:02.971 { 00:26:02.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:02.971 "dma_device_type": 2 00:26:02.971 } 00:26:02.971 ], 00:26:02.971 "driver_specific": {} 00:26:02.971 }' 00:26:02.971 00:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:02.971 00:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:02.971 00:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:02.971 00:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:02.971 00:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:02.971 00:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:02.971 00:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:02.971 00:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:03.231 00:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:03.231 00:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:03.231 00:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:03.231 00:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:03.231 00:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:03.490 [2024-07-25 00:52:25.936005] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:03.490 [2024-07-25 00:52:25.936204] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:03.490 [2024-07-25 00:52:25.936398] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:03.490 [2024-07-25 00:52:25.936492] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:03.490 [2024-07-25 00:52:25.936650] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name Existed_Raid, state offline 00:26:03.490 00:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 139139 00:26:03.490 00:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 139139 ']' 00:26:03.490 00:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 139139 00:26:03.490 
00:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:26:03.490 00:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:03.490 00:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 139139 00:26:03.490 00:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:03.490 00:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:03.490 00:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 139139' 00:26:03.490 killing process with pid 139139 00:26:03.490 00:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 139139 00:26:03.490 00:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 139139 00:26:03.490 [2024-07-25 00:52:25.982427] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:03.749 [2024-07-25 00:52:26.377842] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:05.126 ************************************ 00:26:05.127 END TEST raid_state_function_test_sb 00:26:05.127 ************************************ 00:26:05.127 00:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:26:05.127 00:26:05.127 real 0m30.969s 00:26:05.127 user 0m55.483s 00:26:05.127 sys 0m4.847s 00:26:05.127 00:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:05.127 00:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.127 00:52:27 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:26:05.127 00:52:27 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:26:05.127 00:52:27 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:05.127 00:52:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:05.127 ************************************ 00:26:05.127 START TEST raid_superblock_test 00:26:05.127 ************************************ 00:26:05.127 00:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 4 00:26:05.127 00:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:26:05.127 00:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:26:05.127 00:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:26:05.127 00:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:26:05.127 00:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:26:05.127 00:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:26:05.127 00:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:26:05.127 00:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:26:05.127 00:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:26:05.127 00:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:26:05.127 00:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local 
strip_size_create_arg 00:26:05.127 00:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:26:05.127 00:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:26:05.127 00:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:26:05.127 00:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:26:05.127 00:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:26:05.127 00:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=140214 00:26:05.127 00:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 140214 /var/tmp/spdk-raid.sock 00:26:05.127 00:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 140214 ']' 00:26:05.127 00:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:05.127 00:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:05.127 00:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:05.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:05.127 00:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:05.127 00:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:05.127 00:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:26:05.386 [2024-07-25 00:52:27.834997] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:26:05.386 [2024-07-25 00:52:27.835217] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140214 ] 00:26:05.386 [2024-07-25 00:52:28.021711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.646 [2024-07-25 00:52:28.294330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.904 [2024-07-25 00:52:28.496420] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:06.163 00:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:06.163 00:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:26:06.163 00:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:26:06.163 00:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:06.163 00:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:26:06.163 00:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:26:06.163 00:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:06.163 00:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:06.163 00:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:26:06.163 00:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:06.163 00:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:26:06.422 malloc1 00:26:06.422 00:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:06.682 [2024-07-25 00:52:29.246288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:06.682 [2024-07-25 00:52:29.246382] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:06.682 [2024-07-25 00:52:29.246432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:26:06.682 [2024-07-25 00:52:29.246451] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:06.682 [2024-07-25 00:52:29.248746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:06.682 [2024-07-25 00:52:29.248793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:06.682 pt1 00:26:06.682 00:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:26:06.682 00:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:06.682 00:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:26:06.682 00:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:26:06.682 00:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:06.682 00:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:26:06.682 00:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:26:06.682 00:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:06.682 00:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:26:06.940 malloc2 00:26:06.940 00:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:07.197 [2024-07-25 00:52:29.645373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:07.198 [2024-07-25 00:52:29.645470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:07.198 [2024-07-25 00:52:29.645519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:26:07.198 [2024-07-25 00:52:29.645538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:07.198 [2024-07-25 00:52:29.647737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:07.198 [2024-07-25 00:52:29.647781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:07.198 pt2 00:26:07.198 00:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:26:07.198 00:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:07.198 00:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:26:07.198 00:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:26:07.198 00:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:26:07.198 00:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:07.198 00:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:26:07.198 00:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:07.198 00:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:26:07.456 malloc3 00:26:07.456 00:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:07.456 [2024-07-25 00:52:30.106917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:07.456 [2024-07-25 00:52:30.107030] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:07.456 [2024-07-25 00:52:30.107063] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:07.456 [2024-07-25 00:52:30.107104] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:07.715 [2024-07-25 00:52:30.109624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:07.715 [2024-07-25 00:52:30.109683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:07.715 pt3 00:26:07.715 
00:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:26:07.715 00:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:07.715 00:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:26:07.715 00:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:26:07.715 00:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:26:07.715 00:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:07.715 00:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:26:07.715 00:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:07.715 00:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:26:07.973 malloc4 00:26:07.973 00:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:07.973 [2024-07-25 00:52:30.549833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:07.973 [2024-07-25 00:52:30.549949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:07.973 [2024-07-25 00:52:30.549981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:26:07.973 [2024-07-25 00:52:30.550005] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:07.973 [2024-07-25 00:52:30.552663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:07.973 [2024-07-25 00:52:30.552732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:07.973 pt4 00:26:07.973 00:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:26:07.973 00:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:07.973 00:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:26:08.233 [2024-07-25 00:52:30.725880] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:08.233 [2024-07-25 00:52:30.727941] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:08.233 [2024-07-25 00:52:30.728006] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:08.233 [2024-07-25 00:52:30.728069] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:08.233 [2024-07-25 00:52:30.728579] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:26:08.233 [2024-07-25 00:52:30.728601] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:26:08.233 [2024-07-25 00:52:30.728746] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:26:08.233 [2024-07-25 00:52:30.729305] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:26:08.233 [2024-07-25 00:52:30.729325] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:26:08.233 [2024-07-25 00:52:30.729571] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:08.233 00:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:26:08.233 00:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:08.233 00:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:08.233 00:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:08.233 00:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:08.233 00:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:08.233 00:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:08.233 00:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:08.233 00:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:08.233 00:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:08.233 00:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:08.233 00:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:08.492 00:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:08.492 "name": "raid_bdev1", 00:26:08.492 "uuid": "6c237658-1604-48ef-b0fb-f590bab3cc73", 00:26:08.492 "strip_size_kb": 64, 00:26:08.492 "state": "online", 00:26:08.492 "raid_level": "concat", 00:26:08.492 "superblock": true, 00:26:08.492 "num_base_bdevs": 4, 00:26:08.492 "num_base_bdevs_discovered": 4, 00:26:08.492 "num_base_bdevs_operational": 4, 00:26:08.492 "base_bdevs_list": [ 00:26:08.492 { 00:26:08.492 "name": "pt1", 00:26:08.492 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:08.492 "is_configured": true, 00:26:08.492 "data_offset": 2048, 00:26:08.492 "data_size": 63488 00:26:08.492 }, 00:26:08.492 { 00:26:08.492 "name": "pt2", 00:26:08.492 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:08.492 "is_configured": true, 00:26:08.492 "data_offset": 2048, 00:26:08.492 "data_size": 63488 00:26:08.492 }, 00:26:08.492 { 00:26:08.492 "name": "pt3", 00:26:08.492 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:08.492 "is_configured": true, 00:26:08.492 "data_offset": 2048, 00:26:08.492 "data_size": 63488 00:26:08.492 }, 00:26:08.492 { 00:26:08.492 "name": "pt4", 00:26:08.492 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:08.492 "is_configured": true, 00:26:08.492 "data_offset": 2048, 00:26:08.492 "data_size": 63488 00:26:08.492 } 00:26:08.492 ] 00:26:08.492 }' 00:26:08.492 00:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:08.492 00:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.060 00:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:26:09.060 00:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:09.060 00:52:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:09.060 00:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:09.060 00:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:09.060 00:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:26:09.060 00:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:09.060 00:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:09.060 [2024-07-25 00:52:31.674289] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:09.060 00:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:09.060 "name": "raid_bdev1", 00:26:09.060 "aliases": [ 00:26:09.060 "6c237658-1604-48ef-b0fb-f590bab3cc73" 00:26:09.060 ], 00:26:09.060 "product_name": "Raid Volume", 00:26:09.060 "block_size": 512, 00:26:09.060 "num_blocks": 253952, 00:26:09.060 "uuid": "6c237658-1604-48ef-b0fb-f590bab3cc73", 00:26:09.060 "assigned_rate_limits": { 00:26:09.060 "rw_ios_per_sec": 0, 00:26:09.060 "rw_mbytes_per_sec": 0, 00:26:09.060 "r_mbytes_per_sec": 0, 00:26:09.060 "w_mbytes_per_sec": 0 00:26:09.060 }, 00:26:09.060 "claimed": false, 00:26:09.060 "zoned": false, 00:26:09.060 "supported_io_types": { 00:26:09.060 "read": true, 00:26:09.060 "write": true, 00:26:09.060 "unmap": true, 00:26:09.060 "flush": true, 00:26:09.060 "reset": true, 00:26:09.060 "nvme_admin": false, 00:26:09.060 "nvme_io": false, 00:26:09.060 "nvme_io_md": false, 00:26:09.060 "write_zeroes": true, 00:26:09.060 "zcopy": false, 00:26:09.060 "get_zone_info": false, 00:26:09.060 "zone_management": false, 00:26:09.060 "zone_append": false, 00:26:09.060 "compare": false, 00:26:09.060 "compare_and_write": false, 00:26:09.060 "abort": false, 00:26:09.060 "seek_hole": false, 00:26:09.060 "seek_data": false, 00:26:09.060 "copy": false, 00:26:09.060 "nvme_iov_md": false 00:26:09.060 }, 00:26:09.060 "memory_domains": [ 00:26:09.060 { 00:26:09.060 "dma_device_id": "system", 00:26:09.060 "dma_device_type": 1 00:26:09.060 }, 00:26:09.060 { 00:26:09.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:09.060 "dma_device_type": 2 00:26:09.060 }, 00:26:09.060 { 00:26:09.060 "dma_device_id": "system", 00:26:09.060 "dma_device_type": 1 00:26:09.060 }, 00:26:09.060 { 00:26:09.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:09.060 "dma_device_type": 2 00:26:09.060 }, 00:26:09.060 { 00:26:09.060 "dma_device_id": "system", 00:26:09.060 "dma_device_type": 1 00:26:09.060 }, 00:26:09.060 { 00:26:09.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:09.060 "dma_device_type": 2 00:26:09.060 }, 00:26:09.060 { 00:26:09.060 "dma_device_id": "system", 00:26:09.060 "dma_device_type": 1 00:26:09.060 }, 00:26:09.060 { 00:26:09.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:09.060 "dma_device_type": 2 00:26:09.060 } 00:26:09.060 ], 00:26:09.060 "driver_specific": { 00:26:09.060 "raid": { 00:26:09.060 "uuid": "6c237658-1604-48ef-b0fb-f590bab3cc73", 00:26:09.060 "strip_size_kb": 64, 00:26:09.060 "state": "online", 00:26:09.060 "raid_level": "concat", 00:26:09.060 "superblock": true, 00:26:09.060 "num_base_bdevs": 4, 00:26:09.060 "num_base_bdevs_discovered": 4, 00:26:09.060 "num_base_bdevs_operational": 4, 00:26:09.060 "base_bdevs_list": [ 00:26:09.060 { 00:26:09.060 "name": "pt1", 00:26:09.060 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:26:09.060 "is_configured": true, 00:26:09.060 "data_offset": 2048, 00:26:09.060 "data_size": 63488 00:26:09.060 }, 00:26:09.060 { 00:26:09.060 "name": "pt2", 00:26:09.060 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:09.060 "is_configured": true, 00:26:09.060 "data_offset": 2048, 00:26:09.060 "data_size": 63488 00:26:09.060 }, 00:26:09.060 { 00:26:09.060 "name": "pt3", 00:26:09.060 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:09.060 "is_configured": true, 00:26:09.060 "data_offset": 2048, 00:26:09.060 "data_size": 63488 00:26:09.060 }, 00:26:09.060 { 00:26:09.060 "name": "pt4", 00:26:09.060 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:09.060 "is_configured": true, 00:26:09.060 "data_offset": 2048, 00:26:09.060 "data_size": 63488 00:26:09.060 } 00:26:09.060 ] 00:26:09.060 } 00:26:09.060 } 00:26:09.060 }' 00:26:09.060 00:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:09.320 00:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:09.320 pt2 00:26:09.320 pt3 00:26:09.320 pt4' 00:26:09.320 00:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:09.320 00:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:09.320 00:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:09.320 00:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:09.320 "name": "pt1", 00:26:09.320 "aliases": [ 00:26:09.320 "00000000-0000-0000-0000-000000000001" 00:26:09.320 ], 00:26:09.320 "product_name": "passthru", 00:26:09.320 "block_size": 512, 00:26:09.320 "num_blocks": 65536, 00:26:09.320 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:09.320 "assigned_rate_limits": { 00:26:09.320 "rw_ios_per_sec": 0, 00:26:09.320 "rw_mbytes_per_sec": 0, 00:26:09.320 "r_mbytes_per_sec": 0, 00:26:09.320 "w_mbytes_per_sec": 0 00:26:09.320 }, 00:26:09.320 "claimed": true, 00:26:09.320 "claim_type": "exclusive_write", 00:26:09.320 "zoned": false, 00:26:09.320 "supported_io_types": { 00:26:09.320 "read": true, 00:26:09.320 "write": true, 00:26:09.320 "unmap": true, 00:26:09.320 "flush": true, 00:26:09.320 "reset": true, 00:26:09.320 "nvme_admin": false, 00:26:09.320 "nvme_io": false, 00:26:09.320 "nvme_io_md": false, 00:26:09.320 "write_zeroes": true, 00:26:09.320 "zcopy": true, 00:26:09.320 "get_zone_info": false, 00:26:09.320 "zone_management": false, 00:26:09.320 "zone_append": false, 00:26:09.320 "compare": false, 00:26:09.320 "compare_and_write": false, 00:26:09.320 "abort": true, 00:26:09.320 "seek_hole": false, 00:26:09.320 "seek_data": false, 00:26:09.320 "copy": true, 00:26:09.320 "nvme_iov_md": false 00:26:09.320 }, 00:26:09.320 "memory_domains": [ 00:26:09.320 { 00:26:09.320 "dma_device_id": "system", 00:26:09.320 "dma_device_type": 1 00:26:09.320 }, 00:26:09.320 { 00:26:09.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:09.320 "dma_device_type": 2 00:26:09.320 } 00:26:09.320 ], 00:26:09.320 "driver_specific": { 00:26:09.320 "passthru": { 00:26:09.320 "name": "pt1", 00:26:09.320 "base_bdev_name": "malloc1" 00:26:09.320 } 00:26:09.320 } 00:26:09.320 }' 00:26:09.320 00:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:09.320 00:52:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:09.579 00:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:09.579 00:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:09.579 00:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:09.579 00:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:09.579 00:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:09.579 00:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:09.579 00:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:09.579 00:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:09.579 00:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:09.579 00:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:09.579 00:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:09.579 00:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:09.579 00:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:10.160 00:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:10.160 "name": "pt2", 00:26:10.160 "aliases": [ 00:26:10.160 "00000000-0000-0000-0000-000000000002" 00:26:10.160 ], 00:26:10.160 "product_name": "passthru", 00:26:10.160 "block_size": 512, 00:26:10.160 "num_blocks": 65536, 00:26:10.160 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:10.160 "assigned_rate_limits": { 00:26:10.160 "rw_ios_per_sec": 0, 00:26:10.160 "rw_mbytes_per_sec": 0, 00:26:10.160 "r_mbytes_per_sec": 0, 00:26:10.160 "w_mbytes_per_sec": 0 00:26:10.160 }, 00:26:10.160 "claimed": true, 00:26:10.160 "claim_type": "exclusive_write", 00:26:10.160 "zoned": false, 00:26:10.160 "supported_io_types": { 00:26:10.160 "read": true, 00:26:10.160 "write": true, 00:26:10.160 "unmap": true, 00:26:10.160 "flush": true, 00:26:10.160 "reset": true, 00:26:10.160 "nvme_admin": false, 00:26:10.160 "nvme_io": false, 00:26:10.160 "nvme_io_md": false, 00:26:10.160 "write_zeroes": true, 00:26:10.160 "zcopy": true, 00:26:10.160 "get_zone_info": false, 00:26:10.160 "zone_management": false, 00:26:10.160 "zone_append": false, 00:26:10.160 "compare": false, 00:26:10.160 "compare_and_write": false, 00:26:10.160 "abort": true, 00:26:10.160 "seek_hole": false, 00:26:10.160 "seek_data": false, 00:26:10.160 "copy": true, 00:26:10.160 "nvme_iov_md": false 00:26:10.160 }, 00:26:10.160 "memory_domains": [ 00:26:10.160 { 00:26:10.160 "dma_device_id": "system", 00:26:10.160 "dma_device_type": 1 00:26:10.160 }, 00:26:10.160 { 00:26:10.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:10.160 "dma_device_type": 2 00:26:10.160 } 00:26:10.160 ], 00:26:10.160 "driver_specific": { 00:26:10.160 "passthru": { 00:26:10.160 "name": "pt2", 00:26:10.160 "base_bdev_name": "malloc2" 00:26:10.160 } 00:26:10.160 } 00:26:10.160 }' 00:26:10.160 00:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:10.160 00:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:10.160 00:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 
-- # [[ 512 == 512 ]] 00:26:10.160 00:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:10.160 00:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:10.160 00:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:10.160 00:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:10.160 00:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:10.160 00:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:10.160 00:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:10.433 00:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:10.433 00:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:10.433 00:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:10.433 00:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:10.433 00:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:26:10.692 00:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:10.692 "name": "pt3", 00:26:10.692 "aliases": [ 00:26:10.692 "00000000-0000-0000-0000-000000000003" 00:26:10.692 ], 00:26:10.692 "product_name": "passthru", 00:26:10.692 "block_size": 512, 00:26:10.692 "num_blocks": 65536, 00:26:10.692 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:10.692 "assigned_rate_limits": { 00:26:10.692 "rw_ios_per_sec": 0, 00:26:10.692 "rw_mbytes_per_sec": 0, 00:26:10.692 "r_mbytes_per_sec": 0, 00:26:10.692 "w_mbytes_per_sec": 0 00:26:10.692 }, 00:26:10.692 "claimed": true, 00:26:10.692 "claim_type": "exclusive_write", 00:26:10.692 "zoned": false, 00:26:10.692 "supported_io_types": { 00:26:10.692 "read": true, 00:26:10.692 "write": true, 00:26:10.692 "unmap": true, 00:26:10.692 "flush": true, 00:26:10.692 "reset": true, 00:26:10.692 "nvme_admin": false, 00:26:10.692 "nvme_io": false, 00:26:10.692 "nvme_io_md": false, 00:26:10.692 "write_zeroes": true, 00:26:10.692 "zcopy": true, 00:26:10.692 "get_zone_info": false, 00:26:10.692 "zone_management": false, 00:26:10.692 "zone_append": false, 00:26:10.692 "compare": false, 00:26:10.692 "compare_and_write": false, 00:26:10.692 "abort": true, 00:26:10.692 "seek_hole": false, 00:26:10.692 "seek_data": false, 00:26:10.692 "copy": true, 00:26:10.692 "nvme_iov_md": false 00:26:10.692 }, 00:26:10.692 "memory_domains": [ 00:26:10.692 { 00:26:10.692 "dma_device_id": "system", 00:26:10.692 "dma_device_type": 1 00:26:10.692 }, 00:26:10.692 { 00:26:10.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:10.692 "dma_device_type": 2 00:26:10.692 } 00:26:10.692 ], 00:26:10.692 "driver_specific": { 00:26:10.692 "passthru": { 00:26:10.692 "name": "pt3", 00:26:10.692 "base_bdev_name": "malloc3" 00:26:10.692 } 00:26:10.692 } 00:26:10.692 }' 00:26:10.692 00:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:10.692 00:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:10.692 00:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:10.692 00:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:10.692 00:52:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:10.692 00:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:10.692 00:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:10.692 00:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:10.950 00:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:10.950 00:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:10.950 00:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:10.950 00:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:10.950 00:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:10.950 00:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:26:10.950 00:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:11.208 00:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:11.208 "name": "pt4", 00:26:11.208 "aliases": [ 00:26:11.208 "00000000-0000-0000-0000-000000000004" 00:26:11.208 ], 00:26:11.208 "product_name": "passthru", 00:26:11.208 "block_size": 512, 00:26:11.208 "num_blocks": 65536, 00:26:11.208 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:11.208 "assigned_rate_limits": { 00:26:11.208 "rw_ios_per_sec": 0, 00:26:11.208 "rw_mbytes_per_sec": 0, 00:26:11.208 "r_mbytes_per_sec": 0, 00:26:11.208 "w_mbytes_per_sec": 0 00:26:11.208 }, 00:26:11.208 "claimed": true, 00:26:11.208 "claim_type": "exclusive_write", 00:26:11.208 "zoned": false, 00:26:11.208 "supported_io_types": { 00:26:11.208 "read": true, 00:26:11.208 "write": true, 00:26:11.208 "unmap": true, 00:26:11.208 "flush": true, 00:26:11.208 "reset": true, 00:26:11.208 "nvme_admin": false, 00:26:11.208 "nvme_io": false, 00:26:11.208 "nvme_io_md": false, 00:26:11.208 "write_zeroes": true, 00:26:11.208 "zcopy": true, 00:26:11.208 "get_zone_info": false, 00:26:11.208 "zone_management": false, 00:26:11.208 "zone_append": false, 00:26:11.208 "compare": false, 00:26:11.208 "compare_and_write": false, 00:26:11.208 "abort": true, 00:26:11.208 "seek_hole": false, 00:26:11.208 "seek_data": false, 00:26:11.208 "copy": true, 00:26:11.208 "nvme_iov_md": false 00:26:11.208 }, 00:26:11.208 "memory_domains": [ 00:26:11.208 { 00:26:11.208 "dma_device_id": "system", 00:26:11.208 "dma_device_type": 1 00:26:11.208 }, 00:26:11.208 { 00:26:11.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:11.208 "dma_device_type": 2 00:26:11.208 } 00:26:11.208 ], 00:26:11.208 "driver_specific": { 00:26:11.208 "passthru": { 00:26:11.208 "name": "pt4", 00:26:11.208 "base_bdev_name": "malloc4" 00:26:11.208 } 00:26:11.208 } 00:26:11.208 }' 00:26:11.208 00:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:11.208 00:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:11.208 00:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:11.208 00:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:11.467 00:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:11.467 00:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- 
# [[ null == null ]] 00:26:11.467 00:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:11.467 00:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:11.467 00:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:11.467 00:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:11.467 00:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:11.467 00:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:11.467 00:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:11.467 00:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:26:11.726 [2024-07-25 00:52:34.335268] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:11.726 00:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=6c237658-1604-48ef-b0fb-f590bab3cc73 00:26:11.726 00:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 6c237658-1604-48ef-b0fb-f590bab3cc73 ']' 00:26:11.726 00:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:11.985 [2024-07-25 00:52:34.515044] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:11.985 [2024-07-25 00:52:34.515075] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:11.985 [2024-07-25 00:52:34.515154] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:11.985 [2024-07-25 00:52:34.515221] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:11.985 [2024-07-25 00:52:34.515230] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:26:11.985 00:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.985 00:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:26:12.244 00:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:26:12.244 00:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:26:12.244 00:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:26:12.244 00:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:12.502 00:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:26:12.502 00:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:12.760 00:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:26:12.760 00:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:12.760 00:52:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:26:12.760 00:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:26:13.020 00:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:26:13.020 00:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:13.278 00:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:26:13.278 00:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:13.278 00:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:26:13.278 00:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:13.278 00:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:13.278 00:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:13.278 00:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:13.278 00:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:13.278 00:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:13.278 00:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:13.278 00:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:13.279 00:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:26:13.279 00:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:13.537 [2024-07-25 00:52:35.984369] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:13.537 [2024-07-25 00:52:35.986540] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:13.537 [2024-07-25 00:52:35.986600] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:26:13.537 [2024-07-25 00:52:35.986632] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:26:13.537 [2024-07-25 00:52:35.986675] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:26:13.537 [2024-07-25 00:52:35.987109] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:26:13.537 [2024-07-25 00:52:35.987163] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev 
found on bdev malloc3 00:26:13.537 [2024-07-25 00:52:35.987197] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:26:13.537 [2024-07-25 00:52:35.987220] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:13.537 [2024-07-25 00:52:35.987229] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:26:13.537 request: 00:26:13.537 { 00:26:13.537 "name": "raid_bdev1", 00:26:13.537 "raid_level": "concat", 00:26:13.537 "base_bdevs": [ 00:26:13.537 "malloc1", 00:26:13.537 "malloc2", 00:26:13.537 "malloc3", 00:26:13.537 "malloc4" 00:26:13.537 ], 00:26:13.537 "strip_size_kb": 64, 00:26:13.537 "superblock": false, 00:26:13.537 "method": "bdev_raid_create", 00:26:13.537 "req_id": 1 00:26:13.537 } 00:26:13.537 Got JSON-RPC error response 00:26:13.537 response: 00:26:13.537 { 00:26:13.537 "code": -17, 00:26:13.537 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:13.537 } 00:26:13.537 00:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:26:13.537 00:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:13.537 00:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:13.537 00:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:13.537 00:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:26:13.537 00:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.796 00:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:26:13.796 00:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:26:13.796 00:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:13.796 [2024-07-25 00:52:36.428413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:13.796 [2024-07-25 00:52:36.428512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:13.796 [2024-07-25 00:52:36.428541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:26:13.796 [2024-07-25 00:52:36.428582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:13.796 [2024-07-25 00:52:36.431164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:13.796 [2024-07-25 00:52:36.431216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:13.796 [2024-07-25 00:52:36.431326] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:13.796 [2024-07-25 00:52:36.431385] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:13.796 pt1 00:26:13.796 00:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:26:13.796 00:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:13.796 00:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:13.796 00:52:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:13.797 00:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:13.797 00:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:13.797 00:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:13.797 00:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:14.055 00:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:14.055 00:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:14.055 00:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:14.055 00:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:14.055 00:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:14.055 "name": "raid_bdev1", 00:26:14.055 "uuid": "6c237658-1604-48ef-b0fb-f590bab3cc73", 00:26:14.055 "strip_size_kb": 64, 00:26:14.055 "state": "configuring", 00:26:14.055 "raid_level": "concat", 00:26:14.055 "superblock": true, 00:26:14.055 "num_base_bdevs": 4, 00:26:14.055 "num_base_bdevs_discovered": 1, 00:26:14.055 "num_base_bdevs_operational": 4, 00:26:14.055 "base_bdevs_list": [ 00:26:14.055 { 00:26:14.055 "name": "pt1", 00:26:14.055 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:14.055 "is_configured": true, 00:26:14.055 "data_offset": 2048, 00:26:14.055 "data_size": 63488 00:26:14.055 }, 00:26:14.055 { 00:26:14.055 "name": null, 00:26:14.055 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:14.055 "is_configured": false, 00:26:14.055 "data_offset": 2048, 00:26:14.055 "data_size": 63488 00:26:14.055 }, 00:26:14.055 { 00:26:14.055 "name": null, 00:26:14.055 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:14.055 "is_configured": false, 00:26:14.055 "data_offset": 2048, 00:26:14.055 "data_size": 63488 00:26:14.055 }, 00:26:14.055 { 00:26:14.055 "name": null, 00:26:14.055 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:14.055 "is_configured": false, 00:26:14.055 "data_offset": 2048, 00:26:14.055 "data_size": 63488 00:26:14.055 } 00:26:14.055 ] 00:26:14.055 }' 00:26:14.055 00:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:14.055 00:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.622 00:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:26:14.622 00:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:14.880 [2024-07-25 00:52:37.357253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:14.880 [2024-07-25 00:52:37.357357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:14.880 [2024-07-25 00:52:37.357398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:26:14.880 [2024-07-25 00:52:37.357433] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:14.880 [2024-07-25 00:52:37.358193] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:26:14.880 [2024-07-25 00:52:37.358252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:14.880 [2024-07-25 00:52:37.358368] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:14.880 [2024-07-25 00:52:37.358503] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:14.880 pt2 00:26:14.880 00:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:15.139 [2024-07-25 00:52:37.629336] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:26:15.139 00:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:26:15.139 00:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:15.139 00:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:15.139 00:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:15.139 00:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:15.139 00:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:15.139 00:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:15.139 00:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:15.139 00:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:15.139 00:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:15.139 00:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:15.139 00:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.398 00:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:15.398 "name": "raid_bdev1", 00:26:15.398 "uuid": "6c237658-1604-48ef-b0fb-f590bab3cc73", 00:26:15.398 "strip_size_kb": 64, 00:26:15.398 "state": "configuring", 00:26:15.398 "raid_level": "concat", 00:26:15.398 "superblock": true, 00:26:15.398 "num_base_bdevs": 4, 00:26:15.398 "num_base_bdevs_discovered": 1, 00:26:15.398 "num_base_bdevs_operational": 4, 00:26:15.398 "base_bdevs_list": [ 00:26:15.398 { 00:26:15.398 "name": "pt1", 00:26:15.398 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:15.398 "is_configured": true, 00:26:15.398 "data_offset": 2048, 00:26:15.398 "data_size": 63488 00:26:15.398 }, 00:26:15.398 { 00:26:15.398 "name": null, 00:26:15.398 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:15.398 "is_configured": false, 00:26:15.398 "data_offset": 2048, 00:26:15.398 "data_size": 63488 00:26:15.398 }, 00:26:15.398 { 00:26:15.398 "name": null, 00:26:15.398 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:15.398 "is_configured": false, 00:26:15.398 "data_offset": 2048, 00:26:15.398 "data_size": 63488 00:26:15.398 }, 00:26:15.398 { 00:26:15.398 "name": null, 00:26:15.398 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:15.398 "is_configured": false, 00:26:15.398 "data_offset": 2048, 00:26:15.398 "data_size": 63488 00:26:15.398 } 00:26:15.398 ] 00:26:15.398 }' 00:26:15.398 00:52:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:15.398 00:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.967 00:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:26:15.967 00:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:26:15.967 00:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:15.967 [2024-07-25 00:52:38.589521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:15.967 [2024-07-25 00:52:38.589617] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:15.967 [2024-07-25 00:52:38.589650] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:26:15.967 [2024-07-25 00:52:38.589691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:15.967 [2024-07-25 00:52:38.590380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:15.967 [2024-07-25 00:52:38.590424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:15.967 [2024-07-25 00:52:38.590534] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:15.967 [2024-07-25 00:52:38.590555] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:15.967 pt2 00:26:15.967 00:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:26:15.967 00:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:26:15.967 00:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:16.226 [2024-07-25 00:52:38.765548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:16.226 [2024-07-25 00:52:38.765625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:16.226 [2024-07-25 00:52:38.765668] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:26:16.226 [2024-07-25 00:52:38.765713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:16.226 [2024-07-25 00:52:38.766386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:16.226 [2024-07-25 00:52:38.766433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:16.226 [2024-07-25 00:52:38.766535] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:16.226 [2024-07-25 00:52:38.766556] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:16.226 pt3 00:26:16.226 00:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:26:16.226 00:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:26:16.226 00:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:16.485 [2024-07-25 00:52:38.941650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on malloc4 00:26:16.485 [2024-07-25 00:52:38.941719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:16.485 [2024-07-25 00:52:38.941763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:26:16.485 [2024-07-25 00:52:38.941803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:16.485 [2024-07-25 00:52:38.942423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:16.485 [2024-07-25 00:52:38.942470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:16.485 [2024-07-25 00:52:38.942570] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:16.485 [2024-07-25 00:52:38.942601] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:16.485 [2024-07-25 00:52:38.942951] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:26:16.485 [2024-07-25 00:52:38.942971] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:26:16.485 [2024-07-25 00:52:38.943066] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:26:16.485 [2024-07-25 00:52:38.943621] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:26:16.485 [2024-07-25 00:52:38.943642] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:26:16.485 [2024-07-25 00:52:38.943770] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:16.485 pt4 00:26:16.485 00:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:26:16.485 00:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:26:16.485 00:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:26:16.485 00:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:16.485 00:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:16.485 00:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:16.485 00:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:16.485 00:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:16.485 00:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:16.485 00:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:16.485 00:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:16.485 00:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:16.485 00:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:16.485 00:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.744 00:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:16.744 "name": "raid_bdev1", 00:26:16.744 "uuid": "6c237658-1604-48ef-b0fb-f590bab3cc73", 00:26:16.744 "strip_size_kb": 64, 00:26:16.744 "state": "online", 00:26:16.744 
"raid_level": "concat", 00:26:16.744 "superblock": true, 00:26:16.744 "num_base_bdevs": 4, 00:26:16.744 "num_base_bdevs_discovered": 4, 00:26:16.744 "num_base_bdevs_operational": 4, 00:26:16.744 "base_bdevs_list": [ 00:26:16.744 { 00:26:16.744 "name": "pt1", 00:26:16.744 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:16.744 "is_configured": true, 00:26:16.744 "data_offset": 2048, 00:26:16.744 "data_size": 63488 00:26:16.744 }, 00:26:16.744 { 00:26:16.744 "name": "pt2", 00:26:16.744 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:16.744 "is_configured": true, 00:26:16.744 "data_offset": 2048, 00:26:16.744 "data_size": 63488 00:26:16.744 }, 00:26:16.744 { 00:26:16.744 "name": "pt3", 00:26:16.744 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:16.744 "is_configured": true, 00:26:16.744 "data_offset": 2048, 00:26:16.744 "data_size": 63488 00:26:16.744 }, 00:26:16.744 { 00:26:16.744 "name": "pt4", 00:26:16.744 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:16.744 "is_configured": true, 00:26:16.744 "data_offset": 2048, 00:26:16.744 "data_size": 63488 00:26:16.744 } 00:26:16.744 ] 00:26:16.744 }' 00:26:16.744 00:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:16.744 00:52:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.311 00:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:26:17.311 00:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:17.311 00:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:17.311 00:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:17.311 00:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:17.311 00:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:26:17.311 00:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:17.311 00:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:17.311 [2024-07-25 00:52:39.834215] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:17.311 00:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:17.311 "name": "raid_bdev1", 00:26:17.311 "aliases": [ 00:26:17.311 "6c237658-1604-48ef-b0fb-f590bab3cc73" 00:26:17.312 ], 00:26:17.312 "product_name": "Raid Volume", 00:26:17.312 "block_size": 512, 00:26:17.312 "num_blocks": 253952, 00:26:17.312 "uuid": "6c237658-1604-48ef-b0fb-f590bab3cc73", 00:26:17.312 "assigned_rate_limits": { 00:26:17.312 "rw_ios_per_sec": 0, 00:26:17.312 "rw_mbytes_per_sec": 0, 00:26:17.312 "r_mbytes_per_sec": 0, 00:26:17.312 "w_mbytes_per_sec": 0 00:26:17.312 }, 00:26:17.312 "claimed": false, 00:26:17.312 "zoned": false, 00:26:17.312 "supported_io_types": { 00:26:17.312 "read": true, 00:26:17.312 "write": true, 00:26:17.312 "unmap": true, 00:26:17.312 "flush": true, 00:26:17.312 "reset": true, 00:26:17.312 "nvme_admin": false, 00:26:17.312 "nvme_io": false, 00:26:17.312 "nvme_io_md": false, 00:26:17.312 "write_zeroes": true, 00:26:17.312 "zcopy": false, 00:26:17.312 "get_zone_info": false, 00:26:17.312 "zone_management": false, 00:26:17.312 "zone_append": false, 00:26:17.312 "compare": false, 00:26:17.312 "compare_and_write": false, 
00:26:17.312 "abort": false, 00:26:17.312 "seek_hole": false, 00:26:17.312 "seek_data": false, 00:26:17.312 "copy": false, 00:26:17.312 "nvme_iov_md": false 00:26:17.312 }, 00:26:17.312 "memory_domains": [ 00:26:17.312 { 00:26:17.312 "dma_device_id": "system", 00:26:17.312 "dma_device_type": 1 00:26:17.312 }, 00:26:17.312 { 00:26:17.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.312 "dma_device_type": 2 00:26:17.312 }, 00:26:17.312 { 00:26:17.312 "dma_device_id": "system", 00:26:17.312 "dma_device_type": 1 00:26:17.312 }, 00:26:17.312 { 00:26:17.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.312 "dma_device_type": 2 00:26:17.312 }, 00:26:17.312 { 00:26:17.312 "dma_device_id": "system", 00:26:17.312 "dma_device_type": 1 00:26:17.312 }, 00:26:17.312 { 00:26:17.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.312 "dma_device_type": 2 00:26:17.312 }, 00:26:17.312 { 00:26:17.312 "dma_device_id": "system", 00:26:17.312 "dma_device_type": 1 00:26:17.312 }, 00:26:17.312 { 00:26:17.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.312 "dma_device_type": 2 00:26:17.312 } 00:26:17.312 ], 00:26:17.312 "driver_specific": { 00:26:17.312 "raid": { 00:26:17.312 "uuid": "6c237658-1604-48ef-b0fb-f590bab3cc73", 00:26:17.312 "strip_size_kb": 64, 00:26:17.312 "state": "online", 00:26:17.312 "raid_level": "concat", 00:26:17.312 "superblock": true, 00:26:17.312 "num_base_bdevs": 4, 00:26:17.312 "num_base_bdevs_discovered": 4, 00:26:17.312 "num_base_bdevs_operational": 4, 00:26:17.312 "base_bdevs_list": [ 00:26:17.312 { 00:26:17.312 "name": "pt1", 00:26:17.312 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:17.312 "is_configured": true, 00:26:17.312 "data_offset": 2048, 00:26:17.312 "data_size": 63488 00:26:17.312 }, 00:26:17.312 { 00:26:17.312 "name": "pt2", 00:26:17.312 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:17.312 "is_configured": true, 00:26:17.312 "data_offset": 2048, 00:26:17.312 "data_size": 63488 00:26:17.312 }, 00:26:17.312 { 00:26:17.312 "name": "pt3", 00:26:17.312 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:17.312 "is_configured": true, 00:26:17.312 "data_offset": 2048, 00:26:17.312 "data_size": 63488 00:26:17.312 }, 00:26:17.312 { 00:26:17.312 "name": "pt4", 00:26:17.312 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:17.312 "is_configured": true, 00:26:17.312 "data_offset": 2048, 00:26:17.312 "data_size": 63488 00:26:17.312 } 00:26:17.312 ] 00:26:17.312 } 00:26:17.312 } 00:26:17.312 }' 00:26:17.312 00:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:17.312 00:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:17.312 pt2 00:26:17.312 pt3 00:26:17.312 pt4' 00:26:17.312 00:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:17.312 00:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:17.312 00:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:17.571 00:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:17.571 "name": "pt1", 00:26:17.571 "aliases": [ 00:26:17.571 "00000000-0000-0000-0000-000000000001" 00:26:17.571 ], 00:26:17.571 "product_name": "passthru", 00:26:17.571 "block_size": 512, 00:26:17.571 "num_blocks": 65536, 00:26:17.571 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:26:17.571 "assigned_rate_limits": { 00:26:17.571 "rw_ios_per_sec": 0, 00:26:17.571 "rw_mbytes_per_sec": 0, 00:26:17.571 "r_mbytes_per_sec": 0, 00:26:17.571 "w_mbytes_per_sec": 0 00:26:17.571 }, 00:26:17.571 "claimed": true, 00:26:17.571 "claim_type": "exclusive_write", 00:26:17.571 "zoned": false, 00:26:17.571 "supported_io_types": { 00:26:17.571 "read": true, 00:26:17.571 "write": true, 00:26:17.571 "unmap": true, 00:26:17.571 "flush": true, 00:26:17.571 "reset": true, 00:26:17.571 "nvme_admin": false, 00:26:17.571 "nvme_io": false, 00:26:17.571 "nvme_io_md": false, 00:26:17.571 "write_zeroes": true, 00:26:17.571 "zcopy": true, 00:26:17.571 "get_zone_info": false, 00:26:17.571 "zone_management": false, 00:26:17.571 "zone_append": false, 00:26:17.571 "compare": false, 00:26:17.571 "compare_and_write": false, 00:26:17.571 "abort": true, 00:26:17.571 "seek_hole": false, 00:26:17.571 "seek_data": false, 00:26:17.571 "copy": true, 00:26:17.571 "nvme_iov_md": false 00:26:17.571 }, 00:26:17.571 "memory_domains": [ 00:26:17.571 { 00:26:17.571 "dma_device_id": "system", 00:26:17.571 "dma_device_type": 1 00:26:17.571 }, 00:26:17.571 { 00:26:17.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.571 "dma_device_type": 2 00:26:17.571 } 00:26:17.571 ], 00:26:17.571 "driver_specific": { 00:26:17.571 "passthru": { 00:26:17.571 "name": "pt1", 00:26:17.571 "base_bdev_name": "malloc1" 00:26:17.571 } 00:26:17.571 } 00:26:17.571 }' 00:26:17.571 00:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:17.571 00:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:17.830 00:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:17.830 00:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:17.830 00:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:17.830 00:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:17.830 00:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:17.830 00:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:17.830 00:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:17.830 00:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:17.830 00:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:18.090 00:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:18.090 00:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:18.090 00:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:18.090 00:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:18.090 00:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:18.090 "name": "pt2", 00:26:18.090 "aliases": [ 00:26:18.090 "00000000-0000-0000-0000-000000000002" 00:26:18.090 ], 00:26:18.090 "product_name": "passthru", 00:26:18.090 "block_size": 512, 00:26:18.090 "num_blocks": 65536, 00:26:18.090 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:18.090 "assigned_rate_limits": { 00:26:18.090 "rw_ios_per_sec": 0, 00:26:18.090 "rw_mbytes_per_sec": 0, 
00:26:18.090 "r_mbytes_per_sec": 0, 00:26:18.090 "w_mbytes_per_sec": 0 00:26:18.090 }, 00:26:18.090 "claimed": true, 00:26:18.090 "claim_type": "exclusive_write", 00:26:18.090 "zoned": false, 00:26:18.090 "supported_io_types": { 00:26:18.090 "read": true, 00:26:18.090 "write": true, 00:26:18.090 "unmap": true, 00:26:18.090 "flush": true, 00:26:18.090 "reset": true, 00:26:18.090 "nvme_admin": false, 00:26:18.090 "nvme_io": false, 00:26:18.090 "nvme_io_md": false, 00:26:18.090 "write_zeroes": true, 00:26:18.090 "zcopy": true, 00:26:18.090 "get_zone_info": false, 00:26:18.090 "zone_management": false, 00:26:18.090 "zone_append": false, 00:26:18.090 "compare": false, 00:26:18.090 "compare_and_write": false, 00:26:18.090 "abort": true, 00:26:18.090 "seek_hole": false, 00:26:18.090 "seek_data": false, 00:26:18.090 "copy": true, 00:26:18.090 "nvme_iov_md": false 00:26:18.090 }, 00:26:18.090 "memory_domains": [ 00:26:18.090 { 00:26:18.090 "dma_device_id": "system", 00:26:18.090 "dma_device_type": 1 00:26:18.090 }, 00:26:18.090 { 00:26:18.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:18.090 "dma_device_type": 2 00:26:18.090 } 00:26:18.090 ], 00:26:18.090 "driver_specific": { 00:26:18.090 "passthru": { 00:26:18.090 "name": "pt2", 00:26:18.090 "base_bdev_name": "malloc2" 00:26:18.090 } 00:26:18.090 } 00:26:18.090 }' 00:26:18.090 00:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:18.349 00:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:18.349 00:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:18.349 00:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:18.349 00:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:18.349 00:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:18.349 00:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:18.349 00:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:18.349 00:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:18.349 00:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:18.608 00:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:18.608 00:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:18.608 00:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:18.608 00:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:26:18.608 00:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:18.867 00:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:18.867 "name": "pt3", 00:26:18.867 "aliases": [ 00:26:18.867 "00000000-0000-0000-0000-000000000003" 00:26:18.867 ], 00:26:18.867 "product_name": "passthru", 00:26:18.867 "block_size": 512, 00:26:18.867 "num_blocks": 65536, 00:26:18.867 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:18.867 "assigned_rate_limits": { 00:26:18.867 "rw_ios_per_sec": 0, 00:26:18.867 "rw_mbytes_per_sec": 0, 00:26:18.867 "r_mbytes_per_sec": 0, 00:26:18.867 "w_mbytes_per_sec": 0 00:26:18.867 }, 00:26:18.867 "claimed": true, 00:26:18.867 "claim_type": 
"exclusive_write", 00:26:18.867 "zoned": false, 00:26:18.867 "supported_io_types": { 00:26:18.867 "read": true, 00:26:18.867 "write": true, 00:26:18.867 "unmap": true, 00:26:18.867 "flush": true, 00:26:18.867 "reset": true, 00:26:18.867 "nvme_admin": false, 00:26:18.867 "nvme_io": false, 00:26:18.867 "nvme_io_md": false, 00:26:18.867 "write_zeroes": true, 00:26:18.867 "zcopy": true, 00:26:18.867 "get_zone_info": false, 00:26:18.867 "zone_management": false, 00:26:18.867 "zone_append": false, 00:26:18.867 "compare": false, 00:26:18.867 "compare_and_write": false, 00:26:18.867 "abort": true, 00:26:18.867 "seek_hole": false, 00:26:18.867 "seek_data": false, 00:26:18.867 "copy": true, 00:26:18.867 "nvme_iov_md": false 00:26:18.867 }, 00:26:18.867 "memory_domains": [ 00:26:18.867 { 00:26:18.867 "dma_device_id": "system", 00:26:18.867 "dma_device_type": 1 00:26:18.867 }, 00:26:18.867 { 00:26:18.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:18.867 "dma_device_type": 2 00:26:18.867 } 00:26:18.867 ], 00:26:18.867 "driver_specific": { 00:26:18.868 "passthru": { 00:26:18.868 "name": "pt3", 00:26:18.868 "base_bdev_name": "malloc3" 00:26:18.868 } 00:26:18.868 } 00:26:18.868 }' 00:26:18.868 00:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:18.868 00:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:18.868 00:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:18.868 00:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:18.868 00:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:18.868 00:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:18.868 00:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:19.127 00:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:19.127 00:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:19.127 00:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:19.127 00:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:19.127 00:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:19.127 00:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:19.127 00:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:26:19.127 00:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:19.385 00:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:19.385 "name": "pt4", 00:26:19.385 "aliases": [ 00:26:19.385 "00000000-0000-0000-0000-000000000004" 00:26:19.385 ], 00:26:19.385 "product_name": "passthru", 00:26:19.385 "block_size": 512, 00:26:19.385 "num_blocks": 65536, 00:26:19.385 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:19.385 "assigned_rate_limits": { 00:26:19.385 "rw_ios_per_sec": 0, 00:26:19.385 "rw_mbytes_per_sec": 0, 00:26:19.385 "r_mbytes_per_sec": 0, 00:26:19.385 "w_mbytes_per_sec": 0 00:26:19.385 }, 00:26:19.385 "claimed": true, 00:26:19.385 "claim_type": "exclusive_write", 00:26:19.385 "zoned": false, 00:26:19.385 "supported_io_types": { 00:26:19.385 "read": true, 00:26:19.385 "write": true, 00:26:19.385 
"unmap": true, 00:26:19.385 "flush": true, 00:26:19.385 "reset": true, 00:26:19.385 "nvme_admin": false, 00:26:19.385 "nvme_io": false, 00:26:19.385 "nvme_io_md": false, 00:26:19.385 "write_zeroes": true, 00:26:19.385 "zcopy": true, 00:26:19.385 "get_zone_info": false, 00:26:19.385 "zone_management": false, 00:26:19.385 "zone_append": false, 00:26:19.385 "compare": false, 00:26:19.385 "compare_and_write": false, 00:26:19.385 "abort": true, 00:26:19.385 "seek_hole": false, 00:26:19.385 "seek_data": false, 00:26:19.385 "copy": true, 00:26:19.385 "nvme_iov_md": false 00:26:19.385 }, 00:26:19.385 "memory_domains": [ 00:26:19.385 { 00:26:19.385 "dma_device_id": "system", 00:26:19.385 "dma_device_type": 1 00:26:19.385 }, 00:26:19.385 { 00:26:19.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:19.385 "dma_device_type": 2 00:26:19.385 } 00:26:19.385 ], 00:26:19.385 "driver_specific": { 00:26:19.385 "passthru": { 00:26:19.385 "name": "pt4", 00:26:19.385 "base_bdev_name": "malloc4" 00:26:19.385 } 00:26:19.385 } 00:26:19.385 }' 00:26:19.385 00:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:19.385 00:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:19.644 00:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:19.644 00:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:19.644 00:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:19.644 00:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:19.644 00:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:19.644 00:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:19.644 00:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:19.644 00:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:19.903 00:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:19.903 00:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:19.903 00:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:26:19.903 00:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:20.161 [2024-07-25 00:52:42.598948] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:20.161 00:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 6c237658-1604-48ef-b0fb-f590bab3cc73 '!=' 6c237658-1604-48ef-b0fb-f590bab3cc73 ']' 00:26:20.161 00:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:26:20.161 00:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:20.161 00:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:26:20.161 00:52:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 140214 00:26:20.161 00:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 140214 ']' 00:26:20.161 00:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 140214 00:26:20.161 00:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:26:20.161 00:52:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:20.161 00:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 140214 00:26:20.161 00:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:20.162 00:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:20.162 00:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 140214' 00:26:20.162 killing process with pid 140214 00:26:20.162 00:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 140214 00:26:20.162 00:52:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 140214 00:26:20.162 [2024-07-25 00:52:42.647515] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:20.162 [2024-07-25 00:52:42.647648] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:20.162 [2024-07-25 00:52:42.647776] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:20.162 [2024-07-25 00:52:42.647787] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:26:20.420 [2024-07-25 00:52:43.047734] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:21.792 00:52:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:26:21.792 00:26:21.792 real 0m16.633s 00:26:21.792 user 0m28.823s 00:26:21.792 sys 0m2.514s 00:26:21.792 00:52:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:21.792 00:52:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.792 ************************************ 00:26:21.792 END TEST raid_superblock_test 00:26:21.792 ************************************ 00:26:21.792 00:52:44 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:26:21.792 00:52:44 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:26:21.792 00:52:44 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:21.792 00:52:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:22.051 ************************************ 00:26:22.051 START TEST raid_read_error_test 00:26:22.051 ************************************ 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 read 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo 
BaseBdev2 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.lmoDYn0rGy 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=140759 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 140759 /var/tmp/spdk-raid.sock 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 140759 ']' 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:22.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:22.051 00:52:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.051 [2024-07-25 00:52:44.552278] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:26:22.051 [2024-07-25 00:52:44.552492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140759 ] 00:26:22.310 [2024-07-25 00:52:44.732666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.310 [2024-07-25 00:52:44.929157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.568 [2024-07-25 00:52:45.119732] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:23.141 00:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:23.141 00:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:26:23.141 00:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:23.141 00:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:23.141 BaseBdev1_malloc 00:26:23.399 00:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:26:23.399 true 00:26:23.399 00:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:23.657 [2024-07-25 00:52:46.145189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:23.657 [2024-07-25 00:52:46.145313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:23.657 [2024-07-25 00:52:46.145349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:26:23.657 [2024-07-25 00:52:46.145375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:23.657 [2024-07-25 00:52:46.147632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:23.657 [2024-07-25 00:52:46.147681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:23.657 BaseBdev1 00:26:23.657 00:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:23.657 00:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:23.914 BaseBdev2_malloc 00:26:23.914 00:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:26:24.172 true 00:26:24.172 00:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:24.172 [2024-07-25 00:52:46.742082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:24.172 [2024-07-25 00:52:46.742192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:24.172 [2024-07-25 00:52:46.742242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:24.172 [2024-07-25 00:52:46.742262] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:24.172 [2024-07-25 00:52:46.744461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:24.172 [2024-07-25 00:52:46.744508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:24.172 BaseBdev2 00:26:24.172 00:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:24.172 00:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:24.430 BaseBdev3_malloc 00:26:24.430 00:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:26:24.689 true 00:26:24.689 00:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:24.689 [2024-07-25 00:52:47.305337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:24.689 [2024-07-25 00:52:47.305433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:24.689 [2024-07-25 00:52:47.305472] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:26:24.689 [2024-07-25 00:52:47.305497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:24.689 [2024-07-25 00:52:47.307749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:24.689 [2024-07-25 00:52:47.307804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:24.689 BaseBdev3 00:26:24.689 00:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:24.689 00:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:24.948 BaseBdev4_malloc 00:26:25.206 00:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:26:25.206 true 00:26:25.206 00:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:26:25.464 [2024-07-25 00:52:47.956650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:26:25.464 [2024-07-25 00:52:47.956741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:25.464 [2024-07-25 00:52:47.956800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:26:25.464 [2024-07-25 00:52:47.956825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:25.464 [2024-07-25 00:52:47.959047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:25.464 [2024-07-25 00:52:47.959103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:25.464 BaseBdev4 00:26:25.464 00:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:26:25.722 [2024-07-25 00:52:48.136720] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:25.722 [2024-07-25 00:52:48.138681] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:25.722 [2024-07-25 00:52:48.138766] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:25.722 [2024-07-25 00:52:48.138819] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:25.722 [2024-07-25 00:52:48.139065] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a280 00:26:25.722 [2024-07-25 00:52:48.139081] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:26:25.722 [2024-07-25 00:52:48.139207] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:26:25.722 [2024-07-25 00:52:48.139589] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a280 00:26:25.722 [2024-07-25 00:52:48.139609] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a280 00:26:25.722 [2024-07-25 00:52:48.139730] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:25.722 00:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:26:25.722 00:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:25.722 00:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:25.722 00:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:25.722 00:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:25.722 00:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:25.722 00:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:25.722 00:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:25.722 00:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:25.722 00:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:25.722 00:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:25.722 00:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:25.980 00:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:25.980 "name": "raid_bdev1", 00:26:25.980 "uuid": "89e331d4-3824-4b1e-83bb-67223fb5cb88", 00:26:25.980 "strip_size_kb": 64, 00:26:25.980 "state": "online", 00:26:25.980 "raid_level": "concat", 00:26:25.980 "superblock": true, 00:26:25.980 "num_base_bdevs": 4, 00:26:25.981 "num_base_bdevs_discovered": 4, 00:26:25.981 "num_base_bdevs_operational": 4, 00:26:25.981 "base_bdevs_list": [ 00:26:25.981 { 00:26:25.981 "name": "BaseBdev1", 00:26:25.981 "uuid": "eb466f4b-abc6-566c-a4f8-4ef86ca8baea", 00:26:25.981 "is_configured": true, 00:26:25.981 "data_offset": 2048, 00:26:25.981 "data_size": 63488 00:26:25.981 }, 00:26:25.981 { 00:26:25.981 "name": "BaseBdev2", 
00:26:25.981 "uuid": "9216b0e8-5966-5ac8-afd0-fcfc81d6aaa7", 00:26:25.981 "is_configured": true, 00:26:25.981 "data_offset": 2048, 00:26:25.981 "data_size": 63488 00:26:25.981 }, 00:26:25.981 { 00:26:25.981 "name": "BaseBdev3", 00:26:25.981 "uuid": "34278cfe-f4c9-57f8-86f0-0762cc3da229", 00:26:25.981 "is_configured": true, 00:26:25.981 "data_offset": 2048, 00:26:25.981 "data_size": 63488 00:26:25.981 }, 00:26:25.981 { 00:26:25.981 "name": "BaseBdev4", 00:26:25.981 "uuid": "bf1eb532-e1be-52c3-9134-53fd18423c65", 00:26:25.981 "is_configured": true, 00:26:25.981 "data_offset": 2048, 00:26:25.981 "data_size": 63488 00:26:25.981 } 00:26:25.981 ] 00:26:25.981 }' 00:26:25.981 00:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:25.981 00:52:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.548 00:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:26:26.548 00:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:26:26.548 [2024-07-25 00:52:49.102113] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:26:27.482 00:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:26:27.741 00:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:26:27.741 00:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:26:27.741 00:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:26:27.741 00:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:26:27.741 00:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:27.741 00:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:27.741 00:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:27.741 00:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:27.741 00:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:27.741 00:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:27.741 00:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:27.741 00:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:27.741 00:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:27.741 00:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:27.741 00:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.000 00:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:28.000 "name": "raid_bdev1", 00:26:28.000 "uuid": "89e331d4-3824-4b1e-83bb-67223fb5cb88", 00:26:28.000 "strip_size_kb": 64, 00:26:28.000 "state": "online", 00:26:28.000 "raid_level": "concat", 00:26:28.000 "superblock": true, 
00:26:28.000 "num_base_bdevs": 4, 00:26:28.000 "num_base_bdevs_discovered": 4, 00:26:28.000 "num_base_bdevs_operational": 4, 00:26:28.000 "base_bdevs_list": [ 00:26:28.000 { 00:26:28.000 "name": "BaseBdev1", 00:26:28.000 "uuid": "eb466f4b-abc6-566c-a4f8-4ef86ca8baea", 00:26:28.000 "is_configured": true, 00:26:28.000 "data_offset": 2048, 00:26:28.000 "data_size": 63488 00:26:28.000 }, 00:26:28.000 { 00:26:28.000 "name": "BaseBdev2", 00:26:28.000 "uuid": "9216b0e8-5966-5ac8-afd0-fcfc81d6aaa7", 00:26:28.000 "is_configured": true, 00:26:28.000 "data_offset": 2048, 00:26:28.000 "data_size": 63488 00:26:28.000 }, 00:26:28.000 { 00:26:28.000 "name": "BaseBdev3", 00:26:28.000 "uuid": "34278cfe-f4c9-57f8-86f0-0762cc3da229", 00:26:28.000 "is_configured": true, 00:26:28.000 "data_offset": 2048, 00:26:28.000 "data_size": 63488 00:26:28.000 }, 00:26:28.000 { 00:26:28.000 "name": "BaseBdev4", 00:26:28.000 "uuid": "bf1eb532-e1be-52c3-9134-53fd18423c65", 00:26:28.000 "is_configured": true, 00:26:28.000 "data_offset": 2048, 00:26:28.000 "data_size": 63488 00:26:28.000 } 00:26:28.000 ] 00:26:28.000 }' 00:26:28.000 00:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:28.000 00:52:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:28.568 00:52:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:28.568 [2024-07-25 00:52:51.189594] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:28.568 [2024-07-25 00:52:51.189879] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:28.568 [2024-07-25 00:52:51.192384] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:28.568 [2024-07-25 00:52:51.192562] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:28.568 [2024-07-25 00:52:51.192636] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:28.568 [2024-07-25 00:52:51.192713] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state offline 00:26:28.568 0 00:26:28.568 00:52:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 140759 00:26:28.568 00:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 140759 ']' 00:26:28.568 00:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 140759 00:26:28.568 00:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:26:28.568 00:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:28.826 00:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 140759 00:26:28.826 00:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:28.826 00:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:28.826 00:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 140759' 00:26:28.826 killing process with pid 140759 00:26:28.826 00:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 140759 00:26:28.826 00:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 140759 00:26:28.826 
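The dump above is raid_bdev1 re-read after read failures were injected on EE_BaseBdev1_malloc: because the level under test is concat rather than raid1, the test still expects state online with all four base bdevs discovered and operational, and the injected errors are instead expected to show up as failed I/O in the bdevperf log that is grepped just below. A condensed restatement of those two checks, using the RPCs and jq filters that appear in this log (the ordering of the grep pipeline is reassembled here, not quoted verbatim):

  sock=/var/tmp/spdk-raid.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  jq -r '.state'                      <<< "$info"   # expected: online
  jq -r '.num_base_bdevs_discovered'  <<< "$info"   # expected: 4
  jq -r '.num_base_bdevs_operational' <<< "$info"   # expected: 4
  # After the raid is deleted and bdevperf is killed, the failure rate is read
  # back from the run log; concat has no redundancy (has_redundancy returns 1
  # above), so the test passes only if that rate is non-zero:
  log=/raidtest/tmp.lmoDYn0rGy
  fail_per_s=$(grep -v Job "$log" | grep raid_bdev1 | awk '{print $6}')
  [[ "$fail_per_s" != "0.00" ]]       # 0.48 failures/s in this run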
[2024-07-25 00:52:51.234544] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:29.085 [2024-07-25 00:52:51.544946] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:30.461 00:52:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:26:30.461 00:52:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.lmoDYn0rGy 00:26:30.461 00:52:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:26:30.461 ************************************ 00:26:30.461 END TEST raid_read_error_test 00:26:30.461 ************************************ 00:26:30.461 00:52:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:26:30.461 00:52:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:26:30.461 00:52:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:30.461 00:52:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:26:30.461 00:52:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:26:30.461 00:26:30.461 real 0m8.401s 00:26:30.461 user 0m12.528s 00:26:30.461 sys 0m1.062s 00:26:30.461 00:52:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:30.461 00:52:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.461 00:52:52 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:26:30.461 00:52:52 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:26:30.461 00:52:52 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:30.461 00:52:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:30.461 ************************************ 00:26:30.461 START TEST raid_write_error_test 00:26:30.461 ************************************ 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 write 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:30.461 
00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.sk13ghU8Gh 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=140963 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 140963 /var/tmp/spdk-raid.sock 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 140963 ']' 00:26:30.461 00:52:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:30.462 00:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:30.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:30.462 00:52:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:30.462 00:52:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:30.462 00:52:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:30.462 00:52:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.462 [2024-07-25 00:52:53.029888] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
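The write-error run starting here builds the same per-bdev stack the read test used: a 32 MiB, 512-byte-block malloc bdev, an error bdev wrapped around it (exposed as EE_<name>, the layer errors get injected into), and a passthru bdev on top that the raid consumes. Condensed from the RPC calls that follow, with only the final injection differing from the read test (write instead of read failures):

  sock=/var/tmp/spdk-raid.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for i in 1 2 3 4; do
      "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev${i}_malloc   # 65536 blocks of 512B
      "$rpc" -s "$sock" bdev_error_create BaseBdev${i}_malloc              # creates EE_BaseBdev${i}_malloc
      "$rpc" -s "$sock" bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
  done
  "$rpc" -s "$sock" bdev_raid_create -z 64 -r concat \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s        # -s writes the superblock
  "$rpc" -s "$sock" bdev_error_inject_error EE_BaseBdev1_malloc write failure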
00:26:30.462 [2024-07-25 00:52:53.030374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140963 ] 00:26:30.720 [2024-07-25 00:52:53.208469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.977 [2024-07-25 00:52:53.404991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.977 [2024-07-25 00:52:53.588220] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:31.541 00:52:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:31.541 00:52:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:26:31.541 00:52:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:31.541 00:52:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:31.541 BaseBdev1_malloc 00:26:31.541 00:52:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:26:31.799 true 00:26:31.799 00:52:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:32.057 [2024-07-25 00:52:54.515683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:32.057 [2024-07-25 00:52:54.515936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:32.057 [2024-07-25 00:52:54.516008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:26:32.057 [2024-07-25 00:52:54.516109] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:32.057 [2024-07-25 00:52:54.518489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:32.057 [2024-07-25 00:52:54.518649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:32.057 BaseBdev1 00:26:32.057 00:52:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:32.057 00:52:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:32.315 BaseBdev2_malloc 00:26:32.315 00:52:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:26:32.315 true 00:26:32.315 00:52:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:32.572 [2024-07-25 00:52:55.105996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:32.572 [2024-07-25 00:52:55.106385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:32.572 [2024-07-25 00:52:55.106463] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:32.572 [2024-07-25 
00:52:55.106714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:32.572 [2024-07-25 00:52:55.109044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:32.572 [2024-07-25 00:52:55.109221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:32.572 BaseBdev2 00:26:32.572 00:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:32.572 00:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:32.830 BaseBdev3_malloc 00:26:32.830 00:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:26:33.088 true 00:26:33.088 00:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:33.346 [2024-07-25 00:52:55.747897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:33.346 [2024-07-25 00:52:55.748141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:33.346 [2024-07-25 00:52:55.748212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:26:33.346 [2024-07-25 00:52:55.748313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:33.346 [2024-07-25 00:52:55.750631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:33.346 [2024-07-25 00:52:55.750793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:33.346 BaseBdev3 00:26:33.346 00:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:33.346 00:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:33.346 BaseBdev4_malloc 00:26:33.346 00:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:26:33.604 true 00:26:33.604 00:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:26:33.862 [2024-07-25 00:52:56.397101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:26:33.862 [2024-07-25 00:52:56.397444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:33.862 [2024-07-25 00:52:56.397534] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:26:33.862 [2024-07-25 00:52:56.397639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:33.862 [2024-07-25 00:52:56.399905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:33.862 [2024-07-25 00:52:56.400063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:33.862 BaseBdev4 00:26:33.863 00:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:26:34.121 [2024-07-25 00:52:56.633192] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:34.121 [2024-07-25 00:52:56.635361] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:34.121 [2024-07-25 00:52:56.635575] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:34.121 [2024-07-25 00:52:56.635663] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:34.121 [2024-07-25 00:52:56.636059] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a280 00:26:34.121 [2024-07-25 00:52:56.636101] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:26:34.121 [2024-07-25 00:52:56.636316] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:26:34.121 [2024-07-25 00:52:56.636822] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a280 00:26:34.121 [2024-07-25 00:52:56.636933] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a280 00:26:34.121 [2024-07-25 00:52:56.637185] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:34.121 00:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:26:34.121 00:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:34.121 00:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:34.121 00:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:34.121 00:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:34.121 00:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:34.121 00:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:34.121 00:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:34.121 00:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:34.121 00:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:34.121 00:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:34.121 00:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.379 00:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:34.379 "name": "raid_bdev1", 00:26:34.379 "uuid": "48a5b938-def5-4eb5-9ae6-de446447bf5f", 00:26:34.379 "strip_size_kb": 64, 00:26:34.379 "state": "online", 00:26:34.379 "raid_level": "concat", 00:26:34.379 "superblock": true, 00:26:34.379 "num_base_bdevs": 4, 00:26:34.379 "num_base_bdevs_discovered": 4, 00:26:34.379 "num_base_bdevs_operational": 4, 00:26:34.379 "base_bdevs_list": [ 00:26:34.379 { 00:26:34.379 "name": "BaseBdev1", 00:26:34.379 "uuid": "9565086d-2ec0-5f79-bfc6-940cbaaa192c", 00:26:34.379 "is_configured": true, 00:26:34.379 "data_offset": 2048, 00:26:34.379 "data_size": 63488 00:26:34.379 }, 00:26:34.379 { 
00:26:34.379 "name": "BaseBdev2", 00:26:34.379 "uuid": "1c2648f3-47a0-537c-9a89-0ada676fb5df", 00:26:34.379 "is_configured": true, 00:26:34.379 "data_offset": 2048, 00:26:34.379 "data_size": 63488 00:26:34.379 }, 00:26:34.379 { 00:26:34.379 "name": "BaseBdev3", 00:26:34.379 "uuid": "ead9211d-8555-56a8-852c-cb5eed746682", 00:26:34.379 "is_configured": true, 00:26:34.379 "data_offset": 2048, 00:26:34.379 "data_size": 63488 00:26:34.379 }, 00:26:34.379 { 00:26:34.379 "name": "BaseBdev4", 00:26:34.379 "uuid": "58334996-ea7f-535a-b1e7-9b5eab5c818c", 00:26:34.379 "is_configured": true, 00:26:34.379 "data_offset": 2048, 00:26:34.379 "data_size": 63488 00:26:34.379 } 00:26:34.379 ] 00:26:34.379 }' 00:26:34.379 00:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:34.379 00:52:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.946 00:52:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:26:34.946 00:52:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:26:34.946 [2024-07-25 00:52:57.486730] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:26:35.890 00:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:26:36.148 00:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:26:36.148 00:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:26:36.148 00:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:26:36.148 00:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:26:36.148 00:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:36.148 00:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:36.148 00:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:36.148 00:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:36.148 00:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:36.148 00:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:36.148 00:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:36.148 00:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:36.148 00:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:36.148 00:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:36.148 00:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:36.406 00:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:36.406 "name": "raid_bdev1", 00:26:36.406 "uuid": "48a5b938-def5-4eb5-9ae6-de446447bf5f", 00:26:36.406 "strip_size_kb": 64, 00:26:36.406 "state": "online", 00:26:36.406 
"raid_level": "concat", 00:26:36.406 "superblock": true, 00:26:36.406 "num_base_bdevs": 4, 00:26:36.406 "num_base_bdevs_discovered": 4, 00:26:36.406 "num_base_bdevs_operational": 4, 00:26:36.406 "base_bdevs_list": [ 00:26:36.406 { 00:26:36.406 "name": "BaseBdev1", 00:26:36.406 "uuid": "9565086d-2ec0-5f79-bfc6-940cbaaa192c", 00:26:36.406 "is_configured": true, 00:26:36.406 "data_offset": 2048, 00:26:36.406 "data_size": 63488 00:26:36.406 }, 00:26:36.406 { 00:26:36.406 "name": "BaseBdev2", 00:26:36.406 "uuid": "1c2648f3-47a0-537c-9a89-0ada676fb5df", 00:26:36.406 "is_configured": true, 00:26:36.406 "data_offset": 2048, 00:26:36.406 "data_size": 63488 00:26:36.406 }, 00:26:36.406 { 00:26:36.406 "name": "BaseBdev3", 00:26:36.406 "uuid": "ead9211d-8555-56a8-852c-cb5eed746682", 00:26:36.406 "is_configured": true, 00:26:36.406 "data_offset": 2048, 00:26:36.406 "data_size": 63488 00:26:36.406 }, 00:26:36.406 { 00:26:36.406 "name": "BaseBdev4", 00:26:36.406 "uuid": "58334996-ea7f-535a-b1e7-9b5eab5c818c", 00:26:36.406 "is_configured": true, 00:26:36.406 "data_offset": 2048, 00:26:36.406 "data_size": 63488 00:26:36.406 } 00:26:36.406 ] 00:26:36.406 }' 00:26:36.406 00:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:36.406 00:52:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.973 00:52:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:37.232 [2024-07-25 00:52:59.743635] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:37.232 [2024-07-25 00:52:59.743932] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:37.232 [2024-07-25 00:52:59.746722] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:37.232 [2024-07-25 00:52:59.746881] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:37.232 [2024-07-25 00:52:59.746951] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:37.232 [2024-07-25 00:52:59.747032] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state offline 00:26:37.232 0 00:26:37.232 00:52:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 140963 00:26:37.232 00:52:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 140963 ']' 00:26:37.232 00:52:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 140963 00:26:37.232 00:52:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:26:37.232 00:52:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:37.232 00:52:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 140963 00:26:37.232 00:52:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:37.232 00:52:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:37.232 00:52:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 140963' 00:26:37.232 killing process with pid 140963 00:26:37.232 00:52:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 140963 00:26:37.232 [2024-07-25 00:52:59.802729] 
bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:37.232 00:52:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 140963 00:26:37.491 [2024-07-25 00:53:00.119985] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:38.866 00:53:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.sk13ghU8Gh 00:26:38.866 00:53:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:26:38.866 00:53:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:26:38.866 00:53:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.44 00:26:38.866 00:53:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:26:38.866 00:53:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:38.866 00:53:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:26:38.866 00:53:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.44 != \0\.\0\0 ]] 00:26:38.866 00:26:38.866 real 0m8.521s 00:26:38.866 user 0m12.553s 00:26:38.866 sys 0m1.234s 00:26:38.866 00:53:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:38.866 ************************************ 00:26:38.866 END TEST raid_write_error_test 00:26:38.866 ************************************ 00:26:38.866 00:53:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.866 00:53:01 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:26:38.866 00:53:01 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:26:38.866 00:53:01 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:26:38.866 00:53:01 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:38.866 00:53:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:39.125 ************************************ 00:26:39.125 START TEST raid_state_function_test 00:26:39.125 ************************************ 00:26:39.125 00:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 false 00:26:39.125 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:26:39.125 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:26:39.125 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:26:39.125 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:26:39.125 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:26:39.125 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:39.125 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:26:39.125 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:39.125 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:39.125 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:26:39.125 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:39.125 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= 
num_base_bdevs )) 00:26:39.125 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:26:39.125 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:39.125 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:39.125 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:26:39.125 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:39.126 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:39.126 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:39.126 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:26:39.126 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:26:39.126 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:26:39.126 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:26:39.126 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:26:39.126 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:26:39.126 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:26:39.126 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:26:39.126 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:26:39.126 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=141171 00:26:39.126 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 141171' 00:26:39.126 Process raid pid: 141171 00:26:39.126 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:39.126 00:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 141171 /var/tmp/spdk-raid.sock 00:26:39.126 00:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 141171 ']' 00:26:39.126 00:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:39.126 00:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:39.126 00:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:39.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:39.126 00:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:39.126 00:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.126 [2024-07-25 00:53:01.622937] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
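The bdev_svc launch traced here follows the standard pattern in these raid tests: start a bare SPDK app that only hosts the bdev layer, point it at a private RPC socket, and wait for that socket before issuing any rpc.py calls. A rough sketch of the same sequence, using the paths and helper names visible in the trace (waitforlisten and killprocess are the common/autotest_common.sh helpers seen in the log; this is an illustration, not the verbatim test code):

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock   # block until the RPC socket accepts connections
  # ... drive the target via scripts/rpc.py -s /var/tmp/spdk-raid.sock ...
  killprocess "$raid_pid"                             # tear the app down at the end of the test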
00:26:39.126 [2024-07-25 00:53:01.623412] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:39.384 [2024-07-25 00:53:01.804546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.384 [2024-07-25 00:53:02.015220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.642 [2024-07-25 00:53:02.213200] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:39.900 00:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:39.900 00:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:26:39.900 00:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:40.158 [2024-07-25 00:53:02.746350] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:40.158 [2024-07-25 00:53:02.746653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:40.158 [2024-07-25 00:53:02.746742] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:40.158 [2024-07-25 00:53:02.746800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:40.158 [2024-07-25 00:53:02.746830] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:40.158 [2024-07-25 00:53:02.746868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:40.158 [2024-07-25 00:53:02.746951] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:40.158 [2024-07-25 00:53:02.747003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:40.158 00:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:40.158 00:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:40.158 00:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:40.158 00:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:40.158 00:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:40.158 00:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:40.158 00:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:40.158 00:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:40.158 00:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:40.158 00:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:40.158 00:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:40.158 00:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:26:40.416 00:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:40.416 "name": "Existed_Raid", 00:26:40.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:40.416 "strip_size_kb": 0, 00:26:40.416 "state": "configuring", 00:26:40.416 "raid_level": "raid1", 00:26:40.416 "superblock": false, 00:26:40.416 "num_base_bdevs": 4, 00:26:40.416 "num_base_bdevs_discovered": 0, 00:26:40.416 "num_base_bdevs_operational": 4, 00:26:40.416 "base_bdevs_list": [ 00:26:40.416 { 00:26:40.416 "name": "BaseBdev1", 00:26:40.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:40.416 "is_configured": false, 00:26:40.416 "data_offset": 0, 00:26:40.416 "data_size": 0 00:26:40.416 }, 00:26:40.416 { 00:26:40.416 "name": "BaseBdev2", 00:26:40.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:40.416 "is_configured": false, 00:26:40.416 "data_offset": 0, 00:26:40.416 "data_size": 0 00:26:40.416 }, 00:26:40.416 { 00:26:40.416 "name": "BaseBdev3", 00:26:40.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:40.416 "is_configured": false, 00:26:40.416 "data_offset": 0, 00:26:40.416 "data_size": 0 00:26:40.416 }, 00:26:40.416 { 00:26:40.416 "name": "BaseBdev4", 00:26:40.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:40.416 "is_configured": false, 00:26:40.416 "data_offset": 0, 00:26:40.416 "data_size": 0 00:26:40.416 } 00:26:40.416 ] 00:26:40.416 }' 00:26:40.416 00:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:40.416 00:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:40.982 00:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:41.241 [2024-07-25 00:53:03.826909] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:41.241 [2024-07-25 00:53:03.827148] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:26:41.241 00:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:41.501 [2024-07-25 00:53:04.070931] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:41.501 [2024-07-25 00:53:04.071213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:41.501 [2024-07-25 00:53:04.071298] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:41.501 [2024-07-25 00:53:04.071377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:41.501 [2024-07-25 00:53:04.071406] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:41.501 [2024-07-25 00:53:04.071512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:41.501 [2024-07-25 00:53:04.071543] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:41.501 [2024-07-25 00:53:04.071584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:41.501 00:53:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:41.759 [2024-07-25 00:53:04.298367] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:41.759 BaseBdev1 00:26:41.759 00:53:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:26:41.759 00:53:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:26:41.759 00:53:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:41.759 00:53:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:41.759 00:53:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:41.759 00:53:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:41.759 00:53:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:42.018 00:53:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:42.276 [ 00:26:42.276 { 00:26:42.276 "name": "BaseBdev1", 00:26:42.276 "aliases": [ 00:26:42.276 "ee1453a1-62b7-4d4c-a088-7d7984d02254" 00:26:42.276 ], 00:26:42.276 "product_name": "Malloc disk", 00:26:42.276 "block_size": 512, 00:26:42.276 "num_blocks": 65536, 00:26:42.276 "uuid": "ee1453a1-62b7-4d4c-a088-7d7984d02254", 00:26:42.276 "assigned_rate_limits": { 00:26:42.276 "rw_ios_per_sec": 0, 00:26:42.276 "rw_mbytes_per_sec": 0, 00:26:42.276 "r_mbytes_per_sec": 0, 00:26:42.276 "w_mbytes_per_sec": 0 00:26:42.276 }, 00:26:42.276 "claimed": true, 00:26:42.276 "claim_type": "exclusive_write", 00:26:42.276 "zoned": false, 00:26:42.276 "supported_io_types": { 00:26:42.276 "read": true, 00:26:42.276 "write": true, 00:26:42.276 "unmap": true, 00:26:42.276 "flush": true, 00:26:42.276 "reset": true, 00:26:42.276 "nvme_admin": false, 00:26:42.276 "nvme_io": false, 00:26:42.276 "nvme_io_md": false, 00:26:42.276 "write_zeroes": true, 00:26:42.276 "zcopy": true, 00:26:42.276 "get_zone_info": false, 00:26:42.276 "zone_management": false, 00:26:42.276 "zone_append": false, 00:26:42.276 "compare": false, 00:26:42.276 "compare_and_write": false, 00:26:42.276 "abort": true, 00:26:42.276 "seek_hole": false, 00:26:42.276 "seek_data": false, 00:26:42.276 "copy": true, 00:26:42.276 "nvme_iov_md": false 00:26:42.276 }, 00:26:42.276 "memory_domains": [ 00:26:42.276 { 00:26:42.276 "dma_device_id": "system", 00:26:42.276 "dma_device_type": 1 00:26:42.276 }, 00:26:42.276 { 00:26:42.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:42.276 "dma_device_type": 2 00:26:42.276 } 00:26:42.276 ], 00:26:42.276 "driver_specific": {} 00:26:42.276 } 00:26:42.276 ] 00:26:42.276 00:53:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:42.276 00:53:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:42.276 00:53:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:42.276 00:53:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:42.276 00:53:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:42.276 00:53:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:42.276 00:53:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:42.276 00:53:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:42.276 00:53:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:42.276 00:53:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:42.276 00:53:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:42.276 00:53:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:42.276 00:53:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:42.535 00:53:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:42.535 "name": "Existed_Raid", 00:26:42.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:42.535 "strip_size_kb": 0, 00:26:42.535 "state": "configuring", 00:26:42.535 "raid_level": "raid1", 00:26:42.535 "superblock": false, 00:26:42.535 "num_base_bdevs": 4, 00:26:42.535 "num_base_bdevs_discovered": 1, 00:26:42.535 "num_base_bdevs_operational": 4, 00:26:42.535 "base_bdevs_list": [ 00:26:42.535 { 00:26:42.535 "name": "BaseBdev1", 00:26:42.535 "uuid": "ee1453a1-62b7-4d4c-a088-7d7984d02254", 00:26:42.535 "is_configured": true, 00:26:42.535 "data_offset": 0, 00:26:42.535 "data_size": 65536 00:26:42.535 }, 00:26:42.535 { 00:26:42.535 "name": "BaseBdev2", 00:26:42.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:42.535 "is_configured": false, 00:26:42.535 "data_offset": 0, 00:26:42.535 "data_size": 0 00:26:42.535 }, 00:26:42.535 { 00:26:42.535 "name": "BaseBdev3", 00:26:42.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:42.535 "is_configured": false, 00:26:42.535 "data_offset": 0, 00:26:42.535 "data_size": 0 00:26:42.535 }, 00:26:42.535 { 00:26:42.535 "name": "BaseBdev4", 00:26:42.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:42.535 "is_configured": false, 00:26:42.535 "data_offset": 0, 00:26:42.535 "data_size": 0 00:26:42.535 } 00:26:42.535 ] 00:26:42.535 }' 00:26:42.535 00:53:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:42.535 00:53:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:43.102 00:53:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:43.361 [2024-07-25 00:53:05.827025] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:43.361 [2024-07-25 00:53:05.827266] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:26:43.361 00:53:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:43.620 [2024-07-25 00:53:06.103084] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:43.620 [2024-07-25 00:53:06.105207] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:43.620 
[2024-07-25 00:53:06.105379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:43.620 [2024-07-25 00:53:06.105476] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:43.620 [2024-07-25 00:53:06.105533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:43.620 [2024-07-25 00:53:06.105667] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:43.620 [2024-07-25 00:53:06.105711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:43.620 00:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:26:43.620 00:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:43.620 00:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:43.620 00:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:43.620 00:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:43.620 00:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:43.620 00:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:43.620 00:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:43.620 00:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:43.620 00:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:43.620 00:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:43.620 00:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:43.620 00:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:43.620 00:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:43.881 00:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:43.881 "name": "Existed_Raid", 00:26:43.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:43.881 "strip_size_kb": 0, 00:26:43.881 "state": "configuring", 00:26:43.881 "raid_level": "raid1", 00:26:43.881 "superblock": false, 00:26:43.881 "num_base_bdevs": 4, 00:26:43.881 "num_base_bdevs_discovered": 1, 00:26:43.881 "num_base_bdevs_operational": 4, 00:26:43.881 "base_bdevs_list": [ 00:26:43.881 { 00:26:43.881 "name": "BaseBdev1", 00:26:43.881 "uuid": "ee1453a1-62b7-4d4c-a088-7d7984d02254", 00:26:43.881 "is_configured": true, 00:26:43.881 "data_offset": 0, 00:26:43.881 "data_size": 65536 00:26:43.881 }, 00:26:43.881 { 00:26:43.881 "name": "BaseBdev2", 00:26:43.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:43.881 "is_configured": false, 00:26:43.881 "data_offset": 0, 00:26:43.881 "data_size": 0 00:26:43.881 }, 00:26:43.881 { 00:26:43.881 "name": "BaseBdev3", 00:26:43.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:43.881 "is_configured": false, 00:26:43.881 "data_offset": 0, 00:26:43.881 "data_size": 0 00:26:43.881 }, 00:26:43.881 { 00:26:43.881 "name": "BaseBdev4", 
00:26:43.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:43.881 "is_configured": false, 00:26:43.881 "data_offset": 0, 00:26:43.881 "data_size": 0 00:26:43.881 } 00:26:43.881 ] 00:26:43.881 }' 00:26:43.881 00:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:43.881 00:53:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:44.449 00:53:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:44.449 [2024-07-25 00:53:07.053970] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:44.449 BaseBdev2 00:26:44.449 00:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:26:44.449 00:53:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:26:44.449 00:53:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:44.449 00:53:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:44.449 00:53:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:44.449 00:53:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:44.449 00:53:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:44.708 00:53:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:44.967 [ 00:26:44.967 { 00:26:44.967 "name": "BaseBdev2", 00:26:44.967 "aliases": [ 00:26:44.967 "16631322-cd24-4f18-b5d1-ed1180f83a70" 00:26:44.967 ], 00:26:44.967 "product_name": "Malloc disk", 00:26:44.967 "block_size": 512, 00:26:44.967 "num_blocks": 65536, 00:26:44.967 "uuid": "16631322-cd24-4f18-b5d1-ed1180f83a70", 00:26:44.967 "assigned_rate_limits": { 00:26:44.968 "rw_ios_per_sec": 0, 00:26:44.968 "rw_mbytes_per_sec": 0, 00:26:44.968 "r_mbytes_per_sec": 0, 00:26:44.968 "w_mbytes_per_sec": 0 00:26:44.968 }, 00:26:44.968 "claimed": true, 00:26:44.968 "claim_type": "exclusive_write", 00:26:44.968 "zoned": false, 00:26:44.968 "supported_io_types": { 00:26:44.968 "read": true, 00:26:44.968 "write": true, 00:26:44.968 "unmap": true, 00:26:44.968 "flush": true, 00:26:44.968 "reset": true, 00:26:44.968 "nvme_admin": false, 00:26:44.968 "nvme_io": false, 00:26:44.968 "nvme_io_md": false, 00:26:44.968 "write_zeroes": true, 00:26:44.968 "zcopy": true, 00:26:44.968 "get_zone_info": false, 00:26:44.968 "zone_management": false, 00:26:44.968 "zone_append": false, 00:26:44.968 "compare": false, 00:26:44.968 "compare_and_write": false, 00:26:44.968 "abort": true, 00:26:44.968 "seek_hole": false, 00:26:44.968 "seek_data": false, 00:26:44.968 "copy": true, 00:26:44.968 "nvme_iov_md": false 00:26:44.968 }, 00:26:44.968 "memory_domains": [ 00:26:44.968 { 00:26:44.968 "dma_device_id": "system", 00:26:44.968 "dma_device_type": 1 00:26:44.968 }, 00:26:44.968 { 00:26:44.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:44.968 "dma_device_type": 2 00:26:44.968 } 00:26:44.968 ], 00:26:44.968 "driver_specific": {} 00:26:44.968 } 00:26:44.968 ] 00:26:44.968 00:53:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@905 -- # return 0 00:26:44.968 00:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:44.968 00:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:44.968 00:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:44.968 00:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:44.968 00:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:44.968 00:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:44.968 00:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:44.968 00:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:44.968 00:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:44.968 00:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:44.968 00:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:44.968 00:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:44.968 00:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:44.968 00:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:45.227 00:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:45.227 "name": "Existed_Raid", 00:26:45.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:45.227 "strip_size_kb": 0, 00:26:45.227 "state": "configuring", 00:26:45.227 "raid_level": "raid1", 00:26:45.227 "superblock": false, 00:26:45.227 "num_base_bdevs": 4, 00:26:45.227 "num_base_bdevs_discovered": 2, 00:26:45.227 "num_base_bdevs_operational": 4, 00:26:45.227 "base_bdevs_list": [ 00:26:45.227 { 00:26:45.227 "name": "BaseBdev1", 00:26:45.227 "uuid": "ee1453a1-62b7-4d4c-a088-7d7984d02254", 00:26:45.227 "is_configured": true, 00:26:45.227 "data_offset": 0, 00:26:45.227 "data_size": 65536 00:26:45.227 }, 00:26:45.227 { 00:26:45.227 "name": "BaseBdev2", 00:26:45.227 "uuid": "16631322-cd24-4f18-b5d1-ed1180f83a70", 00:26:45.227 "is_configured": true, 00:26:45.227 "data_offset": 0, 00:26:45.227 "data_size": 65536 00:26:45.227 }, 00:26:45.227 { 00:26:45.227 "name": "BaseBdev3", 00:26:45.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:45.227 "is_configured": false, 00:26:45.227 "data_offset": 0, 00:26:45.227 "data_size": 0 00:26:45.227 }, 00:26:45.227 { 00:26:45.227 "name": "BaseBdev4", 00:26:45.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:45.227 "is_configured": false, 00:26:45.227 "data_offset": 0, 00:26:45.227 "data_size": 0 00:26:45.227 } 00:26:45.227 ] 00:26:45.227 }' 00:26:45.227 00:53:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:45.227 00:53:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:45.795 00:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 
00:26:46.055 [2024-07-25 00:53:08.627150] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:46.055 BaseBdev3 00:26:46.055 00:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:26:46.055 00:53:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:26:46.055 00:53:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:46.055 00:53:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:46.055 00:53:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:46.055 00:53:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:46.055 00:53:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:46.316 00:53:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:46.576 [ 00:26:46.576 { 00:26:46.576 "name": "BaseBdev3", 00:26:46.576 "aliases": [ 00:26:46.576 "c4187f07-3810-47db-b532-b187e51b3602" 00:26:46.576 ], 00:26:46.576 "product_name": "Malloc disk", 00:26:46.576 "block_size": 512, 00:26:46.576 "num_blocks": 65536, 00:26:46.576 "uuid": "c4187f07-3810-47db-b532-b187e51b3602", 00:26:46.576 "assigned_rate_limits": { 00:26:46.576 "rw_ios_per_sec": 0, 00:26:46.576 "rw_mbytes_per_sec": 0, 00:26:46.576 "r_mbytes_per_sec": 0, 00:26:46.576 "w_mbytes_per_sec": 0 00:26:46.576 }, 00:26:46.576 "claimed": true, 00:26:46.576 "claim_type": "exclusive_write", 00:26:46.576 "zoned": false, 00:26:46.576 "supported_io_types": { 00:26:46.576 "read": true, 00:26:46.576 "write": true, 00:26:46.576 "unmap": true, 00:26:46.576 "flush": true, 00:26:46.576 "reset": true, 00:26:46.576 "nvme_admin": false, 00:26:46.576 "nvme_io": false, 00:26:46.576 "nvme_io_md": false, 00:26:46.576 "write_zeroes": true, 00:26:46.576 "zcopy": true, 00:26:46.576 "get_zone_info": false, 00:26:46.576 "zone_management": false, 00:26:46.576 "zone_append": false, 00:26:46.576 "compare": false, 00:26:46.576 "compare_and_write": false, 00:26:46.576 "abort": true, 00:26:46.576 "seek_hole": false, 00:26:46.576 "seek_data": false, 00:26:46.576 "copy": true, 00:26:46.576 "nvme_iov_md": false 00:26:46.576 }, 00:26:46.576 "memory_domains": [ 00:26:46.576 { 00:26:46.576 "dma_device_id": "system", 00:26:46.576 "dma_device_type": 1 00:26:46.576 }, 00:26:46.576 { 00:26:46.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:46.576 "dma_device_type": 2 00:26:46.576 } 00:26:46.576 ], 00:26:46.576 "driver_specific": {} 00:26:46.576 } 00:26:46.576 ] 00:26:46.576 00:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:46.576 00:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:46.576 00:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:46.576 00:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:46.576 00:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:46.576 00:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:26:46.576 00:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:46.576 00:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:46.576 00:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:46.576 00:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:46.576 00:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:46.576 00:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:46.576 00:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:46.577 00:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:46.577 00:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:46.836 00:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:46.836 "name": "Existed_Raid", 00:26:46.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:46.836 "strip_size_kb": 0, 00:26:46.836 "state": "configuring", 00:26:46.836 "raid_level": "raid1", 00:26:46.836 "superblock": false, 00:26:46.836 "num_base_bdevs": 4, 00:26:46.836 "num_base_bdevs_discovered": 3, 00:26:46.836 "num_base_bdevs_operational": 4, 00:26:46.836 "base_bdevs_list": [ 00:26:46.836 { 00:26:46.836 "name": "BaseBdev1", 00:26:46.836 "uuid": "ee1453a1-62b7-4d4c-a088-7d7984d02254", 00:26:46.836 "is_configured": true, 00:26:46.836 "data_offset": 0, 00:26:46.836 "data_size": 65536 00:26:46.836 }, 00:26:46.836 { 00:26:46.836 "name": "BaseBdev2", 00:26:46.836 "uuid": "16631322-cd24-4f18-b5d1-ed1180f83a70", 00:26:46.836 "is_configured": true, 00:26:46.836 "data_offset": 0, 00:26:46.836 "data_size": 65536 00:26:46.836 }, 00:26:46.836 { 00:26:46.836 "name": "BaseBdev3", 00:26:46.836 "uuid": "c4187f07-3810-47db-b532-b187e51b3602", 00:26:46.836 "is_configured": true, 00:26:46.836 "data_offset": 0, 00:26:46.836 "data_size": 65536 00:26:46.836 }, 00:26:46.836 { 00:26:46.836 "name": "BaseBdev4", 00:26:46.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:46.836 "is_configured": false, 00:26:46.836 "data_offset": 0, 00:26:46.836 "data_size": 0 00:26:46.836 } 00:26:46.836 ] 00:26:46.836 }' 00:26:46.836 00:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:46.836 00:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.404 00:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:47.404 [2024-07-25 00:53:09.975441] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:47.404 [2024-07-25 00:53:09.975751] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:26:47.404 [2024-07-25 00:53:09.975794] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:26:47.404 [2024-07-25 00:53:09.976019] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:26:47.404 [2024-07-25 00:53:09.976445] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x616000007280 00:26:47.404 [2024-07-25 00:53:09.976555] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:26:47.404 [2024-07-25 00:53:09.976869] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:47.404 BaseBdev4 00:26:47.404 00:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:26:47.404 00:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:26:47.404 00:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:47.404 00:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:47.404 00:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:47.404 00:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:47.404 00:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:47.662 00:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:47.921 [ 00:26:47.921 { 00:26:47.921 "name": "BaseBdev4", 00:26:47.921 "aliases": [ 00:26:47.921 "aec4fd3d-0bac-4af6-aa20-6299bcb4eed2" 00:26:47.921 ], 00:26:47.921 "product_name": "Malloc disk", 00:26:47.921 "block_size": 512, 00:26:47.921 "num_blocks": 65536, 00:26:47.921 "uuid": "aec4fd3d-0bac-4af6-aa20-6299bcb4eed2", 00:26:47.921 "assigned_rate_limits": { 00:26:47.921 "rw_ios_per_sec": 0, 00:26:47.921 "rw_mbytes_per_sec": 0, 00:26:47.921 "r_mbytes_per_sec": 0, 00:26:47.921 "w_mbytes_per_sec": 0 00:26:47.921 }, 00:26:47.921 "claimed": true, 00:26:47.921 "claim_type": "exclusive_write", 00:26:47.921 "zoned": false, 00:26:47.921 "supported_io_types": { 00:26:47.921 "read": true, 00:26:47.921 "write": true, 00:26:47.921 "unmap": true, 00:26:47.921 "flush": true, 00:26:47.921 "reset": true, 00:26:47.921 "nvme_admin": false, 00:26:47.921 "nvme_io": false, 00:26:47.921 "nvme_io_md": false, 00:26:47.921 "write_zeroes": true, 00:26:47.921 "zcopy": true, 00:26:47.921 "get_zone_info": false, 00:26:47.921 "zone_management": false, 00:26:47.921 "zone_append": false, 00:26:47.921 "compare": false, 00:26:47.921 "compare_and_write": false, 00:26:47.921 "abort": true, 00:26:47.921 "seek_hole": false, 00:26:47.921 "seek_data": false, 00:26:47.921 "copy": true, 00:26:47.921 "nvme_iov_md": false 00:26:47.921 }, 00:26:47.921 "memory_domains": [ 00:26:47.921 { 00:26:47.921 "dma_device_id": "system", 00:26:47.921 "dma_device_type": 1 00:26:47.921 }, 00:26:47.921 { 00:26:47.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:47.921 "dma_device_type": 2 00:26:47.921 } 00:26:47.921 ], 00:26:47.921 "driver_specific": {} 00:26:47.921 } 00:26:47.921 ] 00:26:47.921 00:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:47.921 00:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:47.921 00:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:47.921 00:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:26:47.921 00:53:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:47.921 00:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:47.921 00:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:47.921 00:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:47.921 00:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:47.921 00:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:47.921 00:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:47.921 00:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:47.921 00:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:47.921 00:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:47.921 00:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:48.180 00:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:48.180 "name": "Existed_Raid", 00:26:48.180 "uuid": "d5c45303-c10d-42c1-98f9-06ad6533a2ad", 00:26:48.180 "strip_size_kb": 0, 00:26:48.180 "state": "online", 00:26:48.180 "raid_level": "raid1", 00:26:48.180 "superblock": false, 00:26:48.180 "num_base_bdevs": 4, 00:26:48.180 "num_base_bdevs_discovered": 4, 00:26:48.180 "num_base_bdevs_operational": 4, 00:26:48.180 "base_bdevs_list": [ 00:26:48.180 { 00:26:48.180 "name": "BaseBdev1", 00:26:48.180 "uuid": "ee1453a1-62b7-4d4c-a088-7d7984d02254", 00:26:48.180 "is_configured": true, 00:26:48.180 "data_offset": 0, 00:26:48.180 "data_size": 65536 00:26:48.180 }, 00:26:48.180 { 00:26:48.180 "name": "BaseBdev2", 00:26:48.180 "uuid": "16631322-cd24-4f18-b5d1-ed1180f83a70", 00:26:48.180 "is_configured": true, 00:26:48.180 "data_offset": 0, 00:26:48.180 "data_size": 65536 00:26:48.180 }, 00:26:48.180 { 00:26:48.180 "name": "BaseBdev3", 00:26:48.180 "uuid": "c4187f07-3810-47db-b532-b187e51b3602", 00:26:48.180 "is_configured": true, 00:26:48.180 "data_offset": 0, 00:26:48.180 "data_size": 65536 00:26:48.180 }, 00:26:48.180 { 00:26:48.180 "name": "BaseBdev4", 00:26:48.180 "uuid": "aec4fd3d-0bac-4af6-aa20-6299bcb4eed2", 00:26:48.180 "is_configured": true, 00:26:48.180 "data_offset": 0, 00:26:48.180 "data_size": 65536 00:26:48.180 } 00:26:48.180 ] 00:26:48.180 }' 00:26:48.180 00:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:48.180 00:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:48.748 00:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:26:48.748 00:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:48.748 00:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:48.748 00:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:48.748 00:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:48.748 00:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # 
local name 00:26:48.748 00:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:48.748 00:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:49.007 [2024-07-25 00:53:11.532363] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:49.007 00:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:49.007 "name": "Existed_Raid", 00:26:49.007 "aliases": [ 00:26:49.007 "d5c45303-c10d-42c1-98f9-06ad6533a2ad" 00:26:49.007 ], 00:26:49.007 "product_name": "Raid Volume", 00:26:49.007 "block_size": 512, 00:26:49.007 "num_blocks": 65536, 00:26:49.007 "uuid": "d5c45303-c10d-42c1-98f9-06ad6533a2ad", 00:26:49.007 "assigned_rate_limits": { 00:26:49.007 "rw_ios_per_sec": 0, 00:26:49.007 "rw_mbytes_per_sec": 0, 00:26:49.007 "r_mbytes_per_sec": 0, 00:26:49.007 "w_mbytes_per_sec": 0 00:26:49.007 }, 00:26:49.007 "claimed": false, 00:26:49.007 "zoned": false, 00:26:49.007 "supported_io_types": { 00:26:49.007 "read": true, 00:26:49.007 "write": true, 00:26:49.007 "unmap": false, 00:26:49.007 "flush": false, 00:26:49.007 "reset": true, 00:26:49.007 "nvme_admin": false, 00:26:49.007 "nvme_io": false, 00:26:49.007 "nvme_io_md": false, 00:26:49.007 "write_zeroes": true, 00:26:49.007 "zcopy": false, 00:26:49.007 "get_zone_info": false, 00:26:49.007 "zone_management": false, 00:26:49.008 "zone_append": false, 00:26:49.008 "compare": false, 00:26:49.008 "compare_and_write": false, 00:26:49.008 "abort": false, 00:26:49.008 "seek_hole": false, 00:26:49.008 "seek_data": false, 00:26:49.008 "copy": false, 00:26:49.008 "nvme_iov_md": false 00:26:49.008 }, 00:26:49.008 "memory_domains": [ 00:26:49.008 { 00:26:49.008 "dma_device_id": "system", 00:26:49.008 "dma_device_type": 1 00:26:49.008 }, 00:26:49.008 { 00:26:49.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:49.008 "dma_device_type": 2 00:26:49.008 }, 00:26:49.008 { 00:26:49.008 "dma_device_id": "system", 00:26:49.008 "dma_device_type": 1 00:26:49.008 }, 00:26:49.008 { 00:26:49.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:49.008 "dma_device_type": 2 00:26:49.008 }, 00:26:49.008 { 00:26:49.008 "dma_device_id": "system", 00:26:49.008 "dma_device_type": 1 00:26:49.008 }, 00:26:49.008 { 00:26:49.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:49.008 "dma_device_type": 2 00:26:49.008 }, 00:26:49.008 { 00:26:49.008 "dma_device_id": "system", 00:26:49.008 "dma_device_type": 1 00:26:49.008 }, 00:26:49.008 { 00:26:49.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:49.008 "dma_device_type": 2 00:26:49.008 } 00:26:49.008 ], 00:26:49.008 "driver_specific": { 00:26:49.008 "raid": { 00:26:49.008 "uuid": "d5c45303-c10d-42c1-98f9-06ad6533a2ad", 00:26:49.008 "strip_size_kb": 0, 00:26:49.008 "state": "online", 00:26:49.008 "raid_level": "raid1", 00:26:49.008 "superblock": false, 00:26:49.008 "num_base_bdevs": 4, 00:26:49.008 "num_base_bdevs_discovered": 4, 00:26:49.008 "num_base_bdevs_operational": 4, 00:26:49.008 "base_bdevs_list": [ 00:26:49.008 { 00:26:49.008 "name": "BaseBdev1", 00:26:49.008 "uuid": "ee1453a1-62b7-4d4c-a088-7d7984d02254", 00:26:49.008 "is_configured": true, 00:26:49.008 "data_offset": 0, 00:26:49.008 "data_size": 65536 00:26:49.008 }, 00:26:49.008 { 00:26:49.008 "name": "BaseBdev2", 00:26:49.008 "uuid": "16631322-cd24-4f18-b5d1-ed1180f83a70", 00:26:49.008 "is_configured": true, 00:26:49.008 "data_offset": 0, 00:26:49.008 
"data_size": 65536 00:26:49.008 }, 00:26:49.008 { 00:26:49.008 "name": "BaseBdev3", 00:26:49.008 "uuid": "c4187f07-3810-47db-b532-b187e51b3602", 00:26:49.008 "is_configured": true, 00:26:49.008 "data_offset": 0, 00:26:49.008 "data_size": 65536 00:26:49.008 }, 00:26:49.008 { 00:26:49.008 "name": "BaseBdev4", 00:26:49.008 "uuid": "aec4fd3d-0bac-4af6-aa20-6299bcb4eed2", 00:26:49.008 "is_configured": true, 00:26:49.008 "data_offset": 0, 00:26:49.008 "data_size": 65536 00:26:49.008 } 00:26:49.008 ] 00:26:49.008 } 00:26:49.008 } 00:26:49.008 }' 00:26:49.008 00:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:49.008 00:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:26:49.008 BaseBdev2 00:26:49.008 BaseBdev3 00:26:49.008 BaseBdev4' 00:26:49.008 00:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:49.008 00:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:26:49.008 00:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:49.267 00:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:49.267 "name": "BaseBdev1", 00:26:49.267 "aliases": [ 00:26:49.267 "ee1453a1-62b7-4d4c-a088-7d7984d02254" 00:26:49.267 ], 00:26:49.267 "product_name": "Malloc disk", 00:26:49.267 "block_size": 512, 00:26:49.267 "num_blocks": 65536, 00:26:49.267 "uuid": "ee1453a1-62b7-4d4c-a088-7d7984d02254", 00:26:49.267 "assigned_rate_limits": { 00:26:49.267 "rw_ios_per_sec": 0, 00:26:49.267 "rw_mbytes_per_sec": 0, 00:26:49.267 "r_mbytes_per_sec": 0, 00:26:49.267 "w_mbytes_per_sec": 0 00:26:49.267 }, 00:26:49.267 "claimed": true, 00:26:49.267 "claim_type": "exclusive_write", 00:26:49.267 "zoned": false, 00:26:49.267 "supported_io_types": { 00:26:49.267 "read": true, 00:26:49.267 "write": true, 00:26:49.267 "unmap": true, 00:26:49.267 "flush": true, 00:26:49.267 "reset": true, 00:26:49.267 "nvme_admin": false, 00:26:49.267 "nvme_io": false, 00:26:49.267 "nvme_io_md": false, 00:26:49.267 "write_zeroes": true, 00:26:49.267 "zcopy": true, 00:26:49.267 "get_zone_info": false, 00:26:49.267 "zone_management": false, 00:26:49.267 "zone_append": false, 00:26:49.267 "compare": false, 00:26:49.267 "compare_and_write": false, 00:26:49.267 "abort": true, 00:26:49.267 "seek_hole": false, 00:26:49.267 "seek_data": false, 00:26:49.267 "copy": true, 00:26:49.267 "nvme_iov_md": false 00:26:49.267 }, 00:26:49.267 "memory_domains": [ 00:26:49.267 { 00:26:49.267 "dma_device_id": "system", 00:26:49.267 "dma_device_type": 1 00:26:49.267 }, 00:26:49.267 { 00:26:49.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:49.267 "dma_device_type": 2 00:26:49.267 } 00:26:49.267 ], 00:26:49.267 "driver_specific": {} 00:26:49.267 }' 00:26:49.267 00:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:49.267 00:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:49.525 00:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:49.525 00:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:49.525 00:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:49.525 00:53:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:49.525 00:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:49.525 00:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:49.525 00:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:49.525 00:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:49.525 00:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:49.783 00:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:49.783 00:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:49.783 00:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:49.783 00:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:50.042 00:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:50.042 "name": "BaseBdev2", 00:26:50.042 "aliases": [ 00:26:50.042 "16631322-cd24-4f18-b5d1-ed1180f83a70" 00:26:50.042 ], 00:26:50.042 "product_name": "Malloc disk", 00:26:50.042 "block_size": 512, 00:26:50.042 "num_blocks": 65536, 00:26:50.042 "uuid": "16631322-cd24-4f18-b5d1-ed1180f83a70", 00:26:50.042 "assigned_rate_limits": { 00:26:50.042 "rw_ios_per_sec": 0, 00:26:50.042 "rw_mbytes_per_sec": 0, 00:26:50.042 "r_mbytes_per_sec": 0, 00:26:50.042 "w_mbytes_per_sec": 0 00:26:50.042 }, 00:26:50.042 "claimed": true, 00:26:50.042 "claim_type": "exclusive_write", 00:26:50.042 "zoned": false, 00:26:50.042 "supported_io_types": { 00:26:50.042 "read": true, 00:26:50.042 "write": true, 00:26:50.042 "unmap": true, 00:26:50.042 "flush": true, 00:26:50.042 "reset": true, 00:26:50.042 "nvme_admin": false, 00:26:50.042 "nvme_io": false, 00:26:50.042 "nvme_io_md": false, 00:26:50.042 "write_zeroes": true, 00:26:50.042 "zcopy": true, 00:26:50.042 "get_zone_info": false, 00:26:50.042 "zone_management": false, 00:26:50.042 "zone_append": false, 00:26:50.042 "compare": false, 00:26:50.042 "compare_and_write": false, 00:26:50.042 "abort": true, 00:26:50.042 "seek_hole": false, 00:26:50.042 "seek_data": false, 00:26:50.042 "copy": true, 00:26:50.042 "nvme_iov_md": false 00:26:50.042 }, 00:26:50.042 "memory_domains": [ 00:26:50.042 { 00:26:50.042 "dma_device_id": "system", 00:26:50.042 "dma_device_type": 1 00:26:50.042 }, 00:26:50.042 { 00:26:50.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:50.042 "dma_device_type": 2 00:26:50.042 } 00:26:50.042 ], 00:26:50.042 "driver_specific": {} 00:26:50.042 }' 00:26:50.042 00:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:50.042 00:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:50.042 00:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:50.042 00:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:50.042 00:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:50.042 00:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:50.042 00:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:26:50.042 00:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:50.301 00:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:50.301 00:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:50.301 00:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:50.301 00:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:50.301 00:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:50.301 00:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:50.301 00:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:50.611 00:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:50.611 "name": "BaseBdev3", 00:26:50.611 "aliases": [ 00:26:50.611 "c4187f07-3810-47db-b532-b187e51b3602" 00:26:50.611 ], 00:26:50.611 "product_name": "Malloc disk", 00:26:50.611 "block_size": 512, 00:26:50.611 "num_blocks": 65536, 00:26:50.611 "uuid": "c4187f07-3810-47db-b532-b187e51b3602", 00:26:50.611 "assigned_rate_limits": { 00:26:50.611 "rw_ios_per_sec": 0, 00:26:50.611 "rw_mbytes_per_sec": 0, 00:26:50.611 "r_mbytes_per_sec": 0, 00:26:50.611 "w_mbytes_per_sec": 0 00:26:50.611 }, 00:26:50.611 "claimed": true, 00:26:50.611 "claim_type": "exclusive_write", 00:26:50.611 "zoned": false, 00:26:50.611 "supported_io_types": { 00:26:50.611 "read": true, 00:26:50.611 "write": true, 00:26:50.611 "unmap": true, 00:26:50.611 "flush": true, 00:26:50.611 "reset": true, 00:26:50.611 "nvme_admin": false, 00:26:50.611 "nvme_io": false, 00:26:50.611 "nvme_io_md": false, 00:26:50.611 "write_zeroes": true, 00:26:50.611 "zcopy": true, 00:26:50.611 "get_zone_info": false, 00:26:50.611 "zone_management": false, 00:26:50.611 "zone_append": false, 00:26:50.611 "compare": false, 00:26:50.611 "compare_and_write": false, 00:26:50.611 "abort": true, 00:26:50.611 "seek_hole": false, 00:26:50.611 "seek_data": false, 00:26:50.611 "copy": true, 00:26:50.611 "nvme_iov_md": false 00:26:50.611 }, 00:26:50.611 "memory_domains": [ 00:26:50.611 { 00:26:50.611 "dma_device_id": "system", 00:26:50.611 "dma_device_type": 1 00:26:50.611 }, 00:26:50.611 { 00:26:50.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:50.611 "dma_device_type": 2 00:26:50.611 } 00:26:50.611 ], 00:26:50.611 "driver_specific": {} 00:26:50.611 }' 00:26:50.611 00:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:50.611 00:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:50.611 00:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:50.611 00:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:50.611 00:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:50.611 00:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:50.611 00:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:50.611 00:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:50.879 00:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null 
== null ]] 00:26:50.879 00:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:50.879 00:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:50.879 00:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:50.879 00:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:50.879 00:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:50.879 00:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:51.138 00:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:51.138 "name": "BaseBdev4", 00:26:51.138 "aliases": [ 00:26:51.138 "aec4fd3d-0bac-4af6-aa20-6299bcb4eed2" 00:26:51.138 ], 00:26:51.138 "product_name": "Malloc disk", 00:26:51.138 "block_size": 512, 00:26:51.138 "num_blocks": 65536, 00:26:51.138 "uuid": "aec4fd3d-0bac-4af6-aa20-6299bcb4eed2", 00:26:51.138 "assigned_rate_limits": { 00:26:51.138 "rw_ios_per_sec": 0, 00:26:51.138 "rw_mbytes_per_sec": 0, 00:26:51.138 "r_mbytes_per_sec": 0, 00:26:51.138 "w_mbytes_per_sec": 0 00:26:51.138 }, 00:26:51.138 "claimed": true, 00:26:51.138 "claim_type": "exclusive_write", 00:26:51.138 "zoned": false, 00:26:51.138 "supported_io_types": { 00:26:51.138 "read": true, 00:26:51.138 "write": true, 00:26:51.138 "unmap": true, 00:26:51.138 "flush": true, 00:26:51.138 "reset": true, 00:26:51.138 "nvme_admin": false, 00:26:51.138 "nvme_io": false, 00:26:51.138 "nvme_io_md": false, 00:26:51.138 "write_zeroes": true, 00:26:51.138 "zcopy": true, 00:26:51.138 "get_zone_info": false, 00:26:51.138 "zone_management": false, 00:26:51.138 "zone_append": false, 00:26:51.138 "compare": false, 00:26:51.138 "compare_and_write": false, 00:26:51.138 "abort": true, 00:26:51.138 "seek_hole": false, 00:26:51.138 "seek_data": false, 00:26:51.138 "copy": true, 00:26:51.138 "nvme_iov_md": false 00:26:51.138 }, 00:26:51.138 "memory_domains": [ 00:26:51.138 { 00:26:51.138 "dma_device_id": "system", 00:26:51.138 "dma_device_type": 1 00:26:51.138 }, 00:26:51.138 { 00:26:51.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:51.138 "dma_device_type": 2 00:26:51.138 } 00:26:51.138 ], 00:26:51.138 "driver_specific": {} 00:26:51.138 }' 00:26:51.138 00:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:51.138 00:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:51.138 00:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:51.138 00:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:51.138 00:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:51.138 00:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:51.138 00:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:51.138 00:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:51.398 00:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:51.398 00:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:51.398 00:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:26:51.398 00:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:51.398 00:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:51.657 [2024-07-25 00:53:14.167006] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:51.657 00:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:26:51.657 00:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:26:51.657 00:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:51.657 00:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:26:51.657 00:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:26:51.657 00:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:26:51.657 00:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:51.657 00:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:51.657 00:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:51.658 00:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:51.658 00:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:51.658 00:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:51.658 00:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:51.658 00:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:51.658 00:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:51.658 00:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:51.658 00:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:51.955 00:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:51.955 "name": "Existed_Raid", 00:26:51.955 "uuid": "d5c45303-c10d-42c1-98f9-06ad6533a2ad", 00:26:51.955 "strip_size_kb": 0, 00:26:51.955 "state": "online", 00:26:51.955 "raid_level": "raid1", 00:26:51.955 "superblock": false, 00:26:51.955 "num_base_bdevs": 4, 00:26:51.955 "num_base_bdevs_discovered": 3, 00:26:51.955 "num_base_bdevs_operational": 3, 00:26:51.955 "base_bdevs_list": [ 00:26:51.955 { 00:26:51.955 "name": null, 00:26:51.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:51.955 "is_configured": false, 00:26:51.955 "data_offset": 0, 00:26:51.955 "data_size": 65536 00:26:51.955 }, 00:26:51.955 { 00:26:51.955 "name": "BaseBdev2", 00:26:51.955 "uuid": "16631322-cd24-4f18-b5d1-ed1180f83a70", 00:26:51.955 "is_configured": true, 00:26:51.955 "data_offset": 0, 00:26:51.955 "data_size": 65536 00:26:51.955 }, 00:26:51.955 { 00:26:51.955 "name": "BaseBdev3", 00:26:51.955 "uuid": "c4187f07-3810-47db-b532-b187e51b3602", 00:26:51.955 "is_configured": true, 00:26:51.955 "data_offset": 0, 00:26:51.955 "data_size": 65536 00:26:51.955 
}, 00:26:51.955 { 00:26:51.955 "name": "BaseBdev4", 00:26:51.955 "uuid": "aec4fd3d-0bac-4af6-aa20-6299bcb4eed2", 00:26:51.955 "is_configured": true, 00:26:51.955 "data_offset": 0, 00:26:51.955 "data_size": 65536 00:26:51.955 } 00:26:51.955 ] 00:26:51.955 }' 00:26:51.955 00:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:51.955 00:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:52.538 00:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:26:52.538 00:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:52.538 00:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.538 00:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:52.797 00:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:52.797 00:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:52.797 00:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:53.055 [2024-07-25 00:53:15.691282] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:53.314 00:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:53.314 00:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:53.314 00:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:53.314 00:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:53.573 00:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:53.573 00:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:53.573 00:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:26:53.831 [2024-07-25 00:53:16.242897] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:53.831 00:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:53.831 00:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:53.831 00:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:53.831 00:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:54.090 00:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:54.090 00:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:54.090 00:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:26:54.349 [2024-07-25 00:53:16.795716] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 
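[editor's aside, not part of the captured output] The state checks exercised throughout this run reduce to one RPC plus a jq filter over its output. A minimal standalone sketch of that check, assuming the same RPC socket /var/tmp/spdk-raid.sock and rpc.py path shown in this job (bash, jq required as in the run above):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# Fetch all raid bdevs and pull out the entry named Existed_Raid, as the
# harness does at bdev_raid.sh@126 above.
info=$($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
# Compare the reported state against the expected one ("online" while at least
# 3 of the 4 raid1 members remain, per the deletion sequence in this log).
[[ $(jq -r .state <<< "$info") == online ]] || echo "unexpected raid state: $(jq -r .state <<< "$info")"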
00:26:54.349 [2024-07-25 00:53:16.796062] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:54.349 [2024-07-25 00:53:16.904647] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:54.349 [2024-07-25 00:53:16.904928] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:54.349 [2024-07-25 00:53:16.905049] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:26:54.349 00:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:54.349 00:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:54.349 00:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:54.349 00:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:26:54.607 00:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:26:54.607 00:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:26:54.607 00:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:26:54.607 00:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:26:54.607 00:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:54.607 00:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:54.866 BaseBdev2 00:26:54.866 00:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:26:54.866 00:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:26:54.866 00:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:54.866 00:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:54.866 00:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:54.866 00:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:54.866 00:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:55.125 00:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:55.125 [ 00:26:55.125 { 00:26:55.125 "name": "BaseBdev2", 00:26:55.125 "aliases": [ 00:26:55.125 "ca3192dc-82a4-4139-8380-9c60b5ee50d4" 00:26:55.125 ], 00:26:55.125 "product_name": "Malloc disk", 00:26:55.125 "block_size": 512, 00:26:55.125 "num_blocks": 65536, 00:26:55.125 "uuid": "ca3192dc-82a4-4139-8380-9c60b5ee50d4", 00:26:55.125 "assigned_rate_limits": { 00:26:55.125 "rw_ios_per_sec": 0, 00:26:55.125 "rw_mbytes_per_sec": 0, 00:26:55.125 "r_mbytes_per_sec": 0, 00:26:55.125 "w_mbytes_per_sec": 0 00:26:55.125 }, 00:26:55.125 "claimed": false, 00:26:55.125 "zoned": false, 00:26:55.125 "supported_io_types": { 00:26:55.125 "read": true, 00:26:55.125 "write": true, 00:26:55.125 
"unmap": true, 00:26:55.125 "flush": true, 00:26:55.125 "reset": true, 00:26:55.125 "nvme_admin": false, 00:26:55.125 "nvme_io": false, 00:26:55.125 "nvme_io_md": false, 00:26:55.125 "write_zeroes": true, 00:26:55.125 "zcopy": true, 00:26:55.125 "get_zone_info": false, 00:26:55.125 "zone_management": false, 00:26:55.125 "zone_append": false, 00:26:55.125 "compare": false, 00:26:55.125 "compare_and_write": false, 00:26:55.125 "abort": true, 00:26:55.125 "seek_hole": false, 00:26:55.125 "seek_data": false, 00:26:55.125 "copy": true, 00:26:55.125 "nvme_iov_md": false 00:26:55.125 }, 00:26:55.125 "memory_domains": [ 00:26:55.125 { 00:26:55.125 "dma_device_id": "system", 00:26:55.125 "dma_device_type": 1 00:26:55.125 }, 00:26:55.125 { 00:26:55.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:55.125 "dma_device_type": 2 00:26:55.125 } 00:26:55.125 ], 00:26:55.125 "driver_specific": {} 00:26:55.125 } 00:26:55.125 ] 00:26:55.125 00:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:55.125 00:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:55.125 00:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:55.385 00:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:55.385 BaseBdev3 00:26:55.385 00:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:26:55.385 00:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:26:55.385 00:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:55.385 00:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:55.385 00:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:55.385 00:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:55.385 00:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:55.645 00:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:55.904 [ 00:26:55.904 { 00:26:55.904 "name": "BaseBdev3", 00:26:55.904 "aliases": [ 00:26:55.904 "fcdef9ee-4526-4b48-acde-d9e108dd7605" 00:26:55.904 ], 00:26:55.904 "product_name": "Malloc disk", 00:26:55.904 "block_size": 512, 00:26:55.904 "num_blocks": 65536, 00:26:55.904 "uuid": "fcdef9ee-4526-4b48-acde-d9e108dd7605", 00:26:55.904 "assigned_rate_limits": { 00:26:55.904 "rw_ios_per_sec": 0, 00:26:55.904 "rw_mbytes_per_sec": 0, 00:26:55.904 "r_mbytes_per_sec": 0, 00:26:55.904 "w_mbytes_per_sec": 0 00:26:55.904 }, 00:26:55.904 "claimed": false, 00:26:55.904 "zoned": false, 00:26:55.904 "supported_io_types": { 00:26:55.904 "read": true, 00:26:55.904 "write": true, 00:26:55.904 "unmap": true, 00:26:55.904 "flush": true, 00:26:55.904 "reset": true, 00:26:55.904 "nvme_admin": false, 00:26:55.904 "nvme_io": false, 00:26:55.904 "nvme_io_md": false, 00:26:55.904 "write_zeroes": true, 00:26:55.904 "zcopy": true, 00:26:55.904 "get_zone_info": false, 00:26:55.905 "zone_management": false, 00:26:55.905 "zone_append": false, 
00:26:55.905 "compare": false, 00:26:55.905 "compare_and_write": false, 00:26:55.905 "abort": true, 00:26:55.905 "seek_hole": false, 00:26:55.905 "seek_data": false, 00:26:55.905 "copy": true, 00:26:55.905 "nvme_iov_md": false 00:26:55.905 }, 00:26:55.905 "memory_domains": [ 00:26:55.905 { 00:26:55.905 "dma_device_id": "system", 00:26:55.905 "dma_device_type": 1 00:26:55.905 }, 00:26:55.905 { 00:26:55.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:55.905 "dma_device_type": 2 00:26:55.905 } 00:26:55.905 ], 00:26:55.905 "driver_specific": {} 00:26:55.905 } 00:26:55.905 ] 00:26:55.905 00:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:55.905 00:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:55.905 00:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:55.905 00:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:56.165 BaseBdev4 00:26:56.165 00:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:26:56.165 00:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:26:56.165 00:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:56.165 00:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:56.165 00:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:56.165 00:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:56.165 00:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:56.425 00:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:56.425 [ 00:26:56.425 { 00:26:56.425 "name": "BaseBdev4", 00:26:56.425 "aliases": [ 00:26:56.425 "58b0ef31-6700-4565-b85b-5f5dc15f659e" 00:26:56.425 ], 00:26:56.425 "product_name": "Malloc disk", 00:26:56.425 "block_size": 512, 00:26:56.425 "num_blocks": 65536, 00:26:56.425 "uuid": "58b0ef31-6700-4565-b85b-5f5dc15f659e", 00:26:56.425 "assigned_rate_limits": { 00:26:56.425 "rw_ios_per_sec": 0, 00:26:56.425 "rw_mbytes_per_sec": 0, 00:26:56.425 "r_mbytes_per_sec": 0, 00:26:56.425 "w_mbytes_per_sec": 0 00:26:56.425 }, 00:26:56.425 "claimed": false, 00:26:56.425 "zoned": false, 00:26:56.425 "supported_io_types": { 00:26:56.425 "read": true, 00:26:56.425 "write": true, 00:26:56.425 "unmap": true, 00:26:56.425 "flush": true, 00:26:56.425 "reset": true, 00:26:56.425 "nvme_admin": false, 00:26:56.425 "nvme_io": false, 00:26:56.425 "nvme_io_md": false, 00:26:56.425 "write_zeroes": true, 00:26:56.425 "zcopy": true, 00:26:56.425 "get_zone_info": false, 00:26:56.425 "zone_management": false, 00:26:56.425 "zone_append": false, 00:26:56.425 "compare": false, 00:26:56.425 "compare_and_write": false, 00:26:56.425 "abort": true, 00:26:56.425 "seek_hole": false, 00:26:56.425 "seek_data": false, 00:26:56.425 "copy": true, 00:26:56.425 "nvme_iov_md": false 00:26:56.425 }, 00:26:56.425 "memory_domains": [ 00:26:56.425 { 00:26:56.425 "dma_device_id": "system", 00:26:56.425 
"dma_device_type": 1 00:26:56.425 }, 00:26:56.425 { 00:26:56.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:56.425 "dma_device_type": 2 00:26:56.425 } 00:26:56.425 ], 00:26:56.425 "driver_specific": {} 00:26:56.425 } 00:26:56.425 ] 00:26:56.425 00:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:56.425 00:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:56.425 00:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:56.425 00:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:56.685 [2024-07-25 00:53:19.200374] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:56.685 [2024-07-25 00:53:19.200648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:56.685 [2024-07-25 00:53:19.200757] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:56.685 [2024-07-25 00:53:19.202920] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:56.685 [2024-07-25 00:53:19.203096] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:56.685 00:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:56.685 00:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:56.685 00:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:56.685 00:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:56.685 00:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:56.685 00:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:56.685 00:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:56.685 00:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:56.685 00:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:56.685 00:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:56.685 00:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:56.685 00:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:56.945 00:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:56.945 "name": "Existed_Raid", 00:26:56.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:56.945 "strip_size_kb": 0, 00:26:56.945 "state": "configuring", 00:26:56.945 "raid_level": "raid1", 00:26:56.945 "superblock": false, 00:26:56.945 "num_base_bdevs": 4, 00:26:56.945 "num_base_bdevs_discovered": 3, 00:26:56.945 "num_base_bdevs_operational": 4, 00:26:56.945 "base_bdevs_list": [ 00:26:56.945 { 00:26:56.945 "name": "BaseBdev1", 00:26:56.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:56.945 "is_configured": false, 
00:26:56.945 "data_offset": 0, 00:26:56.945 "data_size": 0 00:26:56.945 }, 00:26:56.945 { 00:26:56.945 "name": "BaseBdev2", 00:26:56.945 "uuid": "ca3192dc-82a4-4139-8380-9c60b5ee50d4", 00:26:56.945 "is_configured": true, 00:26:56.945 "data_offset": 0, 00:26:56.945 "data_size": 65536 00:26:56.945 }, 00:26:56.945 { 00:26:56.945 "name": "BaseBdev3", 00:26:56.945 "uuid": "fcdef9ee-4526-4b48-acde-d9e108dd7605", 00:26:56.945 "is_configured": true, 00:26:56.945 "data_offset": 0, 00:26:56.945 "data_size": 65536 00:26:56.945 }, 00:26:56.945 { 00:26:56.945 "name": "BaseBdev4", 00:26:56.945 "uuid": "58b0ef31-6700-4565-b85b-5f5dc15f659e", 00:26:56.945 "is_configured": true, 00:26:56.945 "data_offset": 0, 00:26:56.945 "data_size": 65536 00:26:56.945 } 00:26:56.945 ] 00:26:56.945 }' 00:26:56.945 00:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:56.945 00:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:57.513 00:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:26:57.772 [2024-07-25 00:53:20.228527] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:57.772 00:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:57.772 00:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:57.772 00:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:57.772 00:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:57.772 00:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:57.772 00:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:57.772 00:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:57.772 00:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:57.772 00:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:57.772 00:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:57.772 00:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:57.772 00:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:58.031 00:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:58.031 "name": "Existed_Raid", 00:26:58.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:58.031 "strip_size_kb": 0, 00:26:58.031 "state": "configuring", 00:26:58.031 "raid_level": "raid1", 00:26:58.031 "superblock": false, 00:26:58.031 "num_base_bdevs": 4, 00:26:58.031 "num_base_bdevs_discovered": 2, 00:26:58.031 "num_base_bdevs_operational": 4, 00:26:58.031 "base_bdevs_list": [ 00:26:58.031 { 00:26:58.031 "name": "BaseBdev1", 00:26:58.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:58.031 "is_configured": false, 00:26:58.031 "data_offset": 0, 00:26:58.031 "data_size": 0 00:26:58.031 }, 00:26:58.031 { 00:26:58.031 "name": null, 00:26:58.031 "uuid": 
"ca3192dc-82a4-4139-8380-9c60b5ee50d4", 00:26:58.031 "is_configured": false, 00:26:58.031 "data_offset": 0, 00:26:58.031 "data_size": 65536 00:26:58.031 }, 00:26:58.031 { 00:26:58.031 "name": "BaseBdev3", 00:26:58.031 "uuid": "fcdef9ee-4526-4b48-acde-d9e108dd7605", 00:26:58.031 "is_configured": true, 00:26:58.031 "data_offset": 0, 00:26:58.031 "data_size": 65536 00:26:58.031 }, 00:26:58.031 { 00:26:58.031 "name": "BaseBdev4", 00:26:58.031 "uuid": "58b0ef31-6700-4565-b85b-5f5dc15f659e", 00:26:58.031 "is_configured": true, 00:26:58.031 "data_offset": 0, 00:26:58.031 "data_size": 65536 00:26:58.031 } 00:26:58.031 ] 00:26:58.031 }' 00:26:58.031 00:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:58.031 00:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.599 00:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:58.599 00:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:58.858 00:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:26:58.858 00:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:59.117 [2024-07-25 00:53:21.593857] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:59.117 BaseBdev1 00:26:59.117 00:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:26:59.117 00:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:26:59.117 00:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:59.117 00:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:59.117 00:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:59.117 00:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:59.117 00:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:59.376 00:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:59.635 [ 00:26:59.635 { 00:26:59.635 "name": "BaseBdev1", 00:26:59.635 "aliases": [ 00:26:59.635 "27af3b22-8b17-4a8b-9400-16d38b14c133" 00:26:59.635 ], 00:26:59.635 "product_name": "Malloc disk", 00:26:59.635 "block_size": 512, 00:26:59.635 "num_blocks": 65536, 00:26:59.635 "uuid": "27af3b22-8b17-4a8b-9400-16d38b14c133", 00:26:59.635 "assigned_rate_limits": { 00:26:59.635 "rw_ios_per_sec": 0, 00:26:59.635 "rw_mbytes_per_sec": 0, 00:26:59.635 "r_mbytes_per_sec": 0, 00:26:59.635 "w_mbytes_per_sec": 0 00:26:59.635 }, 00:26:59.635 "claimed": true, 00:26:59.635 "claim_type": "exclusive_write", 00:26:59.635 "zoned": false, 00:26:59.635 "supported_io_types": { 00:26:59.635 "read": true, 00:26:59.635 "write": true, 00:26:59.635 "unmap": true, 00:26:59.635 "flush": true, 00:26:59.635 "reset": true, 00:26:59.635 "nvme_admin": false, 00:26:59.635 "nvme_io": false, 00:26:59.635 
"nvme_io_md": false, 00:26:59.635 "write_zeroes": true, 00:26:59.635 "zcopy": true, 00:26:59.635 "get_zone_info": false, 00:26:59.635 "zone_management": false, 00:26:59.635 "zone_append": false, 00:26:59.635 "compare": false, 00:26:59.635 "compare_and_write": false, 00:26:59.635 "abort": true, 00:26:59.635 "seek_hole": false, 00:26:59.635 "seek_data": false, 00:26:59.635 "copy": true, 00:26:59.635 "nvme_iov_md": false 00:26:59.635 }, 00:26:59.635 "memory_domains": [ 00:26:59.635 { 00:26:59.635 "dma_device_id": "system", 00:26:59.635 "dma_device_type": 1 00:26:59.635 }, 00:26:59.635 { 00:26:59.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:59.635 "dma_device_type": 2 00:26:59.635 } 00:26:59.635 ], 00:26:59.635 "driver_specific": {} 00:26:59.635 } 00:26:59.635 ] 00:26:59.635 00:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:59.635 00:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:59.635 00:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:59.635 00:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:59.635 00:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:59.636 00:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:59.636 00:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:59.636 00:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:59.636 00:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:59.636 00:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:59.636 00:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:59.636 00:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:59.636 00:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:59.636 00:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:59.636 "name": "Existed_Raid", 00:26:59.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:59.636 "strip_size_kb": 0, 00:26:59.636 "state": "configuring", 00:26:59.636 "raid_level": "raid1", 00:26:59.636 "superblock": false, 00:26:59.636 "num_base_bdevs": 4, 00:26:59.636 "num_base_bdevs_discovered": 3, 00:26:59.636 "num_base_bdevs_operational": 4, 00:26:59.636 "base_bdevs_list": [ 00:26:59.636 { 00:26:59.636 "name": "BaseBdev1", 00:26:59.636 "uuid": "27af3b22-8b17-4a8b-9400-16d38b14c133", 00:26:59.636 "is_configured": true, 00:26:59.636 "data_offset": 0, 00:26:59.636 "data_size": 65536 00:26:59.636 }, 00:26:59.636 { 00:26:59.636 "name": null, 00:26:59.636 "uuid": "ca3192dc-82a4-4139-8380-9c60b5ee50d4", 00:26:59.636 "is_configured": false, 00:26:59.636 "data_offset": 0, 00:26:59.636 "data_size": 65536 00:26:59.636 }, 00:26:59.636 { 00:26:59.636 "name": "BaseBdev3", 00:26:59.636 "uuid": "fcdef9ee-4526-4b48-acde-d9e108dd7605", 00:26:59.636 "is_configured": true, 00:26:59.636 "data_offset": 0, 00:26:59.636 "data_size": 65536 00:26:59.636 }, 00:26:59.636 { 00:26:59.636 
"name": "BaseBdev4", 00:26:59.636 "uuid": "58b0ef31-6700-4565-b85b-5f5dc15f659e", 00:26:59.636 "is_configured": true, 00:26:59.636 "data_offset": 0, 00:26:59.636 "data_size": 65536 00:26:59.636 } 00:26:59.636 ] 00:26:59.636 }' 00:26:59.636 00:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:59.636 00:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.206 00:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:00.206 00:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:00.473 00:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:27:00.473 00:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:27:00.731 [2024-07-25 00:53:23.354529] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:00.731 00:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:00.731 00:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:00.731 00:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:00.731 00:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:00.731 00:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:00.731 00:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:00.731 00:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:00.731 00:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:00.731 00:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:00.731 00:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:00.731 00:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:00.731 00:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:00.990 00:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:00.990 "name": "Existed_Raid", 00:27:00.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:00.990 "strip_size_kb": 0, 00:27:00.990 "state": "configuring", 00:27:00.990 "raid_level": "raid1", 00:27:00.990 "superblock": false, 00:27:00.990 "num_base_bdevs": 4, 00:27:00.990 "num_base_bdevs_discovered": 2, 00:27:00.990 "num_base_bdevs_operational": 4, 00:27:00.990 "base_bdevs_list": [ 00:27:00.990 { 00:27:00.990 "name": "BaseBdev1", 00:27:00.990 "uuid": "27af3b22-8b17-4a8b-9400-16d38b14c133", 00:27:00.990 "is_configured": true, 00:27:00.990 "data_offset": 0, 00:27:00.990 "data_size": 65536 00:27:00.990 }, 00:27:00.990 { 00:27:00.990 "name": null, 00:27:00.990 "uuid": "ca3192dc-82a4-4139-8380-9c60b5ee50d4", 00:27:00.990 "is_configured": false, 00:27:00.990 "data_offset": 0, 00:27:00.990 "data_size": 65536 
00:27:00.990 }, 00:27:00.990 { 00:27:00.990 "name": null, 00:27:00.990 "uuid": "fcdef9ee-4526-4b48-acde-d9e108dd7605", 00:27:00.990 "is_configured": false, 00:27:00.990 "data_offset": 0, 00:27:00.990 "data_size": 65536 00:27:00.990 }, 00:27:00.990 { 00:27:00.990 "name": "BaseBdev4", 00:27:00.990 "uuid": "58b0ef31-6700-4565-b85b-5f5dc15f659e", 00:27:00.990 "is_configured": true, 00:27:00.990 "data_offset": 0, 00:27:00.990 "data_size": 65536 00:27:00.990 } 00:27:00.990 ] 00:27:00.990 }' 00:27:00.990 00:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:00.990 00:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.558 00:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:01.558 00:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:01.817 00:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:27:01.817 00:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:27:02.075 [2024-07-25 00:53:24.661958] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:02.075 00:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:02.075 00:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:02.075 00:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:02.075 00:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:02.075 00:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:02.075 00:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:02.075 00:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:02.075 00:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:02.076 00:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:02.076 00:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:02.076 00:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:02.076 00:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:02.334 00:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:02.334 "name": "Existed_Raid", 00:27:02.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:02.334 "strip_size_kb": 0, 00:27:02.334 "state": "configuring", 00:27:02.334 "raid_level": "raid1", 00:27:02.334 "superblock": false, 00:27:02.334 "num_base_bdevs": 4, 00:27:02.334 "num_base_bdevs_discovered": 3, 00:27:02.334 "num_base_bdevs_operational": 4, 00:27:02.334 "base_bdevs_list": [ 00:27:02.334 { 00:27:02.334 "name": "BaseBdev1", 00:27:02.334 "uuid": "27af3b22-8b17-4a8b-9400-16d38b14c133", 00:27:02.334 
"is_configured": true, 00:27:02.334 "data_offset": 0, 00:27:02.334 "data_size": 65536 00:27:02.334 }, 00:27:02.334 { 00:27:02.334 "name": null, 00:27:02.334 "uuid": "ca3192dc-82a4-4139-8380-9c60b5ee50d4", 00:27:02.334 "is_configured": false, 00:27:02.334 "data_offset": 0, 00:27:02.334 "data_size": 65536 00:27:02.334 }, 00:27:02.334 { 00:27:02.334 "name": "BaseBdev3", 00:27:02.334 "uuid": "fcdef9ee-4526-4b48-acde-d9e108dd7605", 00:27:02.334 "is_configured": true, 00:27:02.334 "data_offset": 0, 00:27:02.334 "data_size": 65536 00:27:02.334 }, 00:27:02.334 { 00:27:02.334 "name": "BaseBdev4", 00:27:02.334 "uuid": "58b0ef31-6700-4565-b85b-5f5dc15f659e", 00:27:02.334 "is_configured": true, 00:27:02.334 "data_offset": 0, 00:27:02.334 "data_size": 65536 00:27:02.334 } 00:27:02.334 ] 00:27:02.334 }' 00:27:02.334 00:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:02.334 00:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.899 00:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:02.899 00:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:03.158 00:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:27:03.158 00:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:03.417 [2024-07-25 00:53:25.918191] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:03.417 00:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:03.417 00:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:03.417 00:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:03.417 00:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:03.417 00:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:03.417 00:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:03.417 00:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:03.417 00:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:03.417 00:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:03.417 00:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:03.417 00:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:03.417 00:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:03.676 00:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:03.676 "name": "Existed_Raid", 00:27:03.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:03.676 "strip_size_kb": 0, 00:27:03.676 "state": "configuring", 00:27:03.676 "raid_level": "raid1", 00:27:03.676 "superblock": false, 00:27:03.676 
"num_base_bdevs": 4, 00:27:03.676 "num_base_bdevs_discovered": 2, 00:27:03.676 "num_base_bdevs_operational": 4, 00:27:03.676 "base_bdevs_list": [ 00:27:03.676 { 00:27:03.676 "name": null, 00:27:03.676 "uuid": "27af3b22-8b17-4a8b-9400-16d38b14c133", 00:27:03.676 "is_configured": false, 00:27:03.676 "data_offset": 0, 00:27:03.676 "data_size": 65536 00:27:03.676 }, 00:27:03.676 { 00:27:03.676 "name": null, 00:27:03.676 "uuid": "ca3192dc-82a4-4139-8380-9c60b5ee50d4", 00:27:03.676 "is_configured": false, 00:27:03.676 "data_offset": 0, 00:27:03.676 "data_size": 65536 00:27:03.676 }, 00:27:03.676 { 00:27:03.676 "name": "BaseBdev3", 00:27:03.676 "uuid": "fcdef9ee-4526-4b48-acde-d9e108dd7605", 00:27:03.676 "is_configured": true, 00:27:03.676 "data_offset": 0, 00:27:03.676 "data_size": 65536 00:27:03.676 }, 00:27:03.676 { 00:27:03.676 "name": "BaseBdev4", 00:27:03.676 "uuid": "58b0ef31-6700-4565-b85b-5f5dc15f659e", 00:27:03.676 "is_configured": true, 00:27:03.676 "data_offset": 0, 00:27:03.676 "data_size": 65536 00:27:03.676 } 00:27:03.676 ] 00:27:03.676 }' 00:27:03.676 00:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:03.676 00:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:04.242 00:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:04.242 00:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:04.501 00:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:27:04.501 00:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:27:04.759 [2024-07-25 00:53:27.258584] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:04.759 00:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:04.759 00:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:04.759 00:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:04.759 00:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:04.759 00:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:04.759 00:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:04.759 00:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:04.759 00:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:04.759 00:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:04.759 00:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:04.759 00:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:04.759 00:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:05.018 00:53:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:05.018 "name": "Existed_Raid", 00:27:05.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:05.018 "strip_size_kb": 0, 00:27:05.018 "state": "configuring", 00:27:05.018 "raid_level": "raid1", 00:27:05.018 "superblock": false, 00:27:05.018 "num_base_bdevs": 4, 00:27:05.018 "num_base_bdevs_discovered": 3, 00:27:05.018 "num_base_bdevs_operational": 4, 00:27:05.018 "base_bdevs_list": [ 00:27:05.018 { 00:27:05.018 "name": null, 00:27:05.018 "uuid": "27af3b22-8b17-4a8b-9400-16d38b14c133", 00:27:05.018 "is_configured": false, 00:27:05.018 "data_offset": 0, 00:27:05.018 "data_size": 65536 00:27:05.018 }, 00:27:05.018 { 00:27:05.018 "name": "BaseBdev2", 00:27:05.018 "uuid": "ca3192dc-82a4-4139-8380-9c60b5ee50d4", 00:27:05.018 "is_configured": true, 00:27:05.018 "data_offset": 0, 00:27:05.018 "data_size": 65536 00:27:05.018 }, 00:27:05.018 { 00:27:05.018 "name": "BaseBdev3", 00:27:05.018 "uuid": "fcdef9ee-4526-4b48-acde-d9e108dd7605", 00:27:05.018 "is_configured": true, 00:27:05.018 "data_offset": 0, 00:27:05.018 "data_size": 65536 00:27:05.018 }, 00:27:05.018 { 00:27:05.018 "name": "BaseBdev4", 00:27:05.018 "uuid": "58b0ef31-6700-4565-b85b-5f5dc15f659e", 00:27:05.018 "is_configured": true, 00:27:05.018 "data_offset": 0, 00:27:05.018 "data_size": 65536 00:27:05.018 } 00:27:05.018 ] 00:27:05.018 }' 00:27:05.018 00:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:05.018 00:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:05.586 00:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:05.586 00:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:05.586 00:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:27:05.586 00:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:05.586 00:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:27:05.845 00:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 27af3b22-8b17-4a8b-9400-16d38b14c133 00:27:06.104 [2024-07-25 00:53:28.605745] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:27:06.104 [2024-07-25 00:53:28.605982] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:27:06.104 [2024-07-25 00:53:28.606024] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:27:06.104 [2024-07-25 00:53:28.606255] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:06.104 [2024-07-25 00:53:28.606654] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:27:06.104 [2024-07-25 00:53:28.606762] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009380 00:27:06.104 [2024-07-25 00:53:28.607100] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:06.104 NewBaseBdev 00:27:06.104 00:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # 
waitforbdev NewBaseBdev 00:27:06.104 00:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:27:06.104 00:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:06.104 00:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:27:06.104 00:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:06.104 00:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:06.104 00:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:06.363 00:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:27:06.622 [ 00:27:06.622 { 00:27:06.622 "name": "NewBaseBdev", 00:27:06.622 "aliases": [ 00:27:06.622 "27af3b22-8b17-4a8b-9400-16d38b14c133" 00:27:06.622 ], 00:27:06.622 "product_name": "Malloc disk", 00:27:06.622 "block_size": 512, 00:27:06.622 "num_blocks": 65536, 00:27:06.622 "uuid": "27af3b22-8b17-4a8b-9400-16d38b14c133", 00:27:06.622 "assigned_rate_limits": { 00:27:06.622 "rw_ios_per_sec": 0, 00:27:06.622 "rw_mbytes_per_sec": 0, 00:27:06.622 "r_mbytes_per_sec": 0, 00:27:06.622 "w_mbytes_per_sec": 0 00:27:06.622 }, 00:27:06.622 "claimed": true, 00:27:06.623 "claim_type": "exclusive_write", 00:27:06.623 "zoned": false, 00:27:06.623 "supported_io_types": { 00:27:06.623 "read": true, 00:27:06.623 "write": true, 00:27:06.623 "unmap": true, 00:27:06.623 "flush": true, 00:27:06.623 "reset": true, 00:27:06.623 "nvme_admin": false, 00:27:06.623 "nvme_io": false, 00:27:06.623 "nvme_io_md": false, 00:27:06.623 "write_zeroes": true, 00:27:06.623 "zcopy": true, 00:27:06.623 "get_zone_info": false, 00:27:06.623 "zone_management": false, 00:27:06.623 "zone_append": false, 00:27:06.623 "compare": false, 00:27:06.623 "compare_and_write": false, 00:27:06.623 "abort": true, 00:27:06.623 "seek_hole": false, 00:27:06.623 "seek_data": false, 00:27:06.623 "copy": true, 00:27:06.623 "nvme_iov_md": false 00:27:06.623 }, 00:27:06.623 "memory_domains": [ 00:27:06.623 { 00:27:06.623 "dma_device_id": "system", 00:27:06.623 "dma_device_type": 1 00:27:06.623 }, 00:27:06.623 { 00:27:06.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:06.623 "dma_device_type": 2 00:27:06.623 } 00:27:06.623 ], 00:27:06.623 "driver_specific": {} 00:27:06.623 } 00:27:06.623 ] 00:27:06.623 00:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:27:06.623 00:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:27:06.623 00:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:06.623 00:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:06.623 00:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:06.623 00:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:06.623 00:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:06.623 00:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:27:06.623 00:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:06.623 00:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:06.623 00:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:06.623 00:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:06.623 00:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:06.623 00:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:06.623 "name": "Existed_Raid", 00:27:06.623 "uuid": "97a12bc7-11d6-49e3-ad91-982db6b48c33", 00:27:06.623 "strip_size_kb": 0, 00:27:06.623 "state": "online", 00:27:06.623 "raid_level": "raid1", 00:27:06.623 "superblock": false, 00:27:06.623 "num_base_bdevs": 4, 00:27:06.623 "num_base_bdevs_discovered": 4, 00:27:06.623 "num_base_bdevs_operational": 4, 00:27:06.623 "base_bdevs_list": [ 00:27:06.623 { 00:27:06.623 "name": "NewBaseBdev", 00:27:06.623 "uuid": "27af3b22-8b17-4a8b-9400-16d38b14c133", 00:27:06.623 "is_configured": true, 00:27:06.623 "data_offset": 0, 00:27:06.623 "data_size": 65536 00:27:06.623 }, 00:27:06.623 { 00:27:06.623 "name": "BaseBdev2", 00:27:06.623 "uuid": "ca3192dc-82a4-4139-8380-9c60b5ee50d4", 00:27:06.623 "is_configured": true, 00:27:06.623 "data_offset": 0, 00:27:06.623 "data_size": 65536 00:27:06.623 }, 00:27:06.623 { 00:27:06.623 "name": "BaseBdev3", 00:27:06.623 "uuid": "fcdef9ee-4526-4b48-acde-d9e108dd7605", 00:27:06.623 "is_configured": true, 00:27:06.623 "data_offset": 0, 00:27:06.623 "data_size": 65536 00:27:06.623 }, 00:27:06.623 { 00:27:06.623 "name": "BaseBdev4", 00:27:06.623 "uuid": "58b0ef31-6700-4565-b85b-5f5dc15f659e", 00:27:06.623 "is_configured": true, 00:27:06.623 "data_offset": 0, 00:27:06.623 "data_size": 65536 00:27:06.623 } 00:27:06.623 ] 00:27:06.623 }' 00:27:06.623 00:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:06.623 00:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:07.189 00:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:27:07.189 00:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:27:07.189 00:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:07.189 00:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:07.189 00:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:07.189 00:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:27:07.190 00:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:27:07.190 00:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:07.448 [2024-07-25 00:53:30.018321] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:07.448 00:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:07.448 "name": "Existed_Raid", 00:27:07.448 "aliases": [ 00:27:07.448 
"97a12bc7-11d6-49e3-ad91-982db6b48c33" 00:27:07.448 ], 00:27:07.448 "product_name": "Raid Volume", 00:27:07.448 "block_size": 512, 00:27:07.448 "num_blocks": 65536, 00:27:07.448 "uuid": "97a12bc7-11d6-49e3-ad91-982db6b48c33", 00:27:07.448 "assigned_rate_limits": { 00:27:07.448 "rw_ios_per_sec": 0, 00:27:07.448 "rw_mbytes_per_sec": 0, 00:27:07.448 "r_mbytes_per_sec": 0, 00:27:07.448 "w_mbytes_per_sec": 0 00:27:07.448 }, 00:27:07.448 "claimed": false, 00:27:07.449 "zoned": false, 00:27:07.449 "supported_io_types": { 00:27:07.449 "read": true, 00:27:07.449 "write": true, 00:27:07.449 "unmap": false, 00:27:07.449 "flush": false, 00:27:07.449 "reset": true, 00:27:07.449 "nvme_admin": false, 00:27:07.449 "nvme_io": false, 00:27:07.449 "nvme_io_md": false, 00:27:07.449 "write_zeroes": true, 00:27:07.449 "zcopy": false, 00:27:07.449 "get_zone_info": false, 00:27:07.449 "zone_management": false, 00:27:07.449 "zone_append": false, 00:27:07.449 "compare": false, 00:27:07.449 "compare_and_write": false, 00:27:07.449 "abort": false, 00:27:07.449 "seek_hole": false, 00:27:07.449 "seek_data": false, 00:27:07.449 "copy": false, 00:27:07.449 "nvme_iov_md": false 00:27:07.449 }, 00:27:07.449 "memory_domains": [ 00:27:07.449 { 00:27:07.449 "dma_device_id": "system", 00:27:07.449 "dma_device_type": 1 00:27:07.449 }, 00:27:07.449 { 00:27:07.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:07.449 "dma_device_type": 2 00:27:07.449 }, 00:27:07.449 { 00:27:07.449 "dma_device_id": "system", 00:27:07.449 "dma_device_type": 1 00:27:07.449 }, 00:27:07.449 { 00:27:07.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:07.449 "dma_device_type": 2 00:27:07.449 }, 00:27:07.449 { 00:27:07.449 "dma_device_id": "system", 00:27:07.449 "dma_device_type": 1 00:27:07.449 }, 00:27:07.449 { 00:27:07.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:07.449 "dma_device_type": 2 00:27:07.449 }, 00:27:07.449 { 00:27:07.449 "dma_device_id": "system", 00:27:07.449 "dma_device_type": 1 00:27:07.449 }, 00:27:07.449 { 00:27:07.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:07.449 "dma_device_type": 2 00:27:07.449 } 00:27:07.449 ], 00:27:07.449 "driver_specific": { 00:27:07.449 "raid": { 00:27:07.449 "uuid": "97a12bc7-11d6-49e3-ad91-982db6b48c33", 00:27:07.449 "strip_size_kb": 0, 00:27:07.449 "state": "online", 00:27:07.449 "raid_level": "raid1", 00:27:07.449 "superblock": false, 00:27:07.449 "num_base_bdevs": 4, 00:27:07.449 "num_base_bdevs_discovered": 4, 00:27:07.449 "num_base_bdevs_operational": 4, 00:27:07.449 "base_bdevs_list": [ 00:27:07.449 { 00:27:07.449 "name": "NewBaseBdev", 00:27:07.449 "uuid": "27af3b22-8b17-4a8b-9400-16d38b14c133", 00:27:07.449 "is_configured": true, 00:27:07.449 "data_offset": 0, 00:27:07.449 "data_size": 65536 00:27:07.449 }, 00:27:07.449 { 00:27:07.449 "name": "BaseBdev2", 00:27:07.449 "uuid": "ca3192dc-82a4-4139-8380-9c60b5ee50d4", 00:27:07.449 "is_configured": true, 00:27:07.449 "data_offset": 0, 00:27:07.449 "data_size": 65536 00:27:07.449 }, 00:27:07.449 { 00:27:07.449 "name": "BaseBdev3", 00:27:07.449 "uuid": "fcdef9ee-4526-4b48-acde-d9e108dd7605", 00:27:07.449 "is_configured": true, 00:27:07.449 "data_offset": 0, 00:27:07.449 "data_size": 65536 00:27:07.449 }, 00:27:07.449 { 00:27:07.449 "name": "BaseBdev4", 00:27:07.449 "uuid": "58b0ef31-6700-4565-b85b-5f5dc15f659e", 00:27:07.449 "is_configured": true, 00:27:07.449 "data_offset": 0, 00:27:07.449 "data_size": 65536 00:27:07.449 } 00:27:07.449 ] 00:27:07.449 } 00:27:07.449 } 00:27:07.449 }' 00:27:07.449 00:53:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:07.449 00:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:27:07.449 BaseBdev2 00:27:07.449 BaseBdev3 00:27:07.449 BaseBdev4' 00:27:07.449 00:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:07.449 00:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:27:07.449 00:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:07.708 00:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:07.709 "name": "NewBaseBdev", 00:27:07.709 "aliases": [ 00:27:07.709 "27af3b22-8b17-4a8b-9400-16d38b14c133" 00:27:07.709 ], 00:27:07.709 "product_name": "Malloc disk", 00:27:07.709 "block_size": 512, 00:27:07.709 "num_blocks": 65536, 00:27:07.709 "uuid": "27af3b22-8b17-4a8b-9400-16d38b14c133", 00:27:07.709 "assigned_rate_limits": { 00:27:07.709 "rw_ios_per_sec": 0, 00:27:07.709 "rw_mbytes_per_sec": 0, 00:27:07.709 "r_mbytes_per_sec": 0, 00:27:07.709 "w_mbytes_per_sec": 0 00:27:07.709 }, 00:27:07.709 "claimed": true, 00:27:07.709 "claim_type": "exclusive_write", 00:27:07.709 "zoned": false, 00:27:07.709 "supported_io_types": { 00:27:07.709 "read": true, 00:27:07.709 "write": true, 00:27:07.709 "unmap": true, 00:27:07.709 "flush": true, 00:27:07.709 "reset": true, 00:27:07.709 "nvme_admin": false, 00:27:07.709 "nvme_io": false, 00:27:07.709 "nvme_io_md": false, 00:27:07.709 "write_zeroes": true, 00:27:07.709 "zcopy": true, 00:27:07.709 "get_zone_info": false, 00:27:07.709 "zone_management": false, 00:27:07.709 "zone_append": false, 00:27:07.709 "compare": false, 00:27:07.709 "compare_and_write": false, 00:27:07.709 "abort": true, 00:27:07.709 "seek_hole": false, 00:27:07.709 "seek_data": false, 00:27:07.709 "copy": true, 00:27:07.709 "nvme_iov_md": false 00:27:07.709 }, 00:27:07.709 "memory_domains": [ 00:27:07.709 { 00:27:07.709 "dma_device_id": "system", 00:27:07.709 "dma_device_type": 1 00:27:07.709 }, 00:27:07.709 { 00:27:07.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:07.709 "dma_device_type": 2 00:27:07.709 } 00:27:07.709 ], 00:27:07.709 "driver_specific": {} 00:27:07.709 }' 00:27:07.709 00:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:07.709 00:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:07.709 00:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:07.709 00:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:07.968 00:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:07.968 00:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:07.968 00:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:07.968 00:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:07.968 00:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:07.968 00:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:07.968 00:53:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:08.227 00:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:08.227 00:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:08.227 00:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:27:08.227 00:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:08.487 00:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:08.487 "name": "BaseBdev2", 00:27:08.487 "aliases": [ 00:27:08.487 "ca3192dc-82a4-4139-8380-9c60b5ee50d4" 00:27:08.487 ], 00:27:08.487 "product_name": "Malloc disk", 00:27:08.487 "block_size": 512, 00:27:08.487 "num_blocks": 65536, 00:27:08.487 "uuid": "ca3192dc-82a4-4139-8380-9c60b5ee50d4", 00:27:08.487 "assigned_rate_limits": { 00:27:08.487 "rw_ios_per_sec": 0, 00:27:08.487 "rw_mbytes_per_sec": 0, 00:27:08.487 "r_mbytes_per_sec": 0, 00:27:08.487 "w_mbytes_per_sec": 0 00:27:08.487 }, 00:27:08.487 "claimed": true, 00:27:08.487 "claim_type": "exclusive_write", 00:27:08.487 "zoned": false, 00:27:08.487 "supported_io_types": { 00:27:08.487 "read": true, 00:27:08.487 "write": true, 00:27:08.487 "unmap": true, 00:27:08.487 "flush": true, 00:27:08.487 "reset": true, 00:27:08.487 "nvme_admin": false, 00:27:08.487 "nvme_io": false, 00:27:08.487 "nvme_io_md": false, 00:27:08.487 "write_zeroes": true, 00:27:08.487 "zcopy": true, 00:27:08.487 "get_zone_info": false, 00:27:08.487 "zone_management": false, 00:27:08.487 "zone_append": false, 00:27:08.487 "compare": false, 00:27:08.487 "compare_and_write": false, 00:27:08.487 "abort": true, 00:27:08.487 "seek_hole": false, 00:27:08.487 "seek_data": false, 00:27:08.487 "copy": true, 00:27:08.487 "nvme_iov_md": false 00:27:08.487 }, 00:27:08.487 "memory_domains": [ 00:27:08.487 { 00:27:08.487 "dma_device_id": "system", 00:27:08.487 "dma_device_type": 1 00:27:08.487 }, 00:27:08.488 { 00:27:08.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:08.488 "dma_device_type": 2 00:27:08.488 } 00:27:08.488 ], 00:27:08.488 "driver_specific": {} 00:27:08.488 }' 00:27:08.488 00:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:08.488 00:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:08.488 00:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:08.488 00:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:08.488 00:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:08.488 00:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:08.488 00:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:08.488 00:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:08.488 00:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:08.747 00:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:08.747 00:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:08.747 00:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:08.747 00:53:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:08.747 00:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:08.747 00:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:27:09.006 00:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:09.006 "name": "BaseBdev3", 00:27:09.006 "aliases": [ 00:27:09.006 "fcdef9ee-4526-4b48-acde-d9e108dd7605" 00:27:09.006 ], 00:27:09.006 "product_name": "Malloc disk", 00:27:09.006 "block_size": 512, 00:27:09.006 "num_blocks": 65536, 00:27:09.006 "uuid": "fcdef9ee-4526-4b48-acde-d9e108dd7605", 00:27:09.006 "assigned_rate_limits": { 00:27:09.006 "rw_ios_per_sec": 0, 00:27:09.006 "rw_mbytes_per_sec": 0, 00:27:09.006 "r_mbytes_per_sec": 0, 00:27:09.006 "w_mbytes_per_sec": 0 00:27:09.006 }, 00:27:09.006 "claimed": true, 00:27:09.006 "claim_type": "exclusive_write", 00:27:09.006 "zoned": false, 00:27:09.006 "supported_io_types": { 00:27:09.006 "read": true, 00:27:09.006 "write": true, 00:27:09.006 "unmap": true, 00:27:09.006 "flush": true, 00:27:09.006 "reset": true, 00:27:09.006 "nvme_admin": false, 00:27:09.006 "nvme_io": false, 00:27:09.006 "nvme_io_md": false, 00:27:09.006 "write_zeroes": true, 00:27:09.006 "zcopy": true, 00:27:09.006 "get_zone_info": false, 00:27:09.006 "zone_management": false, 00:27:09.006 "zone_append": false, 00:27:09.006 "compare": false, 00:27:09.006 "compare_and_write": false, 00:27:09.006 "abort": true, 00:27:09.006 "seek_hole": false, 00:27:09.006 "seek_data": false, 00:27:09.006 "copy": true, 00:27:09.006 "nvme_iov_md": false 00:27:09.006 }, 00:27:09.006 "memory_domains": [ 00:27:09.006 { 00:27:09.006 "dma_device_id": "system", 00:27:09.006 "dma_device_type": 1 00:27:09.006 }, 00:27:09.006 { 00:27:09.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:09.006 "dma_device_type": 2 00:27:09.006 } 00:27:09.006 ], 00:27:09.006 "driver_specific": {} 00:27:09.006 }' 00:27:09.006 00:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:09.006 00:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:09.006 00:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:09.006 00:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:09.265 00:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:09.265 00:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:09.265 00:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:09.265 00:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:09.265 00:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:09.265 00:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:09.265 00:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:09.265 00:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:09.265 00:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:09.524 00:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:27:09.524 00:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:09.784 00:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:09.784 "name": "BaseBdev4", 00:27:09.784 "aliases": [ 00:27:09.784 "58b0ef31-6700-4565-b85b-5f5dc15f659e" 00:27:09.784 ], 00:27:09.784 "product_name": "Malloc disk", 00:27:09.784 "block_size": 512, 00:27:09.784 "num_blocks": 65536, 00:27:09.784 "uuid": "58b0ef31-6700-4565-b85b-5f5dc15f659e", 00:27:09.784 "assigned_rate_limits": { 00:27:09.784 "rw_ios_per_sec": 0, 00:27:09.784 "rw_mbytes_per_sec": 0, 00:27:09.784 "r_mbytes_per_sec": 0, 00:27:09.784 "w_mbytes_per_sec": 0 00:27:09.784 }, 00:27:09.784 "claimed": true, 00:27:09.784 "claim_type": "exclusive_write", 00:27:09.784 "zoned": false, 00:27:09.784 "supported_io_types": { 00:27:09.784 "read": true, 00:27:09.784 "write": true, 00:27:09.784 "unmap": true, 00:27:09.784 "flush": true, 00:27:09.784 "reset": true, 00:27:09.784 "nvme_admin": false, 00:27:09.784 "nvme_io": false, 00:27:09.784 "nvme_io_md": false, 00:27:09.784 "write_zeroes": true, 00:27:09.784 "zcopy": true, 00:27:09.784 "get_zone_info": false, 00:27:09.784 "zone_management": false, 00:27:09.784 "zone_append": false, 00:27:09.784 "compare": false, 00:27:09.784 "compare_and_write": false, 00:27:09.784 "abort": true, 00:27:09.784 "seek_hole": false, 00:27:09.784 "seek_data": false, 00:27:09.784 "copy": true, 00:27:09.784 "nvme_iov_md": false 00:27:09.784 }, 00:27:09.784 "memory_domains": [ 00:27:09.784 { 00:27:09.784 "dma_device_id": "system", 00:27:09.784 "dma_device_type": 1 00:27:09.784 }, 00:27:09.784 { 00:27:09.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:09.784 "dma_device_type": 2 00:27:09.784 } 00:27:09.784 ], 00:27:09.784 "driver_specific": {} 00:27:09.784 }' 00:27:09.784 00:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:09.784 00:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:09.784 00:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:09.784 00:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:09.784 00:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:09.784 00:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:09.784 00:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:09.784 00:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:10.043 00:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:10.043 00:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:10.043 00:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:10.043 00:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:10.043 00:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:10.302 [2024-07-25 00:53:32.806976] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:10.302 [2024-07-25 00:53:32.807261] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:27:10.302 [2024-07-25 00:53:32.807433] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:10.302 [2024-07-25 00:53:32.807822] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:10.302 [2024-07-25 00:53:32.807934] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name Existed_Raid, state offline 00:27:10.302 00:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 141171 00:27:10.302 00:53:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 141171 ']' 00:27:10.302 00:53:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 141171 00:27:10.302 00:53:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:27:10.302 00:53:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:10.302 00:53:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 141171 00:27:10.302 00:53:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:10.302 00:53:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:10.302 00:53:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 141171' 00:27:10.302 killing process with pid 141171 00:27:10.302 00:53:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 141171 00:27:10.302 [2024-07-25 00:53:32.858121] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:10.302 00:53:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 141171 00:27:10.869 [2024-07-25 00:53:33.279526] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:27:12.246 00:27:12.246 real 0m33.137s 00:27:12.246 user 0m59.234s 00:27:12.246 sys 0m5.353s 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.246 ************************************ 00:27:12.246 END TEST raid_state_function_test 00:27:12.246 ************************************ 00:27:12.246 00:53:34 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:27:12.246 00:53:34 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:27:12.246 00:53:34 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:12.246 00:53:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:12.246 ************************************ 00:27:12.246 START TEST raid_state_function_test_sb 00:27:12.246 ************************************ 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 true 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 
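The checks traced above follow one pattern: dump the raid bdev with bdev_raid_get_bdevs, pick out the configured base bdevs, then dump each one with bdev_get_bdevs and compare individual fields through jq. A minimal stand-alone sketch of that pattern, assuming a bdev_svc instance is still listening on the same socket and a raid bdev named Existed_Raid exists (the rpc.py path and socket are the ones used throughout this run; the rpc/sock shorthand variables are introduced here for readability only):
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # overall raid state, level and member counts, the fields verify_raid_bdev_state asserts on
  $rpc -s $sock bdev_raid_get_bdevs all | jq '.[] | select(.name == "Existed_Raid") | {state, raid_level, num_base_bdevs_discovered, num_base_bdevs_operational}'
  # per-member fields compared at bdev_raid.sh@205-208; md_size/md_interleave/dif_type come back null for plain malloc disks, as in the dumps above
  for name in $($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").base_bdevs_list[] | select(.is_configured == true).name'); do
    $rpc -s $sock bdev_get_bdevs -b "$name" | jq '.[] | {name, block_size, md_size, md_interleave, dif_type}'
  done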
00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=142266 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 142266' 00:27:12.246 Process raid pid: 142266 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 142266 /var/tmp/spdk-raid.sock 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 142266 ']' 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:12.246 00:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:12.247 00:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:12.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:12.247 00:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:12.247 00:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:12.247 [2024-07-25 00:53:34.838652] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:27:12.247 [2024-07-25 00:53:34.839133] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:12.506 [2024-07-25 00:53:35.025084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.765 [2024-07-25 00:53:35.280717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.093 [2024-07-25 00:53:35.491647] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:13.093 00:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:13.093 00:53:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:27:13.093 00:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:13.366 [2024-07-25 00:53:35.969370] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:13.366 [2024-07-25 00:53:35.969710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:13.366 [2024-07-25 00:53:35.969794] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:13.366 [2024-07-25 00:53:35.969849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:13.366 [2024-07-25 00:53:35.969877] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:13.366 [2024-07-25 00:53:35.969953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:13.366 [2024-07-25 00:53:35.969982] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:13.366 [2024-07-25 00:53:35.970023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:13.366 00:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:13.366 00:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:13.366 00:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:13.366 00:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:13.366 00:53:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:13.366 00:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:13.366 00:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:13.366 00:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:13.366 00:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:13.366 00:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:13.366 00:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.366 00:53:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:13.625 00:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:13.625 "name": "Existed_Raid", 00:27:13.625 "uuid": "4058fc57-7879-488f-aabc-84b904924196", 00:27:13.625 "strip_size_kb": 0, 00:27:13.625 "state": "configuring", 00:27:13.625 "raid_level": "raid1", 00:27:13.625 "superblock": true, 00:27:13.625 "num_base_bdevs": 4, 00:27:13.625 "num_base_bdevs_discovered": 0, 00:27:13.625 "num_base_bdevs_operational": 4, 00:27:13.625 "base_bdevs_list": [ 00:27:13.625 { 00:27:13.625 "name": "BaseBdev1", 00:27:13.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:13.625 "is_configured": false, 00:27:13.625 "data_offset": 0, 00:27:13.625 "data_size": 0 00:27:13.625 }, 00:27:13.625 { 00:27:13.625 "name": "BaseBdev2", 00:27:13.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:13.625 "is_configured": false, 00:27:13.625 "data_offset": 0, 00:27:13.625 "data_size": 0 00:27:13.625 }, 00:27:13.625 { 00:27:13.625 "name": "BaseBdev3", 00:27:13.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:13.625 "is_configured": false, 00:27:13.625 "data_offset": 0, 00:27:13.625 "data_size": 0 00:27:13.625 }, 00:27:13.625 { 00:27:13.625 "name": "BaseBdev4", 00:27:13.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:13.625 "is_configured": false, 00:27:13.625 "data_offset": 0, 00:27:13.625 "data_size": 0 00:27:13.625 } 00:27:13.625 ] 00:27:13.625 }' 00:27:13.625 00:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:13.625 00:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:14.193 00:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:14.453 [2024-07-25 00:53:37.013515] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:14.453 [2024-07-25 00:53:37.013782] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:27:14.453 00:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:14.711 [2024-07-25 00:53:37.205530] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:14.711 [2024-07-25 00:53:37.205818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 
00:27:14.712 [2024-07-25 00:53:37.205904] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:14.712 [2024-07-25 00:53:37.205985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:14.712 [2024-07-25 00:53:37.206014] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:14.712 [2024-07-25 00:53:37.206104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:14.712 [2024-07-25 00:53:37.206135] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:14.712 [2024-07-25 00:53:37.206175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:14.712 00:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:27:14.971 [2024-07-25 00:53:37.429887] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:14.971 BaseBdev1 00:27:14.971 00:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:27:14.971 00:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:27:14.971 00:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:14.971 00:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:27:14.971 00:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:14.971 00:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:14.971 00:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:15.230 00:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:15.230 [ 00:27:15.230 { 00:27:15.230 "name": "BaseBdev1", 00:27:15.230 "aliases": [ 00:27:15.230 "5fa2ba5d-8610-4d97-bfe0-3682d4e7fc3c" 00:27:15.230 ], 00:27:15.230 "product_name": "Malloc disk", 00:27:15.230 "block_size": 512, 00:27:15.230 "num_blocks": 65536, 00:27:15.230 "uuid": "5fa2ba5d-8610-4d97-bfe0-3682d4e7fc3c", 00:27:15.230 "assigned_rate_limits": { 00:27:15.230 "rw_ios_per_sec": 0, 00:27:15.230 "rw_mbytes_per_sec": 0, 00:27:15.230 "r_mbytes_per_sec": 0, 00:27:15.230 "w_mbytes_per_sec": 0 00:27:15.230 }, 00:27:15.230 "claimed": true, 00:27:15.230 "claim_type": "exclusive_write", 00:27:15.230 "zoned": false, 00:27:15.230 "supported_io_types": { 00:27:15.230 "read": true, 00:27:15.230 "write": true, 00:27:15.230 "unmap": true, 00:27:15.230 "flush": true, 00:27:15.230 "reset": true, 00:27:15.230 "nvme_admin": false, 00:27:15.230 "nvme_io": false, 00:27:15.230 "nvme_io_md": false, 00:27:15.230 "write_zeroes": true, 00:27:15.230 "zcopy": true, 00:27:15.230 "get_zone_info": false, 00:27:15.230 "zone_management": false, 00:27:15.230 "zone_append": false, 00:27:15.230 "compare": false, 00:27:15.230 "compare_and_write": false, 00:27:15.230 "abort": true, 00:27:15.230 "seek_hole": false, 00:27:15.230 "seek_data": false, 00:27:15.230 "copy": true, 00:27:15.230 "nvme_iov_md": false 00:27:15.230 }, 00:27:15.230 
"memory_domains": [ 00:27:15.230 { 00:27:15.230 "dma_device_id": "system", 00:27:15.230 "dma_device_type": 1 00:27:15.230 }, 00:27:15.230 { 00:27:15.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:15.230 "dma_device_type": 2 00:27:15.230 } 00:27:15.230 ], 00:27:15.230 "driver_specific": {} 00:27:15.230 } 00:27:15.230 ] 00:27:15.230 00:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:27:15.230 00:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:15.230 00:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:15.230 00:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:15.230 00:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:15.230 00:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:15.230 00:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:15.230 00:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:15.230 00:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:15.230 00:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:15.230 00:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:15.230 00:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:15.230 00:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:15.489 00:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:15.489 "name": "Existed_Raid", 00:27:15.489 "uuid": "37b30d2e-0f6b-4034-8b38-8a0ab5f3b72b", 00:27:15.489 "strip_size_kb": 0, 00:27:15.489 "state": "configuring", 00:27:15.489 "raid_level": "raid1", 00:27:15.489 "superblock": true, 00:27:15.490 "num_base_bdevs": 4, 00:27:15.490 "num_base_bdevs_discovered": 1, 00:27:15.490 "num_base_bdevs_operational": 4, 00:27:15.490 "base_bdevs_list": [ 00:27:15.490 { 00:27:15.490 "name": "BaseBdev1", 00:27:15.490 "uuid": "5fa2ba5d-8610-4d97-bfe0-3682d4e7fc3c", 00:27:15.490 "is_configured": true, 00:27:15.490 "data_offset": 2048, 00:27:15.490 "data_size": 63488 00:27:15.490 }, 00:27:15.490 { 00:27:15.490 "name": "BaseBdev2", 00:27:15.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:15.490 "is_configured": false, 00:27:15.490 "data_offset": 0, 00:27:15.490 "data_size": 0 00:27:15.490 }, 00:27:15.490 { 00:27:15.490 "name": "BaseBdev3", 00:27:15.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:15.490 "is_configured": false, 00:27:15.490 "data_offset": 0, 00:27:15.490 "data_size": 0 00:27:15.490 }, 00:27:15.490 { 00:27:15.490 "name": "BaseBdev4", 00:27:15.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:15.490 "is_configured": false, 00:27:15.490 "data_offset": 0, 00:27:15.490 "data_size": 0 00:27:15.490 } 00:27:15.490 ] 00:27:15.490 }' 00:27:15.490 00:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:15.490 00:53:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:27:16.059 00:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:16.318 [2024-07-25 00:53:38.874179] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:16.318 [2024-07-25 00:53:38.874468] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:27:16.318 00:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:16.577 [2024-07-25 00:53:39.130289] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:16.577 [2024-07-25 00:53:39.132448] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:16.577 [2024-07-25 00:53:39.132616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:16.577 [2024-07-25 00:53:39.132708] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:16.577 [2024-07-25 00:53:39.132762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:16.577 [2024-07-25 00:53:39.132790] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:16.577 [2024-07-25 00:53:39.132880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:16.577 00:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:27:16.577 00:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:16.577 00:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:16.577 00:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:16.577 00:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:16.577 00:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:16.577 00:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:16.577 00:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:16.577 00:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:16.577 00:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:16.577 00:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:16.577 00:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:16.577 00:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:16.577 00:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:16.836 00:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:16.836 "name": "Existed_Raid", 00:27:16.836 
"uuid": "07a7ab40-7d20-4fdc-917e-167430f4f1a1", 00:27:16.836 "strip_size_kb": 0, 00:27:16.836 "state": "configuring", 00:27:16.836 "raid_level": "raid1", 00:27:16.836 "superblock": true, 00:27:16.836 "num_base_bdevs": 4, 00:27:16.836 "num_base_bdevs_discovered": 1, 00:27:16.836 "num_base_bdevs_operational": 4, 00:27:16.836 "base_bdevs_list": [ 00:27:16.836 { 00:27:16.836 "name": "BaseBdev1", 00:27:16.836 "uuid": "5fa2ba5d-8610-4d97-bfe0-3682d4e7fc3c", 00:27:16.836 "is_configured": true, 00:27:16.836 "data_offset": 2048, 00:27:16.836 "data_size": 63488 00:27:16.836 }, 00:27:16.836 { 00:27:16.836 "name": "BaseBdev2", 00:27:16.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:16.836 "is_configured": false, 00:27:16.836 "data_offset": 0, 00:27:16.836 "data_size": 0 00:27:16.836 }, 00:27:16.836 { 00:27:16.836 "name": "BaseBdev3", 00:27:16.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:16.836 "is_configured": false, 00:27:16.836 "data_offset": 0, 00:27:16.836 "data_size": 0 00:27:16.836 }, 00:27:16.836 { 00:27:16.836 "name": "BaseBdev4", 00:27:16.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:16.836 "is_configured": false, 00:27:16.836 "data_offset": 0, 00:27:16.836 "data_size": 0 00:27:16.836 } 00:27:16.836 ] 00:27:16.836 }' 00:27:16.836 00:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:16.836 00:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:17.404 00:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:27:17.663 [2024-07-25 00:53:40.256938] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:17.663 BaseBdev2 00:27:17.663 00:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:27:17.663 00:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:27:17.663 00:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:17.663 00:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:27:17.663 00:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:17.663 00:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:17.663 00:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:17.922 00:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:18.181 [ 00:27:18.181 { 00:27:18.181 "name": "BaseBdev2", 00:27:18.181 "aliases": [ 00:27:18.181 "73c5116e-9910-437a-9eed-b436294d14a1" 00:27:18.181 ], 00:27:18.181 "product_name": "Malloc disk", 00:27:18.182 "block_size": 512, 00:27:18.182 "num_blocks": 65536, 00:27:18.182 "uuid": "73c5116e-9910-437a-9eed-b436294d14a1", 00:27:18.182 "assigned_rate_limits": { 00:27:18.182 "rw_ios_per_sec": 0, 00:27:18.182 "rw_mbytes_per_sec": 0, 00:27:18.182 "r_mbytes_per_sec": 0, 00:27:18.182 "w_mbytes_per_sec": 0 00:27:18.182 }, 00:27:18.182 "claimed": true, 00:27:18.182 "claim_type": "exclusive_write", 00:27:18.182 "zoned": false, 00:27:18.182 
"supported_io_types": { 00:27:18.182 "read": true, 00:27:18.182 "write": true, 00:27:18.182 "unmap": true, 00:27:18.182 "flush": true, 00:27:18.182 "reset": true, 00:27:18.182 "nvme_admin": false, 00:27:18.182 "nvme_io": false, 00:27:18.182 "nvme_io_md": false, 00:27:18.182 "write_zeroes": true, 00:27:18.182 "zcopy": true, 00:27:18.182 "get_zone_info": false, 00:27:18.182 "zone_management": false, 00:27:18.182 "zone_append": false, 00:27:18.182 "compare": false, 00:27:18.182 "compare_and_write": false, 00:27:18.182 "abort": true, 00:27:18.182 "seek_hole": false, 00:27:18.182 "seek_data": false, 00:27:18.182 "copy": true, 00:27:18.182 "nvme_iov_md": false 00:27:18.182 }, 00:27:18.182 "memory_domains": [ 00:27:18.182 { 00:27:18.182 "dma_device_id": "system", 00:27:18.182 "dma_device_type": 1 00:27:18.182 }, 00:27:18.182 { 00:27:18.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:18.182 "dma_device_type": 2 00:27:18.182 } 00:27:18.182 ], 00:27:18.182 "driver_specific": {} 00:27:18.182 } 00:27:18.182 ] 00:27:18.182 00:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:27:18.182 00:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:27:18.182 00:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:18.182 00:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:18.182 00:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:18.182 00:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:18.182 00:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:18.182 00:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:18.182 00:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:18.182 00:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:18.182 00:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:18.182 00:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:18.182 00:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:18.182 00:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:18.182 00:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:18.441 00:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:18.441 "name": "Existed_Raid", 00:27:18.441 "uuid": "07a7ab40-7d20-4fdc-917e-167430f4f1a1", 00:27:18.441 "strip_size_kb": 0, 00:27:18.441 "state": "configuring", 00:27:18.441 "raid_level": "raid1", 00:27:18.441 "superblock": true, 00:27:18.441 "num_base_bdevs": 4, 00:27:18.441 "num_base_bdevs_discovered": 2, 00:27:18.441 "num_base_bdevs_operational": 4, 00:27:18.441 "base_bdevs_list": [ 00:27:18.441 { 00:27:18.441 "name": "BaseBdev1", 00:27:18.441 "uuid": "5fa2ba5d-8610-4d97-bfe0-3682d4e7fc3c", 00:27:18.441 "is_configured": true, 00:27:18.441 "data_offset": 2048, 00:27:18.441 
"data_size": 63488 00:27:18.441 }, 00:27:18.441 { 00:27:18.441 "name": "BaseBdev2", 00:27:18.441 "uuid": "73c5116e-9910-437a-9eed-b436294d14a1", 00:27:18.441 "is_configured": true, 00:27:18.441 "data_offset": 2048, 00:27:18.441 "data_size": 63488 00:27:18.441 }, 00:27:18.441 { 00:27:18.441 "name": "BaseBdev3", 00:27:18.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:18.441 "is_configured": false, 00:27:18.441 "data_offset": 0, 00:27:18.441 "data_size": 0 00:27:18.441 }, 00:27:18.441 { 00:27:18.441 "name": "BaseBdev4", 00:27:18.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:18.441 "is_configured": false, 00:27:18.441 "data_offset": 0, 00:27:18.441 "data_size": 0 00:27:18.441 } 00:27:18.441 ] 00:27:18.441 }' 00:27:18.441 00:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:18.441 00:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:19.009 00:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:27:19.269 [2024-07-25 00:53:41.796733] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:19.269 BaseBdev3 00:27:19.269 00:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:27:19.269 00:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:27:19.269 00:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:19.269 00:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:27:19.269 00:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:19.269 00:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:19.269 00:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:19.529 00:53:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:19.530 [ 00:27:19.530 { 00:27:19.530 "name": "BaseBdev3", 00:27:19.530 "aliases": [ 00:27:19.530 "27dd2821-d932-4030-af9b-14a214e82641" 00:27:19.530 ], 00:27:19.530 "product_name": "Malloc disk", 00:27:19.530 "block_size": 512, 00:27:19.530 "num_blocks": 65536, 00:27:19.530 "uuid": "27dd2821-d932-4030-af9b-14a214e82641", 00:27:19.530 "assigned_rate_limits": { 00:27:19.530 "rw_ios_per_sec": 0, 00:27:19.530 "rw_mbytes_per_sec": 0, 00:27:19.530 "r_mbytes_per_sec": 0, 00:27:19.530 "w_mbytes_per_sec": 0 00:27:19.530 }, 00:27:19.530 "claimed": true, 00:27:19.530 "claim_type": "exclusive_write", 00:27:19.530 "zoned": false, 00:27:19.530 "supported_io_types": { 00:27:19.530 "read": true, 00:27:19.530 "write": true, 00:27:19.530 "unmap": true, 00:27:19.530 "flush": true, 00:27:19.530 "reset": true, 00:27:19.530 "nvme_admin": false, 00:27:19.530 "nvme_io": false, 00:27:19.530 "nvme_io_md": false, 00:27:19.530 "write_zeroes": true, 00:27:19.530 "zcopy": true, 00:27:19.530 "get_zone_info": false, 00:27:19.530 "zone_management": false, 00:27:19.530 "zone_append": false, 00:27:19.530 "compare": false, 00:27:19.530 "compare_and_write": false, 00:27:19.530 "abort": 
true, 00:27:19.530 "seek_hole": false, 00:27:19.530 "seek_data": false, 00:27:19.530 "copy": true, 00:27:19.530 "nvme_iov_md": false 00:27:19.530 }, 00:27:19.530 "memory_domains": [ 00:27:19.530 { 00:27:19.530 "dma_device_id": "system", 00:27:19.530 "dma_device_type": 1 00:27:19.530 }, 00:27:19.530 { 00:27:19.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:19.530 "dma_device_type": 2 00:27:19.530 } 00:27:19.530 ], 00:27:19.530 "driver_specific": {} 00:27:19.530 } 00:27:19.530 ] 00:27:19.790 00:53:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:27:19.790 00:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:27:19.790 00:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:19.790 00:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:19.790 00:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:19.790 00:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:19.790 00:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:19.790 00:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:19.790 00:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:19.790 00:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:19.790 00:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:19.790 00:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:19.790 00:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:19.790 00:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:19.790 00:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:19.790 00:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:19.790 "name": "Existed_Raid", 00:27:19.790 "uuid": "07a7ab40-7d20-4fdc-917e-167430f4f1a1", 00:27:19.790 "strip_size_kb": 0, 00:27:19.790 "state": "configuring", 00:27:19.790 "raid_level": "raid1", 00:27:19.790 "superblock": true, 00:27:19.790 "num_base_bdevs": 4, 00:27:19.790 "num_base_bdevs_discovered": 3, 00:27:19.790 "num_base_bdevs_operational": 4, 00:27:19.790 "base_bdevs_list": [ 00:27:19.790 { 00:27:19.790 "name": "BaseBdev1", 00:27:19.790 "uuid": "5fa2ba5d-8610-4d97-bfe0-3682d4e7fc3c", 00:27:19.790 "is_configured": true, 00:27:19.790 "data_offset": 2048, 00:27:19.790 "data_size": 63488 00:27:19.790 }, 00:27:19.790 { 00:27:19.790 "name": "BaseBdev2", 00:27:19.790 "uuid": "73c5116e-9910-437a-9eed-b436294d14a1", 00:27:19.790 "is_configured": true, 00:27:19.790 "data_offset": 2048, 00:27:19.790 "data_size": 63488 00:27:19.790 }, 00:27:19.790 { 00:27:19.790 "name": "BaseBdev3", 00:27:19.790 "uuid": "27dd2821-d932-4030-af9b-14a214e82641", 00:27:19.790 "is_configured": true, 00:27:19.790 "data_offset": 2048, 00:27:19.790 "data_size": 63488 00:27:19.790 }, 00:27:19.790 { 00:27:19.790 "name": "BaseBdev4", 
00:27:19.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:19.790 "is_configured": false, 00:27:19.790 "data_offset": 0, 00:27:19.790 "data_size": 0 00:27:19.790 } 00:27:19.790 ] 00:27:19.790 }' 00:27:20.070 00:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:20.070 00:53:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:20.330 00:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:27:20.589 [2024-07-25 00:53:43.209976] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:20.589 [2024-07-25 00:53:43.210543] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:27:20.589 [2024-07-25 00:53:43.210672] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:20.589 [2024-07-25 00:53:43.210861] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:27:20.589 [2024-07-25 00:53:43.211353] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:27:20.589 BaseBdev4 00:27:20.589 [2024-07-25 00:53:43.211490] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:27:20.589 [2024-07-25 00:53:43.211738] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:20.589 00:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:27:20.589 00:53:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:27:20.589 00:53:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:20.589 00:53:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:27:20.590 00:53:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:20.590 00:53:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:20.590 00:53:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:20.850 00:53:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:21.111 [ 00:27:21.111 { 00:27:21.111 "name": "BaseBdev4", 00:27:21.111 "aliases": [ 00:27:21.111 "1fb7e281-e1e7-4557-bfc4-267e8e7d92a9" 00:27:21.111 ], 00:27:21.111 "product_name": "Malloc disk", 00:27:21.111 "block_size": 512, 00:27:21.111 "num_blocks": 65536, 00:27:21.111 "uuid": "1fb7e281-e1e7-4557-bfc4-267e8e7d92a9", 00:27:21.111 "assigned_rate_limits": { 00:27:21.111 "rw_ios_per_sec": 0, 00:27:21.111 "rw_mbytes_per_sec": 0, 00:27:21.112 "r_mbytes_per_sec": 0, 00:27:21.112 "w_mbytes_per_sec": 0 00:27:21.112 }, 00:27:21.112 "claimed": true, 00:27:21.112 "claim_type": "exclusive_write", 00:27:21.112 "zoned": false, 00:27:21.112 "supported_io_types": { 00:27:21.112 "read": true, 00:27:21.112 "write": true, 00:27:21.112 "unmap": true, 00:27:21.112 "flush": true, 00:27:21.112 "reset": true, 00:27:21.112 "nvme_admin": false, 00:27:21.112 "nvme_io": false, 00:27:21.112 "nvme_io_md": false, 00:27:21.112 "write_zeroes": 
true, 00:27:21.112 "zcopy": true, 00:27:21.112 "get_zone_info": false, 00:27:21.112 "zone_management": false, 00:27:21.112 "zone_append": false, 00:27:21.112 "compare": false, 00:27:21.112 "compare_and_write": false, 00:27:21.112 "abort": true, 00:27:21.112 "seek_hole": false, 00:27:21.112 "seek_data": false, 00:27:21.112 "copy": true, 00:27:21.112 "nvme_iov_md": false 00:27:21.112 }, 00:27:21.112 "memory_domains": [ 00:27:21.112 { 00:27:21.112 "dma_device_id": "system", 00:27:21.112 "dma_device_type": 1 00:27:21.112 }, 00:27:21.112 { 00:27:21.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:21.112 "dma_device_type": 2 00:27:21.112 } 00:27:21.112 ], 00:27:21.112 "driver_specific": {} 00:27:21.112 } 00:27:21.112 ] 00:27:21.112 00:53:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:27:21.112 00:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:27:21.112 00:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:21.112 00:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:27:21.112 00:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:21.112 00:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:21.112 00:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:21.112 00:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:21.112 00:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:21.112 00:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:21.112 00:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:21.112 00:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:21.112 00:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:21.112 00:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:21.112 00:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:21.371 00:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:21.371 "name": "Existed_Raid", 00:27:21.371 "uuid": "07a7ab40-7d20-4fdc-917e-167430f4f1a1", 00:27:21.371 "strip_size_kb": 0, 00:27:21.371 "state": "online", 00:27:21.371 "raid_level": "raid1", 00:27:21.371 "superblock": true, 00:27:21.371 "num_base_bdevs": 4, 00:27:21.371 "num_base_bdevs_discovered": 4, 00:27:21.371 "num_base_bdevs_operational": 4, 00:27:21.371 "base_bdevs_list": [ 00:27:21.371 { 00:27:21.371 "name": "BaseBdev1", 00:27:21.371 "uuid": "5fa2ba5d-8610-4d97-bfe0-3682d4e7fc3c", 00:27:21.371 "is_configured": true, 00:27:21.371 "data_offset": 2048, 00:27:21.371 "data_size": 63488 00:27:21.371 }, 00:27:21.371 { 00:27:21.371 "name": "BaseBdev2", 00:27:21.371 "uuid": "73c5116e-9910-437a-9eed-b436294d14a1", 00:27:21.371 "is_configured": true, 00:27:21.371 "data_offset": 2048, 00:27:21.371 "data_size": 63488 00:27:21.371 }, 00:27:21.371 { 00:27:21.371 "name": "BaseBdev3", 
00:27:21.371 "uuid": "27dd2821-d932-4030-af9b-14a214e82641", 00:27:21.371 "is_configured": true, 00:27:21.371 "data_offset": 2048, 00:27:21.371 "data_size": 63488 00:27:21.371 }, 00:27:21.371 { 00:27:21.371 "name": "BaseBdev4", 00:27:21.371 "uuid": "1fb7e281-e1e7-4557-bfc4-267e8e7d92a9", 00:27:21.371 "is_configured": true, 00:27:21.371 "data_offset": 2048, 00:27:21.371 "data_size": 63488 00:27:21.371 } 00:27:21.371 ] 00:27:21.371 }' 00:27:21.371 00:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:21.371 00:53:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:21.940 00:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:27:21.940 00:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:27:21.940 00:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:21.940 00:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:21.940 00:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:21.940 00:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:27:21.940 00:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:21.940 00:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:27:22.199 [2024-07-25 00:53:44.694582] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:22.199 00:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:22.199 "name": "Existed_Raid", 00:27:22.199 "aliases": [ 00:27:22.199 "07a7ab40-7d20-4fdc-917e-167430f4f1a1" 00:27:22.199 ], 00:27:22.199 "product_name": "Raid Volume", 00:27:22.199 "block_size": 512, 00:27:22.199 "num_blocks": 63488, 00:27:22.199 "uuid": "07a7ab40-7d20-4fdc-917e-167430f4f1a1", 00:27:22.199 "assigned_rate_limits": { 00:27:22.199 "rw_ios_per_sec": 0, 00:27:22.199 "rw_mbytes_per_sec": 0, 00:27:22.199 "r_mbytes_per_sec": 0, 00:27:22.199 "w_mbytes_per_sec": 0 00:27:22.199 }, 00:27:22.199 "claimed": false, 00:27:22.199 "zoned": false, 00:27:22.199 "supported_io_types": { 00:27:22.199 "read": true, 00:27:22.199 "write": true, 00:27:22.199 "unmap": false, 00:27:22.199 "flush": false, 00:27:22.199 "reset": true, 00:27:22.199 "nvme_admin": false, 00:27:22.199 "nvme_io": false, 00:27:22.199 "nvme_io_md": false, 00:27:22.199 "write_zeroes": true, 00:27:22.200 "zcopy": false, 00:27:22.200 "get_zone_info": false, 00:27:22.200 "zone_management": false, 00:27:22.200 "zone_append": false, 00:27:22.200 "compare": false, 00:27:22.200 "compare_and_write": false, 00:27:22.200 "abort": false, 00:27:22.200 "seek_hole": false, 00:27:22.200 "seek_data": false, 00:27:22.200 "copy": false, 00:27:22.200 "nvme_iov_md": false 00:27:22.200 }, 00:27:22.200 "memory_domains": [ 00:27:22.200 { 00:27:22.200 "dma_device_id": "system", 00:27:22.200 "dma_device_type": 1 00:27:22.200 }, 00:27:22.200 { 00:27:22.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:22.200 "dma_device_type": 2 00:27:22.200 }, 00:27:22.200 { 00:27:22.200 "dma_device_id": "system", 00:27:22.200 "dma_device_type": 1 00:27:22.200 }, 00:27:22.200 { 00:27:22.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:27:22.200 "dma_device_type": 2 00:27:22.200 }, 00:27:22.200 { 00:27:22.200 "dma_device_id": "system", 00:27:22.200 "dma_device_type": 1 00:27:22.200 }, 00:27:22.200 { 00:27:22.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:22.200 "dma_device_type": 2 00:27:22.200 }, 00:27:22.200 { 00:27:22.200 "dma_device_id": "system", 00:27:22.200 "dma_device_type": 1 00:27:22.200 }, 00:27:22.200 { 00:27:22.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:22.200 "dma_device_type": 2 00:27:22.200 } 00:27:22.200 ], 00:27:22.200 "driver_specific": { 00:27:22.200 "raid": { 00:27:22.200 "uuid": "07a7ab40-7d20-4fdc-917e-167430f4f1a1", 00:27:22.200 "strip_size_kb": 0, 00:27:22.200 "state": "online", 00:27:22.200 "raid_level": "raid1", 00:27:22.200 "superblock": true, 00:27:22.200 "num_base_bdevs": 4, 00:27:22.200 "num_base_bdevs_discovered": 4, 00:27:22.200 "num_base_bdevs_operational": 4, 00:27:22.200 "base_bdevs_list": [ 00:27:22.200 { 00:27:22.200 "name": "BaseBdev1", 00:27:22.200 "uuid": "5fa2ba5d-8610-4d97-bfe0-3682d4e7fc3c", 00:27:22.200 "is_configured": true, 00:27:22.200 "data_offset": 2048, 00:27:22.200 "data_size": 63488 00:27:22.200 }, 00:27:22.200 { 00:27:22.200 "name": "BaseBdev2", 00:27:22.200 "uuid": "73c5116e-9910-437a-9eed-b436294d14a1", 00:27:22.200 "is_configured": true, 00:27:22.200 "data_offset": 2048, 00:27:22.200 "data_size": 63488 00:27:22.200 }, 00:27:22.200 { 00:27:22.200 "name": "BaseBdev3", 00:27:22.200 "uuid": "27dd2821-d932-4030-af9b-14a214e82641", 00:27:22.200 "is_configured": true, 00:27:22.200 "data_offset": 2048, 00:27:22.200 "data_size": 63488 00:27:22.200 }, 00:27:22.200 { 00:27:22.200 "name": "BaseBdev4", 00:27:22.200 "uuid": "1fb7e281-e1e7-4557-bfc4-267e8e7d92a9", 00:27:22.200 "is_configured": true, 00:27:22.200 "data_offset": 2048, 00:27:22.200 "data_size": 63488 00:27:22.200 } 00:27:22.200 ] 00:27:22.200 } 00:27:22.200 } 00:27:22.200 }' 00:27:22.200 00:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:22.200 00:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:27:22.200 BaseBdev2 00:27:22.200 BaseBdev3 00:27:22.200 BaseBdev4' 00:27:22.200 00:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:22.200 00:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:27:22.200 00:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:22.460 00:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:22.460 "name": "BaseBdev1", 00:27:22.460 "aliases": [ 00:27:22.460 "5fa2ba5d-8610-4d97-bfe0-3682d4e7fc3c" 00:27:22.460 ], 00:27:22.460 "product_name": "Malloc disk", 00:27:22.460 "block_size": 512, 00:27:22.460 "num_blocks": 65536, 00:27:22.460 "uuid": "5fa2ba5d-8610-4d97-bfe0-3682d4e7fc3c", 00:27:22.460 "assigned_rate_limits": { 00:27:22.460 "rw_ios_per_sec": 0, 00:27:22.460 "rw_mbytes_per_sec": 0, 00:27:22.460 "r_mbytes_per_sec": 0, 00:27:22.460 "w_mbytes_per_sec": 0 00:27:22.460 }, 00:27:22.460 "claimed": true, 00:27:22.460 "claim_type": "exclusive_write", 00:27:22.460 "zoned": false, 00:27:22.460 "supported_io_types": { 00:27:22.460 "read": true, 00:27:22.460 "write": true, 00:27:22.460 "unmap": true, 00:27:22.460 "flush": true, 00:27:22.460 "reset": true, 
00:27:22.460 "nvme_admin": false, 00:27:22.460 "nvme_io": false, 00:27:22.460 "nvme_io_md": false, 00:27:22.460 "write_zeroes": true, 00:27:22.460 "zcopy": true, 00:27:22.460 "get_zone_info": false, 00:27:22.460 "zone_management": false, 00:27:22.460 "zone_append": false, 00:27:22.460 "compare": false, 00:27:22.460 "compare_and_write": false, 00:27:22.460 "abort": true, 00:27:22.460 "seek_hole": false, 00:27:22.460 "seek_data": false, 00:27:22.460 "copy": true, 00:27:22.460 "nvme_iov_md": false 00:27:22.460 }, 00:27:22.460 "memory_domains": [ 00:27:22.460 { 00:27:22.460 "dma_device_id": "system", 00:27:22.460 "dma_device_type": 1 00:27:22.460 }, 00:27:22.460 { 00:27:22.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:22.460 "dma_device_type": 2 00:27:22.460 } 00:27:22.460 ], 00:27:22.460 "driver_specific": {} 00:27:22.460 }' 00:27:22.460 00:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:22.460 00:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:22.460 00:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:22.460 00:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:22.719 00:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:22.719 00:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:22.719 00:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:22.719 00:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:22.719 00:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:22.719 00:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:22.719 00:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:22.719 00:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:22.719 00:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:22.719 00:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:27:22.719 00:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:22.978 00:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:22.978 "name": "BaseBdev2", 00:27:22.978 "aliases": [ 00:27:22.978 "73c5116e-9910-437a-9eed-b436294d14a1" 00:27:22.978 ], 00:27:22.978 "product_name": "Malloc disk", 00:27:22.978 "block_size": 512, 00:27:22.978 "num_blocks": 65536, 00:27:22.978 "uuid": "73c5116e-9910-437a-9eed-b436294d14a1", 00:27:22.979 "assigned_rate_limits": { 00:27:22.979 "rw_ios_per_sec": 0, 00:27:22.979 "rw_mbytes_per_sec": 0, 00:27:22.979 "r_mbytes_per_sec": 0, 00:27:22.979 "w_mbytes_per_sec": 0 00:27:22.979 }, 00:27:22.979 "claimed": true, 00:27:22.979 "claim_type": "exclusive_write", 00:27:22.979 "zoned": false, 00:27:22.979 "supported_io_types": { 00:27:22.979 "read": true, 00:27:22.979 "write": true, 00:27:22.979 "unmap": true, 00:27:22.979 "flush": true, 00:27:22.979 "reset": true, 00:27:22.979 "nvme_admin": false, 00:27:22.979 "nvme_io": false, 00:27:22.979 "nvme_io_md": false, 00:27:22.979 "write_zeroes": true, 00:27:22.979 
"zcopy": true, 00:27:22.979 "get_zone_info": false, 00:27:22.979 "zone_management": false, 00:27:22.979 "zone_append": false, 00:27:22.979 "compare": false, 00:27:22.979 "compare_and_write": false, 00:27:22.979 "abort": true, 00:27:22.979 "seek_hole": false, 00:27:22.979 "seek_data": false, 00:27:22.979 "copy": true, 00:27:22.979 "nvme_iov_md": false 00:27:22.979 }, 00:27:22.979 "memory_domains": [ 00:27:22.979 { 00:27:22.979 "dma_device_id": "system", 00:27:22.979 "dma_device_type": 1 00:27:22.979 }, 00:27:22.979 { 00:27:22.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:22.979 "dma_device_type": 2 00:27:22.979 } 00:27:22.979 ], 00:27:22.979 "driver_specific": {} 00:27:22.979 }' 00:27:22.979 00:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:23.238 00:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:23.238 00:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:23.238 00:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:23.238 00:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:23.238 00:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:23.238 00:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:23.238 00:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:23.238 00:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:23.238 00:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:23.497 00:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:23.497 00:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:23.497 00:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:23.497 00:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:27:23.497 00:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:23.756 00:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:23.756 "name": "BaseBdev3", 00:27:23.756 "aliases": [ 00:27:23.756 "27dd2821-d932-4030-af9b-14a214e82641" 00:27:23.756 ], 00:27:23.756 "product_name": "Malloc disk", 00:27:23.756 "block_size": 512, 00:27:23.756 "num_blocks": 65536, 00:27:23.756 "uuid": "27dd2821-d932-4030-af9b-14a214e82641", 00:27:23.756 "assigned_rate_limits": { 00:27:23.756 "rw_ios_per_sec": 0, 00:27:23.756 "rw_mbytes_per_sec": 0, 00:27:23.756 "r_mbytes_per_sec": 0, 00:27:23.756 "w_mbytes_per_sec": 0 00:27:23.756 }, 00:27:23.756 "claimed": true, 00:27:23.756 "claim_type": "exclusive_write", 00:27:23.756 "zoned": false, 00:27:23.756 "supported_io_types": { 00:27:23.756 "read": true, 00:27:23.756 "write": true, 00:27:23.756 "unmap": true, 00:27:23.756 "flush": true, 00:27:23.756 "reset": true, 00:27:23.756 "nvme_admin": false, 00:27:23.756 "nvme_io": false, 00:27:23.756 "nvme_io_md": false, 00:27:23.756 "write_zeroes": true, 00:27:23.756 "zcopy": true, 00:27:23.756 "get_zone_info": false, 00:27:23.756 "zone_management": false, 00:27:23.756 "zone_append": false, 00:27:23.756 "compare": 
false, 00:27:23.756 "compare_and_write": false, 00:27:23.756 "abort": true, 00:27:23.756 "seek_hole": false, 00:27:23.756 "seek_data": false, 00:27:23.756 "copy": true, 00:27:23.756 "nvme_iov_md": false 00:27:23.756 }, 00:27:23.756 "memory_domains": [ 00:27:23.756 { 00:27:23.756 "dma_device_id": "system", 00:27:23.756 "dma_device_type": 1 00:27:23.756 }, 00:27:23.756 { 00:27:23.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:23.756 "dma_device_type": 2 00:27:23.756 } 00:27:23.756 ], 00:27:23.756 "driver_specific": {} 00:27:23.756 }' 00:27:23.756 00:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:23.756 00:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:23.756 00:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:23.756 00:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:23.756 00:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:23.756 00:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:23.756 00:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:24.015 00:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:24.015 00:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:24.015 00:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:24.015 00:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:24.015 00:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:24.015 00:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:24.015 00:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:27:24.015 00:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:24.274 00:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:24.274 "name": "BaseBdev4", 00:27:24.274 "aliases": [ 00:27:24.274 "1fb7e281-e1e7-4557-bfc4-267e8e7d92a9" 00:27:24.274 ], 00:27:24.274 "product_name": "Malloc disk", 00:27:24.274 "block_size": 512, 00:27:24.274 "num_blocks": 65536, 00:27:24.274 "uuid": "1fb7e281-e1e7-4557-bfc4-267e8e7d92a9", 00:27:24.274 "assigned_rate_limits": { 00:27:24.274 "rw_ios_per_sec": 0, 00:27:24.274 "rw_mbytes_per_sec": 0, 00:27:24.274 "r_mbytes_per_sec": 0, 00:27:24.274 "w_mbytes_per_sec": 0 00:27:24.274 }, 00:27:24.274 "claimed": true, 00:27:24.274 "claim_type": "exclusive_write", 00:27:24.274 "zoned": false, 00:27:24.274 "supported_io_types": { 00:27:24.274 "read": true, 00:27:24.274 "write": true, 00:27:24.274 "unmap": true, 00:27:24.274 "flush": true, 00:27:24.274 "reset": true, 00:27:24.274 "nvme_admin": false, 00:27:24.274 "nvme_io": false, 00:27:24.274 "nvme_io_md": false, 00:27:24.274 "write_zeroes": true, 00:27:24.274 "zcopy": true, 00:27:24.274 "get_zone_info": false, 00:27:24.274 "zone_management": false, 00:27:24.274 "zone_append": false, 00:27:24.274 "compare": false, 00:27:24.274 "compare_and_write": false, 00:27:24.274 "abort": true, 00:27:24.274 "seek_hole": false, 00:27:24.274 "seek_data": false, 
00:27:24.274 "copy": true, 00:27:24.274 "nvme_iov_md": false 00:27:24.274 }, 00:27:24.274 "memory_domains": [ 00:27:24.274 { 00:27:24.274 "dma_device_id": "system", 00:27:24.274 "dma_device_type": 1 00:27:24.274 }, 00:27:24.274 { 00:27:24.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:24.274 "dma_device_type": 2 00:27:24.274 } 00:27:24.274 ], 00:27:24.274 "driver_specific": {} 00:27:24.274 }' 00:27:24.274 00:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:24.274 00:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:24.536 00:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:24.536 00:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:24.536 00:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:24.536 00:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:24.536 00:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:24.536 00:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:24.536 00:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:24.536 00:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:24.536 00:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:24.795 00:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:24.795 00:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:24.795 [2024-07-25 00:53:47.434890] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:25.054 00:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:27:25.054 00:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:27:25.054 00:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:25.054 00:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:27:25.054 00:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:27:25.054 00:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:27:25.054 00:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:25.054 00:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:25.054 00:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:25.054 00:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:25.054 00:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:25.054 00:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:25.054 00:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:25.054 00:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:27:25.054 00:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:25.054 00:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:25.054 00:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:25.313 00:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:25.313 "name": "Existed_Raid", 00:27:25.313 "uuid": "07a7ab40-7d20-4fdc-917e-167430f4f1a1", 00:27:25.313 "strip_size_kb": 0, 00:27:25.313 "state": "online", 00:27:25.313 "raid_level": "raid1", 00:27:25.313 "superblock": true, 00:27:25.313 "num_base_bdevs": 4, 00:27:25.313 "num_base_bdevs_discovered": 3, 00:27:25.313 "num_base_bdevs_operational": 3, 00:27:25.313 "base_bdevs_list": [ 00:27:25.313 { 00:27:25.313 "name": null, 00:27:25.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:25.313 "is_configured": false, 00:27:25.313 "data_offset": 2048, 00:27:25.313 "data_size": 63488 00:27:25.313 }, 00:27:25.313 { 00:27:25.313 "name": "BaseBdev2", 00:27:25.313 "uuid": "73c5116e-9910-437a-9eed-b436294d14a1", 00:27:25.313 "is_configured": true, 00:27:25.313 "data_offset": 2048, 00:27:25.313 "data_size": 63488 00:27:25.313 }, 00:27:25.313 { 00:27:25.313 "name": "BaseBdev3", 00:27:25.313 "uuid": "27dd2821-d932-4030-af9b-14a214e82641", 00:27:25.313 "is_configured": true, 00:27:25.313 "data_offset": 2048, 00:27:25.313 "data_size": 63488 00:27:25.313 }, 00:27:25.313 { 00:27:25.313 "name": "BaseBdev4", 00:27:25.313 "uuid": "1fb7e281-e1e7-4557-bfc4-267e8e7d92a9", 00:27:25.313 "is_configured": true, 00:27:25.313 "data_offset": 2048, 00:27:25.313 "data_size": 63488 00:27:25.313 } 00:27:25.313 ] 00:27:25.313 }' 00:27:25.313 00:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:25.313 00:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:25.880 00:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:27:25.880 00:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:25.880 00:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:25.880 00:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:27:26.138 00:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:27:26.138 00:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:26.138 00:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:27:26.396 [2024-07-25 00:53:48.816736] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:26.396 00:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:27:26.396 00:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:26.397 00:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:27:26.397 00:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:27:26.655 00:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:27:26.655 00:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:26.655 00:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:27:26.914 [2024-07-25 00:53:49.489183] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:27.173 00:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:27:27.173 00:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:27.173 00:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:27.173 00:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:27:27.432 00:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:27:27.432 00:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:27.432 00:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:27:27.432 [2024-07-25 00:53:50.078733] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:27:27.432 [2024-07-25 00:53:50.078872] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:27.691 [2024-07-25 00:53:50.179987] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:27.691 [2024-07-25 00:53:50.180048] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:27.691 [2024-07-25 00:53:50.180057] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:27:27.691 00:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:27:27.691 00:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:27.691 00:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:27.691 00:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:27:27.950 00:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:27:27.950 00:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:27:27.950 00:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:27:27.950 00:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:27:27.950 00:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:27:27.950 00:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:27:28.209 BaseBdev2 00:27:28.209 
00:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:27:28.209 00:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:27:28.209 00:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:28.209 00:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:27:28.209 00:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:28.209 00:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:28.209 00:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:28.209 00:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:28.467 [ 00:27:28.467 { 00:27:28.467 "name": "BaseBdev2", 00:27:28.467 "aliases": [ 00:27:28.467 "9e281621-1d5c-40f2-ad56-74934f1c303a" 00:27:28.467 ], 00:27:28.467 "product_name": "Malloc disk", 00:27:28.467 "block_size": 512, 00:27:28.467 "num_blocks": 65536, 00:27:28.467 "uuid": "9e281621-1d5c-40f2-ad56-74934f1c303a", 00:27:28.467 "assigned_rate_limits": { 00:27:28.467 "rw_ios_per_sec": 0, 00:27:28.467 "rw_mbytes_per_sec": 0, 00:27:28.467 "r_mbytes_per_sec": 0, 00:27:28.467 "w_mbytes_per_sec": 0 00:27:28.467 }, 00:27:28.467 "claimed": false, 00:27:28.467 "zoned": false, 00:27:28.467 "supported_io_types": { 00:27:28.467 "read": true, 00:27:28.467 "write": true, 00:27:28.467 "unmap": true, 00:27:28.467 "flush": true, 00:27:28.467 "reset": true, 00:27:28.467 "nvme_admin": false, 00:27:28.467 "nvme_io": false, 00:27:28.467 "nvme_io_md": false, 00:27:28.467 "write_zeroes": true, 00:27:28.467 "zcopy": true, 00:27:28.467 "get_zone_info": false, 00:27:28.467 "zone_management": false, 00:27:28.467 "zone_append": false, 00:27:28.467 "compare": false, 00:27:28.467 "compare_and_write": false, 00:27:28.467 "abort": true, 00:27:28.467 "seek_hole": false, 00:27:28.467 "seek_data": false, 00:27:28.467 "copy": true, 00:27:28.467 "nvme_iov_md": false 00:27:28.467 }, 00:27:28.467 "memory_domains": [ 00:27:28.467 { 00:27:28.467 "dma_device_id": "system", 00:27:28.467 "dma_device_type": 1 00:27:28.467 }, 00:27:28.467 { 00:27:28.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:28.467 "dma_device_type": 2 00:27:28.467 } 00:27:28.467 ], 00:27:28.467 "driver_specific": {} 00:27:28.467 } 00:27:28.467 ] 00:27:28.467 00:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:27:28.467 00:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:27:28.467 00:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:27:28.467 00:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:27:28.726 BaseBdev3 00:27:28.726 00:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:27:28.726 00:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:27:28.726 00:53:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:28.726 00:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:27:28.726 00:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:28.726 00:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:28.726 00:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:28.986 00:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:29.245 [ 00:27:29.245 { 00:27:29.245 "name": "BaseBdev3", 00:27:29.245 "aliases": [ 00:27:29.245 "349f5615-780c-458a-a634-e52410c6590a" 00:27:29.245 ], 00:27:29.245 "product_name": "Malloc disk", 00:27:29.245 "block_size": 512, 00:27:29.245 "num_blocks": 65536, 00:27:29.245 "uuid": "349f5615-780c-458a-a634-e52410c6590a", 00:27:29.245 "assigned_rate_limits": { 00:27:29.245 "rw_ios_per_sec": 0, 00:27:29.245 "rw_mbytes_per_sec": 0, 00:27:29.245 "r_mbytes_per_sec": 0, 00:27:29.245 "w_mbytes_per_sec": 0 00:27:29.245 }, 00:27:29.245 "claimed": false, 00:27:29.245 "zoned": false, 00:27:29.245 "supported_io_types": { 00:27:29.245 "read": true, 00:27:29.245 "write": true, 00:27:29.245 "unmap": true, 00:27:29.245 "flush": true, 00:27:29.245 "reset": true, 00:27:29.245 "nvme_admin": false, 00:27:29.245 "nvme_io": false, 00:27:29.245 "nvme_io_md": false, 00:27:29.245 "write_zeroes": true, 00:27:29.245 "zcopy": true, 00:27:29.245 "get_zone_info": false, 00:27:29.245 "zone_management": false, 00:27:29.245 "zone_append": false, 00:27:29.245 "compare": false, 00:27:29.245 "compare_and_write": false, 00:27:29.245 "abort": true, 00:27:29.245 "seek_hole": false, 00:27:29.245 "seek_data": false, 00:27:29.245 "copy": true, 00:27:29.245 "nvme_iov_md": false 00:27:29.245 }, 00:27:29.245 "memory_domains": [ 00:27:29.245 { 00:27:29.245 "dma_device_id": "system", 00:27:29.245 "dma_device_type": 1 00:27:29.245 }, 00:27:29.245 { 00:27:29.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:29.245 "dma_device_type": 2 00:27:29.245 } 00:27:29.245 ], 00:27:29.245 "driver_specific": {} 00:27:29.245 } 00:27:29.245 ] 00:27:29.245 00:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:27:29.245 00:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:27:29.245 00:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:27:29.245 00:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:27:29.245 BaseBdev4 00:27:29.245 00:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:27:29.245 00:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:27:29.245 00:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:29.245 00:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:27:29.245 00:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:29.245 00:53:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:29.245 00:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:29.504 00:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:29.763 [ 00:27:29.763 { 00:27:29.763 "name": "BaseBdev4", 00:27:29.763 "aliases": [ 00:27:29.763 "31d27a2f-de92-4ab5-9f20-be5a58e4ce96" 00:27:29.763 ], 00:27:29.763 "product_name": "Malloc disk", 00:27:29.763 "block_size": 512, 00:27:29.763 "num_blocks": 65536, 00:27:29.763 "uuid": "31d27a2f-de92-4ab5-9f20-be5a58e4ce96", 00:27:29.763 "assigned_rate_limits": { 00:27:29.763 "rw_ios_per_sec": 0, 00:27:29.763 "rw_mbytes_per_sec": 0, 00:27:29.763 "r_mbytes_per_sec": 0, 00:27:29.763 "w_mbytes_per_sec": 0 00:27:29.763 }, 00:27:29.764 "claimed": false, 00:27:29.764 "zoned": false, 00:27:29.764 "supported_io_types": { 00:27:29.764 "read": true, 00:27:29.764 "write": true, 00:27:29.764 "unmap": true, 00:27:29.764 "flush": true, 00:27:29.764 "reset": true, 00:27:29.764 "nvme_admin": false, 00:27:29.764 "nvme_io": false, 00:27:29.764 "nvme_io_md": false, 00:27:29.764 "write_zeroes": true, 00:27:29.764 "zcopy": true, 00:27:29.764 "get_zone_info": false, 00:27:29.764 "zone_management": false, 00:27:29.764 "zone_append": false, 00:27:29.764 "compare": false, 00:27:29.764 "compare_and_write": false, 00:27:29.764 "abort": true, 00:27:29.764 "seek_hole": false, 00:27:29.764 "seek_data": false, 00:27:29.764 "copy": true, 00:27:29.764 "nvme_iov_md": false 00:27:29.764 }, 00:27:29.764 "memory_domains": [ 00:27:29.764 { 00:27:29.764 "dma_device_id": "system", 00:27:29.764 "dma_device_type": 1 00:27:29.764 }, 00:27:29.764 { 00:27:29.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:29.764 "dma_device_type": 2 00:27:29.764 } 00:27:29.764 ], 00:27:29.764 "driver_specific": {} 00:27:29.764 } 00:27:29.764 ] 00:27:29.764 00:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:27:29.764 00:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:27:29.764 00:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:27:29.764 00:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:30.023 [2024-07-25 00:53:52.512680] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:30.023 [2024-07-25 00:53:52.512759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:30.023 [2024-07-25 00:53:52.512785] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:30.023 [2024-07-25 00:53:52.514714] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:30.023 [2024-07-25 00:53:52.514765] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:30.023 00:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:30.023 00:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # 
local raid_bdev_name=Existed_Raid 00:27:30.023 00:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:30.023 00:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:30.023 00:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:30.023 00:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:30.023 00:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:30.023 00:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:30.023 00:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:30.023 00:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:30.023 00:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:30.023 00:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:30.282 00:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:30.282 "name": "Existed_Raid", 00:27:30.282 "uuid": "568d291c-a282-4504-9144-64a8639e298c", 00:27:30.282 "strip_size_kb": 0, 00:27:30.282 "state": "configuring", 00:27:30.282 "raid_level": "raid1", 00:27:30.282 "superblock": true, 00:27:30.282 "num_base_bdevs": 4, 00:27:30.282 "num_base_bdevs_discovered": 3, 00:27:30.282 "num_base_bdevs_operational": 4, 00:27:30.282 "base_bdevs_list": [ 00:27:30.282 { 00:27:30.282 "name": "BaseBdev1", 00:27:30.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:30.282 "is_configured": false, 00:27:30.282 "data_offset": 0, 00:27:30.282 "data_size": 0 00:27:30.282 }, 00:27:30.282 { 00:27:30.282 "name": "BaseBdev2", 00:27:30.282 "uuid": "9e281621-1d5c-40f2-ad56-74934f1c303a", 00:27:30.282 "is_configured": true, 00:27:30.282 "data_offset": 2048, 00:27:30.282 "data_size": 63488 00:27:30.282 }, 00:27:30.282 { 00:27:30.282 "name": "BaseBdev3", 00:27:30.282 "uuid": "349f5615-780c-458a-a634-e52410c6590a", 00:27:30.282 "is_configured": true, 00:27:30.282 "data_offset": 2048, 00:27:30.282 "data_size": 63488 00:27:30.282 }, 00:27:30.282 { 00:27:30.282 "name": "BaseBdev4", 00:27:30.282 "uuid": "31d27a2f-de92-4ab5-9f20-be5a58e4ce96", 00:27:30.282 "is_configured": true, 00:27:30.282 "data_offset": 2048, 00:27:30.282 "data_size": 63488 00:27:30.282 } 00:27:30.282 ] 00:27:30.282 }' 00:27:30.282 00:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:30.282 00:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:30.852 00:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:27:31.111 [2024-07-25 00:53:53.584831] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:31.111 00:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:31.111 00:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:31.111 00:53:53 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:31.111 00:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:31.111 00:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:31.111 00:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:31.111 00:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:31.111 00:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:31.111 00:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:31.111 00:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:31.111 00:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:31.111 00:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:31.370 00:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:31.370 "name": "Existed_Raid", 00:27:31.370 "uuid": "568d291c-a282-4504-9144-64a8639e298c", 00:27:31.370 "strip_size_kb": 0, 00:27:31.370 "state": "configuring", 00:27:31.370 "raid_level": "raid1", 00:27:31.370 "superblock": true, 00:27:31.370 "num_base_bdevs": 4, 00:27:31.370 "num_base_bdevs_discovered": 2, 00:27:31.370 "num_base_bdevs_operational": 4, 00:27:31.370 "base_bdevs_list": [ 00:27:31.370 { 00:27:31.370 "name": "BaseBdev1", 00:27:31.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:31.370 "is_configured": false, 00:27:31.370 "data_offset": 0, 00:27:31.370 "data_size": 0 00:27:31.370 }, 00:27:31.370 { 00:27:31.370 "name": null, 00:27:31.370 "uuid": "9e281621-1d5c-40f2-ad56-74934f1c303a", 00:27:31.370 "is_configured": false, 00:27:31.370 "data_offset": 2048, 00:27:31.370 "data_size": 63488 00:27:31.370 }, 00:27:31.370 { 00:27:31.370 "name": "BaseBdev3", 00:27:31.370 "uuid": "349f5615-780c-458a-a634-e52410c6590a", 00:27:31.370 "is_configured": true, 00:27:31.370 "data_offset": 2048, 00:27:31.370 "data_size": 63488 00:27:31.370 }, 00:27:31.370 { 00:27:31.370 "name": "BaseBdev4", 00:27:31.370 "uuid": "31d27a2f-de92-4ab5-9f20-be5a58e4ce96", 00:27:31.370 "is_configured": true, 00:27:31.370 "data_offset": 2048, 00:27:31.370 "data_size": 63488 00:27:31.370 } 00:27:31.370 ] 00:27:31.370 }' 00:27:31.370 00:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:31.370 00:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:31.939 00:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:31.939 00:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:32.198 00:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:27:32.198 00:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:27:32.457 [2024-07-25 00:53:54.882241] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev1 is claimed 00:27:32.457 BaseBdev1 00:27:32.457 00:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:27:32.457 00:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:27:32.457 00:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:32.457 00:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:27:32.457 00:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:32.457 00:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:32.457 00:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:32.457 00:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:32.716 [ 00:27:32.716 { 00:27:32.716 "name": "BaseBdev1", 00:27:32.716 "aliases": [ 00:27:32.716 "fbb654d3-285e-482a-8f97-f2fde56f2fc0" 00:27:32.716 ], 00:27:32.716 "product_name": "Malloc disk", 00:27:32.716 "block_size": 512, 00:27:32.716 "num_blocks": 65536, 00:27:32.716 "uuid": "fbb654d3-285e-482a-8f97-f2fde56f2fc0", 00:27:32.716 "assigned_rate_limits": { 00:27:32.716 "rw_ios_per_sec": 0, 00:27:32.716 "rw_mbytes_per_sec": 0, 00:27:32.716 "r_mbytes_per_sec": 0, 00:27:32.716 "w_mbytes_per_sec": 0 00:27:32.716 }, 00:27:32.716 "claimed": true, 00:27:32.716 "claim_type": "exclusive_write", 00:27:32.716 "zoned": false, 00:27:32.716 "supported_io_types": { 00:27:32.716 "read": true, 00:27:32.716 "write": true, 00:27:32.716 "unmap": true, 00:27:32.716 "flush": true, 00:27:32.716 "reset": true, 00:27:32.716 "nvme_admin": false, 00:27:32.716 "nvme_io": false, 00:27:32.716 "nvme_io_md": false, 00:27:32.716 "write_zeroes": true, 00:27:32.716 "zcopy": true, 00:27:32.716 "get_zone_info": false, 00:27:32.716 "zone_management": false, 00:27:32.716 "zone_append": false, 00:27:32.716 "compare": false, 00:27:32.716 "compare_and_write": false, 00:27:32.716 "abort": true, 00:27:32.716 "seek_hole": false, 00:27:32.716 "seek_data": false, 00:27:32.716 "copy": true, 00:27:32.716 "nvme_iov_md": false 00:27:32.716 }, 00:27:32.716 "memory_domains": [ 00:27:32.716 { 00:27:32.717 "dma_device_id": "system", 00:27:32.717 "dma_device_type": 1 00:27:32.717 }, 00:27:32.717 { 00:27:32.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:32.717 "dma_device_type": 2 00:27:32.717 } 00:27:32.717 ], 00:27:32.717 "driver_specific": {} 00:27:32.717 } 00:27:32.717 ] 00:27:32.717 00:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:27:32.717 00:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:32.717 00:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:32.717 00:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:32.717 00:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:32.717 00:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:32.717 00:53:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:32.717 00:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:32.717 00:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:32.717 00:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:32.717 00:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:32.717 00:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:32.717 00:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:32.975 00:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:32.976 "name": "Existed_Raid", 00:27:32.976 "uuid": "568d291c-a282-4504-9144-64a8639e298c", 00:27:32.976 "strip_size_kb": 0, 00:27:32.976 "state": "configuring", 00:27:32.976 "raid_level": "raid1", 00:27:32.976 "superblock": true, 00:27:32.976 "num_base_bdevs": 4, 00:27:32.976 "num_base_bdevs_discovered": 3, 00:27:32.976 "num_base_bdevs_operational": 4, 00:27:32.976 "base_bdevs_list": [ 00:27:32.976 { 00:27:32.976 "name": "BaseBdev1", 00:27:32.976 "uuid": "fbb654d3-285e-482a-8f97-f2fde56f2fc0", 00:27:32.976 "is_configured": true, 00:27:32.976 "data_offset": 2048, 00:27:32.976 "data_size": 63488 00:27:32.976 }, 00:27:32.976 { 00:27:32.976 "name": null, 00:27:32.976 "uuid": "9e281621-1d5c-40f2-ad56-74934f1c303a", 00:27:32.976 "is_configured": false, 00:27:32.976 "data_offset": 2048, 00:27:32.976 "data_size": 63488 00:27:32.976 }, 00:27:32.976 { 00:27:32.976 "name": "BaseBdev3", 00:27:32.976 "uuid": "349f5615-780c-458a-a634-e52410c6590a", 00:27:32.976 "is_configured": true, 00:27:32.976 "data_offset": 2048, 00:27:32.976 "data_size": 63488 00:27:32.976 }, 00:27:32.976 { 00:27:32.976 "name": "BaseBdev4", 00:27:32.976 "uuid": "31d27a2f-de92-4ab5-9f20-be5a58e4ce96", 00:27:32.976 "is_configured": true, 00:27:32.976 "data_offset": 2048, 00:27:32.976 "data_size": 63488 00:27:32.976 } 00:27:32.976 ] 00:27:32.976 }' 00:27:32.976 00:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:32.976 00:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:33.544 00:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:33.544 00:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:33.803 00:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:27:33.803 00:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:27:34.062 [2024-07-25 00:53:56.506779] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:34.062 00:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:34.062 00:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:34.062 00:53:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:34.062 00:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:34.062 00:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:34.062 00:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:34.062 00:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:34.062 00:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:34.062 00:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:34.062 00:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:34.062 00:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:34.062 00:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:34.321 00:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:34.321 "name": "Existed_Raid", 00:27:34.321 "uuid": "568d291c-a282-4504-9144-64a8639e298c", 00:27:34.321 "strip_size_kb": 0, 00:27:34.321 "state": "configuring", 00:27:34.321 "raid_level": "raid1", 00:27:34.321 "superblock": true, 00:27:34.321 "num_base_bdevs": 4, 00:27:34.321 "num_base_bdevs_discovered": 2, 00:27:34.321 "num_base_bdevs_operational": 4, 00:27:34.321 "base_bdevs_list": [ 00:27:34.321 { 00:27:34.321 "name": "BaseBdev1", 00:27:34.321 "uuid": "fbb654d3-285e-482a-8f97-f2fde56f2fc0", 00:27:34.321 "is_configured": true, 00:27:34.321 "data_offset": 2048, 00:27:34.321 "data_size": 63488 00:27:34.321 }, 00:27:34.321 { 00:27:34.321 "name": null, 00:27:34.321 "uuid": "9e281621-1d5c-40f2-ad56-74934f1c303a", 00:27:34.321 "is_configured": false, 00:27:34.321 "data_offset": 2048, 00:27:34.321 "data_size": 63488 00:27:34.321 }, 00:27:34.321 { 00:27:34.321 "name": null, 00:27:34.321 "uuid": "349f5615-780c-458a-a634-e52410c6590a", 00:27:34.321 "is_configured": false, 00:27:34.321 "data_offset": 2048, 00:27:34.321 "data_size": 63488 00:27:34.321 }, 00:27:34.321 { 00:27:34.321 "name": "BaseBdev4", 00:27:34.321 "uuid": "31d27a2f-de92-4ab5-9f20-be5a58e4ce96", 00:27:34.321 "is_configured": true, 00:27:34.321 "data_offset": 2048, 00:27:34.321 "data_size": 63488 00:27:34.321 } 00:27:34.321 ] 00:27:34.321 }' 00:27:34.321 00:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:34.321 00:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:34.915 00:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:34.915 00:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:35.175 00:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:27:35.175 00:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:27:35.175 [2024-07-25 00:53:57.739026] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:35.175 00:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:35.175 00:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:35.175 00:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:35.175 00:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:35.175 00:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:35.175 00:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:35.175 00:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:35.175 00:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:35.175 00:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:35.175 00:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:35.175 00:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:35.175 00:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:35.434 00:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:35.434 "name": "Existed_Raid", 00:27:35.434 "uuid": "568d291c-a282-4504-9144-64a8639e298c", 00:27:35.434 "strip_size_kb": 0, 00:27:35.434 "state": "configuring", 00:27:35.434 "raid_level": "raid1", 00:27:35.434 "superblock": true, 00:27:35.434 "num_base_bdevs": 4, 00:27:35.434 "num_base_bdevs_discovered": 3, 00:27:35.434 "num_base_bdevs_operational": 4, 00:27:35.434 "base_bdevs_list": [ 00:27:35.434 { 00:27:35.434 "name": "BaseBdev1", 00:27:35.434 "uuid": "fbb654d3-285e-482a-8f97-f2fde56f2fc0", 00:27:35.434 "is_configured": true, 00:27:35.434 "data_offset": 2048, 00:27:35.434 "data_size": 63488 00:27:35.434 }, 00:27:35.434 { 00:27:35.434 "name": null, 00:27:35.434 "uuid": "9e281621-1d5c-40f2-ad56-74934f1c303a", 00:27:35.434 "is_configured": false, 00:27:35.434 "data_offset": 2048, 00:27:35.434 "data_size": 63488 00:27:35.434 }, 00:27:35.434 { 00:27:35.434 "name": "BaseBdev3", 00:27:35.434 "uuid": "349f5615-780c-458a-a634-e52410c6590a", 00:27:35.434 "is_configured": true, 00:27:35.434 "data_offset": 2048, 00:27:35.434 "data_size": 63488 00:27:35.434 }, 00:27:35.434 { 00:27:35.434 "name": "BaseBdev4", 00:27:35.434 "uuid": "31d27a2f-de92-4ab5-9f20-be5a58e4ce96", 00:27:35.434 "is_configured": true, 00:27:35.434 "data_offset": 2048, 00:27:35.434 "data_size": 63488 00:27:35.434 } 00:27:35.434 ] 00:27:35.434 }' 00:27:35.434 00:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:35.434 00:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:36.001 00:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:36.001 00:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:27:36.260 00:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:27:36.260 00:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:36.518 [2024-07-25 00:53:58.919264] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:36.518 00:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:36.518 00:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:36.518 00:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:36.518 00:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:36.518 00:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:36.518 00:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:36.518 00:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:36.518 00:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:36.518 00:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:36.518 00:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:36.518 00:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:36.519 00:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:36.778 00:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:36.778 "name": "Existed_Raid", 00:27:36.778 "uuid": "568d291c-a282-4504-9144-64a8639e298c", 00:27:36.778 "strip_size_kb": 0, 00:27:36.778 "state": "configuring", 00:27:36.778 "raid_level": "raid1", 00:27:36.778 "superblock": true, 00:27:36.778 "num_base_bdevs": 4, 00:27:36.778 "num_base_bdevs_discovered": 2, 00:27:36.778 "num_base_bdevs_operational": 4, 00:27:36.778 "base_bdevs_list": [ 00:27:36.778 { 00:27:36.778 "name": null, 00:27:36.778 "uuid": "fbb654d3-285e-482a-8f97-f2fde56f2fc0", 00:27:36.778 "is_configured": false, 00:27:36.778 "data_offset": 2048, 00:27:36.778 "data_size": 63488 00:27:36.778 }, 00:27:36.778 { 00:27:36.778 "name": null, 00:27:36.778 "uuid": "9e281621-1d5c-40f2-ad56-74934f1c303a", 00:27:36.778 "is_configured": false, 00:27:36.778 "data_offset": 2048, 00:27:36.778 "data_size": 63488 00:27:36.778 }, 00:27:36.778 { 00:27:36.778 "name": "BaseBdev3", 00:27:36.778 "uuid": "349f5615-780c-458a-a634-e52410c6590a", 00:27:36.778 "is_configured": true, 00:27:36.778 "data_offset": 2048, 00:27:36.778 "data_size": 63488 00:27:36.778 }, 00:27:36.778 { 00:27:36.778 "name": "BaseBdev4", 00:27:36.778 "uuid": "31d27a2f-de92-4ab5-9f20-be5a58e4ce96", 00:27:36.778 "is_configured": true, 00:27:36.778 "data_offset": 2048, 00:27:36.778 "data_size": 63488 00:27:36.778 } 00:27:36.778 ] 00:27:36.778 }' 00:27:36.778 00:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:36.778 00:53:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:37.345 00:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:37.345 00:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:37.604 00:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:27:37.604 00:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:27:37.862 [2024-07-25 00:54:00.299646] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:37.862 00:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:37.862 00:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:37.862 00:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:37.862 00:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:37.862 00:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:37.862 00:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:37.862 00:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:37.862 00:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:37.862 00:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:37.862 00:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:37.862 00:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:37.862 00:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:38.121 00:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:38.121 "name": "Existed_Raid", 00:27:38.121 "uuid": "568d291c-a282-4504-9144-64a8639e298c", 00:27:38.121 "strip_size_kb": 0, 00:27:38.121 "state": "configuring", 00:27:38.121 "raid_level": "raid1", 00:27:38.121 "superblock": true, 00:27:38.121 "num_base_bdevs": 4, 00:27:38.121 "num_base_bdevs_discovered": 3, 00:27:38.121 "num_base_bdevs_operational": 4, 00:27:38.121 "base_bdevs_list": [ 00:27:38.121 { 00:27:38.121 "name": null, 00:27:38.121 "uuid": "fbb654d3-285e-482a-8f97-f2fde56f2fc0", 00:27:38.121 "is_configured": false, 00:27:38.121 "data_offset": 2048, 00:27:38.121 "data_size": 63488 00:27:38.121 }, 00:27:38.121 { 00:27:38.121 "name": "BaseBdev2", 00:27:38.121 "uuid": "9e281621-1d5c-40f2-ad56-74934f1c303a", 00:27:38.121 "is_configured": true, 00:27:38.121 "data_offset": 2048, 00:27:38.121 "data_size": 63488 00:27:38.121 }, 00:27:38.121 { 00:27:38.121 "name": "BaseBdev3", 00:27:38.121 "uuid": "349f5615-780c-458a-a634-e52410c6590a", 00:27:38.121 "is_configured": true, 00:27:38.121 "data_offset": 2048, 00:27:38.121 "data_size": 63488 00:27:38.121 }, 00:27:38.121 { 
00:27:38.121 "name": "BaseBdev4", 00:27:38.121 "uuid": "31d27a2f-de92-4ab5-9f20-be5a58e4ce96", 00:27:38.121 "is_configured": true, 00:27:38.121 "data_offset": 2048, 00:27:38.121 "data_size": 63488 00:27:38.121 } 00:27:38.121 ] 00:27:38.121 }' 00:27:38.121 00:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:38.121 00:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:38.689 00:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:38.689 00:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:38.948 00:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:27:38.948 00:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:38.948 00:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:27:38.948 00:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u fbb654d3-285e-482a-8f97-f2fde56f2fc0 00:27:39.206 [2024-07-25 00:54:01.811412] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:27:39.206 [2024-07-25 00:54:01.811612] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:27:39.206 [2024-07-25 00:54:01.811624] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:39.206 [2024-07-25 00:54:01.811722] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:39.206 [2024-07-25 00:54:01.812002] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:27:39.206 [2024-07-25 00:54:01.812014] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009380 00:27:39.206 [2024-07-25 00:54:01.812140] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:39.206 NewBaseBdev 00:27:39.206 00:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:27:39.206 00:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:27:39.206 00:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:39.206 00:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:27:39.206 00:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:39.206 00:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:39.206 00:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:39.464 00:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:27:39.722 [ 00:27:39.722 { 00:27:39.722 "name": "NewBaseBdev", 00:27:39.722 "aliases": [ 
00:27:39.722 "fbb654d3-285e-482a-8f97-f2fde56f2fc0" 00:27:39.722 ], 00:27:39.722 "product_name": "Malloc disk", 00:27:39.722 "block_size": 512, 00:27:39.722 "num_blocks": 65536, 00:27:39.722 "uuid": "fbb654d3-285e-482a-8f97-f2fde56f2fc0", 00:27:39.722 "assigned_rate_limits": { 00:27:39.722 "rw_ios_per_sec": 0, 00:27:39.722 "rw_mbytes_per_sec": 0, 00:27:39.722 "r_mbytes_per_sec": 0, 00:27:39.722 "w_mbytes_per_sec": 0 00:27:39.722 }, 00:27:39.722 "claimed": true, 00:27:39.722 "claim_type": "exclusive_write", 00:27:39.722 "zoned": false, 00:27:39.722 "supported_io_types": { 00:27:39.722 "read": true, 00:27:39.722 "write": true, 00:27:39.722 "unmap": true, 00:27:39.722 "flush": true, 00:27:39.722 "reset": true, 00:27:39.722 "nvme_admin": false, 00:27:39.722 "nvme_io": false, 00:27:39.722 "nvme_io_md": false, 00:27:39.722 "write_zeroes": true, 00:27:39.722 "zcopy": true, 00:27:39.722 "get_zone_info": false, 00:27:39.722 "zone_management": false, 00:27:39.722 "zone_append": false, 00:27:39.722 "compare": false, 00:27:39.722 "compare_and_write": false, 00:27:39.722 "abort": true, 00:27:39.722 "seek_hole": false, 00:27:39.722 "seek_data": false, 00:27:39.722 "copy": true, 00:27:39.722 "nvme_iov_md": false 00:27:39.722 }, 00:27:39.722 "memory_domains": [ 00:27:39.722 { 00:27:39.722 "dma_device_id": "system", 00:27:39.722 "dma_device_type": 1 00:27:39.722 }, 00:27:39.722 { 00:27:39.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:39.722 "dma_device_type": 2 00:27:39.722 } 00:27:39.722 ], 00:27:39.722 "driver_specific": {} 00:27:39.722 } 00:27:39.722 ] 00:27:39.722 00:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:27:39.722 00:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:27:39.722 00:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:39.722 00:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:39.722 00:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:39.722 00:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:39.722 00:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:39.722 00:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:39.722 00:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:39.722 00:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:39.722 00:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:39.722 00:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:39.722 00:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:39.981 00:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:39.981 "name": "Existed_Raid", 00:27:39.981 "uuid": "568d291c-a282-4504-9144-64a8639e298c", 00:27:39.981 "strip_size_kb": 0, 00:27:39.981 "state": "online", 00:27:39.981 "raid_level": "raid1", 00:27:39.981 "superblock": true, 00:27:39.981 "num_base_bdevs": 4, 
00:27:39.981 "num_base_bdevs_discovered": 4, 00:27:39.981 "num_base_bdevs_operational": 4, 00:27:39.981 "base_bdevs_list": [ 00:27:39.981 { 00:27:39.981 "name": "NewBaseBdev", 00:27:39.981 "uuid": "fbb654d3-285e-482a-8f97-f2fde56f2fc0", 00:27:39.981 "is_configured": true, 00:27:39.981 "data_offset": 2048, 00:27:39.981 "data_size": 63488 00:27:39.981 }, 00:27:39.981 { 00:27:39.981 "name": "BaseBdev2", 00:27:39.981 "uuid": "9e281621-1d5c-40f2-ad56-74934f1c303a", 00:27:39.981 "is_configured": true, 00:27:39.981 "data_offset": 2048, 00:27:39.981 "data_size": 63488 00:27:39.981 }, 00:27:39.981 { 00:27:39.981 "name": "BaseBdev3", 00:27:39.981 "uuid": "349f5615-780c-458a-a634-e52410c6590a", 00:27:39.981 "is_configured": true, 00:27:39.981 "data_offset": 2048, 00:27:39.981 "data_size": 63488 00:27:39.981 }, 00:27:39.981 { 00:27:39.981 "name": "BaseBdev4", 00:27:39.981 "uuid": "31d27a2f-de92-4ab5-9f20-be5a58e4ce96", 00:27:39.981 "is_configured": true, 00:27:39.981 "data_offset": 2048, 00:27:39.981 "data_size": 63488 00:27:39.981 } 00:27:39.981 ] 00:27:39.981 }' 00:27:39.981 00:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:39.981 00:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:40.549 00:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:27:40.549 00:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:27:40.549 00:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:40.549 00:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:40.549 00:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:40.549 00:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:27:40.549 00:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:40.549 00:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:27:40.807 [2024-07-25 00:54:03.232003] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:40.807 00:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:40.807 "name": "Existed_Raid", 00:27:40.807 "aliases": [ 00:27:40.807 "568d291c-a282-4504-9144-64a8639e298c" 00:27:40.807 ], 00:27:40.807 "product_name": "Raid Volume", 00:27:40.807 "block_size": 512, 00:27:40.807 "num_blocks": 63488, 00:27:40.807 "uuid": "568d291c-a282-4504-9144-64a8639e298c", 00:27:40.807 "assigned_rate_limits": { 00:27:40.807 "rw_ios_per_sec": 0, 00:27:40.807 "rw_mbytes_per_sec": 0, 00:27:40.807 "r_mbytes_per_sec": 0, 00:27:40.808 "w_mbytes_per_sec": 0 00:27:40.808 }, 00:27:40.808 "claimed": false, 00:27:40.808 "zoned": false, 00:27:40.808 "supported_io_types": { 00:27:40.808 "read": true, 00:27:40.808 "write": true, 00:27:40.808 "unmap": false, 00:27:40.808 "flush": false, 00:27:40.808 "reset": true, 00:27:40.808 "nvme_admin": false, 00:27:40.808 "nvme_io": false, 00:27:40.808 "nvme_io_md": false, 00:27:40.808 "write_zeroes": true, 00:27:40.808 "zcopy": false, 00:27:40.808 "get_zone_info": false, 00:27:40.808 "zone_management": false, 00:27:40.808 "zone_append": false, 00:27:40.808 "compare": false, 00:27:40.808 
"compare_and_write": false, 00:27:40.808 "abort": false, 00:27:40.808 "seek_hole": false, 00:27:40.808 "seek_data": false, 00:27:40.808 "copy": false, 00:27:40.808 "nvme_iov_md": false 00:27:40.808 }, 00:27:40.808 "memory_domains": [ 00:27:40.808 { 00:27:40.808 "dma_device_id": "system", 00:27:40.808 "dma_device_type": 1 00:27:40.808 }, 00:27:40.808 { 00:27:40.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:40.808 "dma_device_type": 2 00:27:40.808 }, 00:27:40.808 { 00:27:40.808 "dma_device_id": "system", 00:27:40.808 "dma_device_type": 1 00:27:40.808 }, 00:27:40.808 { 00:27:40.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:40.808 "dma_device_type": 2 00:27:40.808 }, 00:27:40.808 { 00:27:40.808 "dma_device_id": "system", 00:27:40.808 "dma_device_type": 1 00:27:40.808 }, 00:27:40.808 { 00:27:40.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:40.808 "dma_device_type": 2 00:27:40.808 }, 00:27:40.808 { 00:27:40.808 "dma_device_id": "system", 00:27:40.808 "dma_device_type": 1 00:27:40.808 }, 00:27:40.808 { 00:27:40.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:40.808 "dma_device_type": 2 00:27:40.808 } 00:27:40.808 ], 00:27:40.808 "driver_specific": { 00:27:40.808 "raid": { 00:27:40.808 "uuid": "568d291c-a282-4504-9144-64a8639e298c", 00:27:40.808 "strip_size_kb": 0, 00:27:40.808 "state": "online", 00:27:40.808 "raid_level": "raid1", 00:27:40.808 "superblock": true, 00:27:40.808 "num_base_bdevs": 4, 00:27:40.808 "num_base_bdevs_discovered": 4, 00:27:40.808 "num_base_bdevs_operational": 4, 00:27:40.808 "base_bdevs_list": [ 00:27:40.808 { 00:27:40.808 "name": "NewBaseBdev", 00:27:40.808 "uuid": "fbb654d3-285e-482a-8f97-f2fde56f2fc0", 00:27:40.808 "is_configured": true, 00:27:40.808 "data_offset": 2048, 00:27:40.808 "data_size": 63488 00:27:40.808 }, 00:27:40.808 { 00:27:40.808 "name": "BaseBdev2", 00:27:40.808 "uuid": "9e281621-1d5c-40f2-ad56-74934f1c303a", 00:27:40.808 "is_configured": true, 00:27:40.808 "data_offset": 2048, 00:27:40.808 "data_size": 63488 00:27:40.808 }, 00:27:40.808 { 00:27:40.808 "name": "BaseBdev3", 00:27:40.808 "uuid": "349f5615-780c-458a-a634-e52410c6590a", 00:27:40.808 "is_configured": true, 00:27:40.808 "data_offset": 2048, 00:27:40.808 "data_size": 63488 00:27:40.808 }, 00:27:40.808 { 00:27:40.808 "name": "BaseBdev4", 00:27:40.808 "uuid": "31d27a2f-de92-4ab5-9f20-be5a58e4ce96", 00:27:40.808 "is_configured": true, 00:27:40.808 "data_offset": 2048, 00:27:40.808 "data_size": 63488 00:27:40.808 } 00:27:40.808 ] 00:27:40.808 } 00:27:40.808 } 00:27:40.808 }' 00:27:40.808 00:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:40.808 00:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:27:40.808 BaseBdev2 00:27:40.808 BaseBdev3 00:27:40.808 BaseBdev4' 00:27:40.808 00:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:40.808 00:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:40.808 00:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:27:41.066 00:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:41.066 "name": "NewBaseBdev", 00:27:41.066 "aliases": [ 00:27:41.066 "fbb654d3-285e-482a-8f97-f2fde56f2fc0" 00:27:41.066 ], 
00:27:41.066 "product_name": "Malloc disk", 00:27:41.066 "block_size": 512, 00:27:41.066 "num_blocks": 65536, 00:27:41.066 "uuid": "fbb654d3-285e-482a-8f97-f2fde56f2fc0", 00:27:41.066 "assigned_rate_limits": { 00:27:41.066 "rw_ios_per_sec": 0, 00:27:41.066 "rw_mbytes_per_sec": 0, 00:27:41.066 "r_mbytes_per_sec": 0, 00:27:41.066 "w_mbytes_per_sec": 0 00:27:41.066 }, 00:27:41.066 "claimed": true, 00:27:41.066 "claim_type": "exclusive_write", 00:27:41.066 "zoned": false, 00:27:41.066 "supported_io_types": { 00:27:41.066 "read": true, 00:27:41.066 "write": true, 00:27:41.066 "unmap": true, 00:27:41.066 "flush": true, 00:27:41.066 "reset": true, 00:27:41.066 "nvme_admin": false, 00:27:41.066 "nvme_io": false, 00:27:41.066 "nvme_io_md": false, 00:27:41.066 "write_zeroes": true, 00:27:41.066 "zcopy": true, 00:27:41.066 "get_zone_info": false, 00:27:41.066 "zone_management": false, 00:27:41.066 "zone_append": false, 00:27:41.066 "compare": false, 00:27:41.066 "compare_and_write": false, 00:27:41.066 "abort": true, 00:27:41.066 "seek_hole": false, 00:27:41.066 "seek_data": false, 00:27:41.066 "copy": true, 00:27:41.066 "nvme_iov_md": false 00:27:41.066 }, 00:27:41.066 "memory_domains": [ 00:27:41.066 { 00:27:41.066 "dma_device_id": "system", 00:27:41.066 "dma_device_type": 1 00:27:41.066 }, 00:27:41.066 { 00:27:41.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:41.066 "dma_device_type": 2 00:27:41.066 } 00:27:41.066 ], 00:27:41.066 "driver_specific": {} 00:27:41.066 }' 00:27:41.066 00:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:41.066 00:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:41.066 00:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:41.066 00:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:41.066 00:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:41.066 00:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:41.066 00:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:41.066 00:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:41.323 00:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:41.323 00:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:41.323 00:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:41.323 00:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:41.323 00:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:41.323 00:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:27:41.323 00:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:41.582 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:41.582 "name": "BaseBdev2", 00:27:41.582 "aliases": [ 00:27:41.582 "9e281621-1d5c-40f2-ad56-74934f1c303a" 00:27:41.582 ], 00:27:41.582 "product_name": "Malloc disk", 00:27:41.582 "block_size": 512, 00:27:41.582 "num_blocks": 65536, 00:27:41.582 "uuid": 
"9e281621-1d5c-40f2-ad56-74934f1c303a", 00:27:41.582 "assigned_rate_limits": { 00:27:41.582 "rw_ios_per_sec": 0, 00:27:41.582 "rw_mbytes_per_sec": 0, 00:27:41.582 "r_mbytes_per_sec": 0, 00:27:41.582 "w_mbytes_per_sec": 0 00:27:41.582 }, 00:27:41.582 "claimed": true, 00:27:41.582 "claim_type": "exclusive_write", 00:27:41.582 "zoned": false, 00:27:41.582 "supported_io_types": { 00:27:41.582 "read": true, 00:27:41.582 "write": true, 00:27:41.582 "unmap": true, 00:27:41.582 "flush": true, 00:27:41.582 "reset": true, 00:27:41.582 "nvme_admin": false, 00:27:41.582 "nvme_io": false, 00:27:41.582 "nvme_io_md": false, 00:27:41.582 "write_zeroes": true, 00:27:41.582 "zcopy": true, 00:27:41.582 "get_zone_info": false, 00:27:41.582 "zone_management": false, 00:27:41.582 "zone_append": false, 00:27:41.582 "compare": false, 00:27:41.582 "compare_and_write": false, 00:27:41.582 "abort": true, 00:27:41.582 "seek_hole": false, 00:27:41.582 "seek_data": false, 00:27:41.582 "copy": true, 00:27:41.582 "nvme_iov_md": false 00:27:41.582 }, 00:27:41.582 "memory_domains": [ 00:27:41.582 { 00:27:41.582 "dma_device_id": "system", 00:27:41.582 "dma_device_type": 1 00:27:41.582 }, 00:27:41.582 { 00:27:41.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:41.582 "dma_device_type": 2 00:27:41.582 } 00:27:41.582 ], 00:27:41.582 "driver_specific": {} 00:27:41.582 }' 00:27:41.582 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:41.582 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:41.582 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:41.582 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:41.582 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:41.841 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:41.841 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:41.841 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:41.841 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:41.841 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:41.841 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:41.841 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:41.841 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:41.841 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:27:41.841 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:42.100 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:42.100 "name": "BaseBdev3", 00:27:42.100 "aliases": [ 00:27:42.100 "349f5615-780c-458a-a634-e52410c6590a" 00:27:42.100 ], 00:27:42.100 "product_name": "Malloc disk", 00:27:42.100 "block_size": 512, 00:27:42.100 "num_blocks": 65536, 00:27:42.100 "uuid": "349f5615-780c-458a-a634-e52410c6590a", 00:27:42.100 "assigned_rate_limits": { 00:27:42.100 "rw_ios_per_sec": 0, 00:27:42.100 "rw_mbytes_per_sec": 0, 
00:27:42.100 "r_mbytes_per_sec": 0, 00:27:42.100 "w_mbytes_per_sec": 0 00:27:42.100 }, 00:27:42.100 "claimed": true, 00:27:42.100 "claim_type": "exclusive_write", 00:27:42.100 "zoned": false, 00:27:42.100 "supported_io_types": { 00:27:42.100 "read": true, 00:27:42.100 "write": true, 00:27:42.100 "unmap": true, 00:27:42.100 "flush": true, 00:27:42.100 "reset": true, 00:27:42.100 "nvme_admin": false, 00:27:42.100 "nvme_io": false, 00:27:42.100 "nvme_io_md": false, 00:27:42.100 "write_zeroes": true, 00:27:42.100 "zcopy": true, 00:27:42.100 "get_zone_info": false, 00:27:42.100 "zone_management": false, 00:27:42.100 "zone_append": false, 00:27:42.100 "compare": false, 00:27:42.100 "compare_and_write": false, 00:27:42.100 "abort": true, 00:27:42.100 "seek_hole": false, 00:27:42.100 "seek_data": false, 00:27:42.100 "copy": true, 00:27:42.100 "nvme_iov_md": false 00:27:42.100 }, 00:27:42.100 "memory_domains": [ 00:27:42.100 { 00:27:42.100 "dma_device_id": "system", 00:27:42.100 "dma_device_type": 1 00:27:42.100 }, 00:27:42.100 { 00:27:42.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:42.100 "dma_device_type": 2 00:27:42.100 } 00:27:42.100 ], 00:27:42.100 "driver_specific": {} 00:27:42.100 }' 00:27:42.100 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:42.100 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:42.100 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:42.100 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:42.358 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:42.358 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:42.358 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:42.358 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:42.358 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:42.358 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:42.358 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:42.358 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:42.358 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:42.358 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:27:42.358 00:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:42.617 00:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:42.617 "name": "BaseBdev4", 00:27:42.617 "aliases": [ 00:27:42.617 "31d27a2f-de92-4ab5-9f20-be5a58e4ce96" 00:27:42.617 ], 00:27:42.617 "product_name": "Malloc disk", 00:27:42.617 "block_size": 512, 00:27:42.617 "num_blocks": 65536, 00:27:42.617 "uuid": "31d27a2f-de92-4ab5-9f20-be5a58e4ce96", 00:27:42.617 "assigned_rate_limits": { 00:27:42.617 "rw_ios_per_sec": 0, 00:27:42.617 "rw_mbytes_per_sec": 0, 00:27:42.617 "r_mbytes_per_sec": 0, 00:27:42.617 "w_mbytes_per_sec": 0 00:27:42.617 }, 00:27:42.617 "claimed": true, 00:27:42.617 "claim_type": 
"exclusive_write", 00:27:42.617 "zoned": false, 00:27:42.617 "supported_io_types": { 00:27:42.617 "read": true, 00:27:42.617 "write": true, 00:27:42.617 "unmap": true, 00:27:42.617 "flush": true, 00:27:42.617 "reset": true, 00:27:42.617 "nvme_admin": false, 00:27:42.617 "nvme_io": false, 00:27:42.617 "nvme_io_md": false, 00:27:42.617 "write_zeroes": true, 00:27:42.617 "zcopy": true, 00:27:42.617 "get_zone_info": false, 00:27:42.617 "zone_management": false, 00:27:42.617 "zone_append": false, 00:27:42.617 "compare": false, 00:27:42.617 "compare_and_write": false, 00:27:42.617 "abort": true, 00:27:42.617 "seek_hole": false, 00:27:42.617 "seek_data": false, 00:27:42.617 "copy": true, 00:27:42.617 "nvme_iov_md": false 00:27:42.617 }, 00:27:42.617 "memory_domains": [ 00:27:42.617 { 00:27:42.617 "dma_device_id": "system", 00:27:42.617 "dma_device_type": 1 00:27:42.617 }, 00:27:42.617 { 00:27:42.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:42.617 "dma_device_type": 2 00:27:42.617 } 00:27:42.617 ], 00:27:42.617 "driver_specific": {} 00:27:42.617 }' 00:27:42.617 00:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:42.617 00:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:42.617 00:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:42.617 00:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:42.876 00:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:42.876 00:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:42.876 00:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:42.876 00:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:42.876 00:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:42.876 00:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:42.876 00:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:42.876 00:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:42.876 00:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:43.135 [2024-07-25 00:54:05.764187] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:43.135 [2024-07-25 00:54:05.764334] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:43.135 [2024-07-25 00:54:05.764553] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:43.135 [2024-07-25 00:54:05.764905] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:43.135 [2024-07-25 00:54:05.765004] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name Existed_Raid, state offline 00:27:43.135 00:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 142266 00:27:43.135 00:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 142266 ']' 00:27:43.136 00:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 142266 00:27:43.136 
00:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:27:43.395 00:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:43.395 00:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 142266 00:27:43.395 00:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:43.395 00:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:43.395 00:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 142266' 00:27:43.395 killing process with pid 142266 00:27:43.395 00:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 142266 00:27:43.395 [2024-07-25 00:54:05.818502] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:43.395 00:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 142266 00:27:43.654 [2024-07-25 00:54:06.223807] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:45.030 00:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:27:45.030 00:27:45.030 real 0m32.813s 00:27:45.030 user 0m59.020s 00:27:45.030 sys 0m5.059s 00:27:45.030 00:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:45.030 00:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:45.030 ************************************ 00:27:45.030 END TEST raid_state_function_test_sb 00:27:45.030 ************************************ 00:27:45.030 00:54:07 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:27:45.030 00:54:07 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:27:45.030 00:54:07 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:45.030 00:54:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:45.030 ************************************ 00:27:45.030 START TEST raid_superblock_test 00:27:45.030 ************************************ 00:27:45.030 00:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 4 00:27:45.030 00:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:27:45.030 00:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:27:45.030 00:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:27:45.030 00:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:27:45.030 00:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:27:45.030 00:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:27:45.030 00:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:27:45.030 00:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:27:45.030 00:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:27:45.030 00:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:27:45.030 00:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local 
strip_size_create_arg 00:27:45.030 00:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:27:45.030 00:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:27:45.030 00:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:27:45.030 00:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:27:45.030 00:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=143348 00:27:45.030 00:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 143348 /var/tmp/spdk-raid.sock 00:27:45.030 00:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 143348 ']' 00:27:45.030 00:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:27:45.030 00:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:45.030 00:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:45.030 00:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:45.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:45.030 00:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:45.030 00:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:45.289 [2024-07-25 00:54:07.700018] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
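The trace above shows raid_superblock_test launching a dedicated bdev_svc application on its own RPC socket (/var/tmp/spdk-raid.sock) with bdev_raid debug logging enabled, then blocking in waitforlisten until the target answers. A minimal stand-alone sketch of that launch-and-poll pattern, using only the binaries and RPC calls visible in the trace; the retry loop and timeout below are assumptions for illustration, not the actual autotest_common.sh waitforlisten implementation:

SPDK_DIR=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-raid.sock
# start the bdev service app on a private RPC socket with raid debug logs enabled
"$SPDK_DIR"/test/app/bdev_svc/bdev_svc -r "$SOCK" -L bdev_raid &
raid_pid=$!
# poll the socket until the app accepts RPCs (illustrative retry loop)
for _ in $(seq 1 100); do
    "$SPDK_DIR"/scripts/rpc.py -s "$SOCK" bdev_get_bdevs >/dev/null 2>&1 && break
    sleep 0.1
done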
00:27:45.289 [2024-07-25 00:54:07.700365] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143348 ] 00:27:45.289 [2024-07-25 00:54:07.865168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.550 [2024-07-25 00:54:08.117694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.821 [2024-07-25 00:54:08.317164] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:46.119 00:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:46.119 00:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:27:46.119 00:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:27:46.119 00:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:46.119 00:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:27:46.119 00:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:27:46.119 00:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:46.119 00:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:46.119 00:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:27:46.119 00:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:46.119 00:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:27:46.378 malloc1 00:27:46.378 00:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:46.637 [2024-07-25 00:54:09.069766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:46.637 [2024-07-25 00:54:09.070055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:46.637 [2024-07-25 00:54:09.070131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:27:46.637 [2024-07-25 00:54:09.070241] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:46.637 [2024-07-25 00:54:09.072501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:46.637 [2024-07-25 00:54:09.072680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:46.637 pt1 00:27:46.637 00:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:27:46.637 00:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:46.637 00:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:27:46.638 00:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:27:46.638 00:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:46.638 00:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:27:46.638 00:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:27:46.638 00:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:46.638 00:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:27:46.896 malloc2 00:27:46.896 00:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:47.155 [2024-07-25 00:54:09.633083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:47.155 [2024-07-25 00:54:09.633356] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:47.155 [2024-07-25 00:54:09.633494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:27:47.155 [2024-07-25 00:54:09.633583] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:47.155 [2024-07-25 00:54:09.635816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:47.155 [2024-07-25 00:54:09.635965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:47.155 pt2 00:27:47.155 00:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:27:47.155 00:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:47.155 00:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:27:47.155 00:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:27:47.155 00:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:27:47.155 00:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:47.155 00:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:27:47.155 00:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:47.155 00:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:27:47.414 malloc3 00:27:47.414 00:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:47.673 [2024-07-25 00:54:10.095870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:47.673 [2024-07-25 00:54:10.096124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:47.673 [2024-07-25 00:54:10.096207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:27:47.673 [2024-07-25 00:54:10.096307] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:47.673 [2024-07-25 00:54:10.098565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:47.673 [2024-07-25 00:54:10.098750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:47.673 pt3 00:27:47.673 
00:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:27:47.673 00:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:47.673 00:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:27:47.673 00:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:27:47.673 00:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:27:47.673 00:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:47.673 00:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:27:47.673 00:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:47.673 00:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:27:47.673 malloc4 00:27:47.673 00:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:27:47.932 [2024-07-25 00:54:10.557887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:27:47.932 [2024-07-25 00:54:10.558122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:47.932 [2024-07-25 00:54:10.558189] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:27:47.932 [2024-07-25 00:54:10.558311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:47.932 [2024-07-25 00:54:10.560561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:47.932 [2024-07-25 00:54:10.560729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:47.932 pt4 00:27:47.932 00:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:27:47.932 00:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:47.932 00:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:27:48.191 [2024-07-25 00:54:10.733943] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:48.191 [2024-07-25 00:54:10.736025] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:48.191 [2024-07-25 00:54:10.736202] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:48.191 [2024-07-25 00:54:10.736306] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:48.191 [2024-07-25 00:54:10.736615] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:27:48.191 [2024-07-25 00:54:10.736725] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:48.191 [2024-07-25 00:54:10.736914] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:27:48.191 [2024-07-25 00:54:10.737376] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:27:48.191 [2024-07-25 00:54:10.737490] bdev_raid.c:1751:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:27:48.191 [2024-07-25 00:54:10.737714] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:48.191 00:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:27:48.191 00:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:48.191 00:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:48.191 00:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:48.191 00:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:48.191 00:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:48.191 00:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:48.191 00:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:48.191 00:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:48.191 00:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:48.191 00:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:48.191 00:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:48.450 00:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:48.450 "name": "raid_bdev1", 00:27:48.450 "uuid": "7309bd26-75b8-41ec-a2d9-6c3e5c012312", 00:27:48.450 "strip_size_kb": 0, 00:27:48.450 "state": "online", 00:27:48.450 "raid_level": "raid1", 00:27:48.450 "superblock": true, 00:27:48.450 "num_base_bdevs": 4, 00:27:48.450 "num_base_bdevs_discovered": 4, 00:27:48.450 "num_base_bdevs_operational": 4, 00:27:48.450 "base_bdevs_list": [ 00:27:48.450 { 00:27:48.450 "name": "pt1", 00:27:48.450 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:48.450 "is_configured": true, 00:27:48.450 "data_offset": 2048, 00:27:48.450 "data_size": 63488 00:27:48.450 }, 00:27:48.450 { 00:27:48.450 "name": "pt2", 00:27:48.450 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:48.450 "is_configured": true, 00:27:48.450 "data_offset": 2048, 00:27:48.450 "data_size": 63488 00:27:48.450 }, 00:27:48.450 { 00:27:48.450 "name": "pt3", 00:27:48.450 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:48.450 "is_configured": true, 00:27:48.450 "data_offset": 2048, 00:27:48.450 "data_size": 63488 00:27:48.450 }, 00:27:48.450 { 00:27:48.450 "name": "pt4", 00:27:48.450 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:48.450 "is_configured": true, 00:27:48.450 "data_offset": 2048, 00:27:48.450 "data_size": 63488 00:27:48.450 } 00:27:48.450 ] 00:27:48.451 }' 00:27:48.451 00:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:48.451 00:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.019 00:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:27:49.019 00:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:27:49.019 00:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:49.019 
00:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:49.019 00:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:49.019 00:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:27:49.019 00:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:49.019 00:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:49.278 [2024-07-25 00:54:11.694372] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:49.278 00:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:49.278 "name": "raid_bdev1", 00:27:49.278 "aliases": [ 00:27:49.278 "7309bd26-75b8-41ec-a2d9-6c3e5c012312" 00:27:49.278 ], 00:27:49.278 "product_name": "Raid Volume", 00:27:49.278 "block_size": 512, 00:27:49.278 "num_blocks": 63488, 00:27:49.278 "uuid": "7309bd26-75b8-41ec-a2d9-6c3e5c012312", 00:27:49.278 "assigned_rate_limits": { 00:27:49.278 "rw_ios_per_sec": 0, 00:27:49.278 "rw_mbytes_per_sec": 0, 00:27:49.278 "r_mbytes_per_sec": 0, 00:27:49.278 "w_mbytes_per_sec": 0 00:27:49.278 }, 00:27:49.278 "claimed": false, 00:27:49.278 "zoned": false, 00:27:49.278 "supported_io_types": { 00:27:49.278 "read": true, 00:27:49.278 "write": true, 00:27:49.278 "unmap": false, 00:27:49.278 "flush": false, 00:27:49.278 "reset": true, 00:27:49.278 "nvme_admin": false, 00:27:49.278 "nvme_io": false, 00:27:49.278 "nvme_io_md": false, 00:27:49.278 "write_zeroes": true, 00:27:49.278 "zcopy": false, 00:27:49.278 "get_zone_info": false, 00:27:49.278 "zone_management": false, 00:27:49.278 "zone_append": false, 00:27:49.278 "compare": false, 00:27:49.278 "compare_and_write": false, 00:27:49.278 "abort": false, 00:27:49.278 "seek_hole": false, 00:27:49.278 "seek_data": false, 00:27:49.278 "copy": false, 00:27:49.278 "nvme_iov_md": false 00:27:49.278 }, 00:27:49.278 "memory_domains": [ 00:27:49.278 { 00:27:49.278 "dma_device_id": "system", 00:27:49.278 "dma_device_type": 1 00:27:49.278 }, 00:27:49.278 { 00:27:49.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:49.278 "dma_device_type": 2 00:27:49.278 }, 00:27:49.278 { 00:27:49.278 "dma_device_id": "system", 00:27:49.278 "dma_device_type": 1 00:27:49.278 }, 00:27:49.278 { 00:27:49.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:49.278 "dma_device_type": 2 00:27:49.278 }, 00:27:49.278 { 00:27:49.278 "dma_device_id": "system", 00:27:49.278 "dma_device_type": 1 00:27:49.278 }, 00:27:49.278 { 00:27:49.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:49.278 "dma_device_type": 2 00:27:49.278 }, 00:27:49.278 { 00:27:49.278 "dma_device_id": "system", 00:27:49.278 "dma_device_type": 1 00:27:49.278 }, 00:27:49.278 { 00:27:49.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:49.279 "dma_device_type": 2 00:27:49.279 } 00:27:49.279 ], 00:27:49.279 "driver_specific": { 00:27:49.279 "raid": { 00:27:49.279 "uuid": "7309bd26-75b8-41ec-a2d9-6c3e5c012312", 00:27:49.279 "strip_size_kb": 0, 00:27:49.279 "state": "online", 00:27:49.279 "raid_level": "raid1", 00:27:49.279 "superblock": true, 00:27:49.279 "num_base_bdevs": 4, 00:27:49.279 "num_base_bdevs_discovered": 4, 00:27:49.279 "num_base_bdevs_operational": 4, 00:27:49.279 "base_bdevs_list": [ 00:27:49.279 { 00:27:49.279 "name": "pt1", 00:27:49.279 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:49.279 "is_configured": true, 00:27:49.279 
"data_offset": 2048, 00:27:49.279 "data_size": 63488 00:27:49.279 }, 00:27:49.279 { 00:27:49.279 "name": "pt2", 00:27:49.279 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:49.279 "is_configured": true, 00:27:49.279 "data_offset": 2048, 00:27:49.279 "data_size": 63488 00:27:49.279 }, 00:27:49.279 { 00:27:49.279 "name": "pt3", 00:27:49.279 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:49.279 "is_configured": true, 00:27:49.279 "data_offset": 2048, 00:27:49.279 "data_size": 63488 00:27:49.279 }, 00:27:49.279 { 00:27:49.279 "name": "pt4", 00:27:49.279 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:49.279 "is_configured": true, 00:27:49.279 "data_offset": 2048, 00:27:49.279 "data_size": 63488 00:27:49.279 } 00:27:49.279 ] 00:27:49.279 } 00:27:49.279 } 00:27:49.279 }' 00:27:49.279 00:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:49.279 00:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:27:49.279 pt2 00:27:49.279 pt3 00:27:49.279 pt4' 00:27:49.279 00:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:49.279 00:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:49.279 00:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:49.543 00:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:49.543 "name": "pt1", 00:27:49.543 "aliases": [ 00:27:49.543 "00000000-0000-0000-0000-000000000001" 00:27:49.543 ], 00:27:49.543 "product_name": "passthru", 00:27:49.543 "block_size": 512, 00:27:49.543 "num_blocks": 65536, 00:27:49.543 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:49.543 "assigned_rate_limits": { 00:27:49.543 "rw_ios_per_sec": 0, 00:27:49.543 "rw_mbytes_per_sec": 0, 00:27:49.543 "r_mbytes_per_sec": 0, 00:27:49.543 "w_mbytes_per_sec": 0 00:27:49.543 }, 00:27:49.543 "claimed": true, 00:27:49.543 "claim_type": "exclusive_write", 00:27:49.543 "zoned": false, 00:27:49.543 "supported_io_types": { 00:27:49.543 "read": true, 00:27:49.543 "write": true, 00:27:49.543 "unmap": true, 00:27:49.543 "flush": true, 00:27:49.543 "reset": true, 00:27:49.543 "nvme_admin": false, 00:27:49.543 "nvme_io": false, 00:27:49.543 "nvme_io_md": false, 00:27:49.543 "write_zeroes": true, 00:27:49.543 "zcopy": true, 00:27:49.543 "get_zone_info": false, 00:27:49.543 "zone_management": false, 00:27:49.543 "zone_append": false, 00:27:49.543 "compare": false, 00:27:49.543 "compare_and_write": false, 00:27:49.543 "abort": true, 00:27:49.543 "seek_hole": false, 00:27:49.543 "seek_data": false, 00:27:49.543 "copy": true, 00:27:49.543 "nvme_iov_md": false 00:27:49.543 }, 00:27:49.543 "memory_domains": [ 00:27:49.543 { 00:27:49.543 "dma_device_id": "system", 00:27:49.543 "dma_device_type": 1 00:27:49.543 }, 00:27:49.543 { 00:27:49.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:49.543 "dma_device_type": 2 00:27:49.543 } 00:27:49.543 ], 00:27:49.543 "driver_specific": { 00:27:49.543 "passthru": { 00:27:49.543 "name": "pt1", 00:27:49.543 "base_bdev_name": "malloc1" 00:27:49.543 } 00:27:49.543 } 00:27:49.543 }' 00:27:49.543 00:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:49.543 00:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:49.543 00:54:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:49.543 00:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:49.543 00:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:49.543 00:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:49.543 00:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:49.543 00:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:49.809 00:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:49.809 00:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:49.809 00:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:49.809 00:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:49.809 00:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:49.809 00:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:49.809 00:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:50.067 00:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:50.067 "name": "pt2", 00:27:50.067 "aliases": [ 00:27:50.067 "00000000-0000-0000-0000-000000000002" 00:27:50.067 ], 00:27:50.067 "product_name": "passthru", 00:27:50.067 "block_size": 512, 00:27:50.067 "num_blocks": 65536, 00:27:50.067 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:50.067 "assigned_rate_limits": { 00:27:50.067 "rw_ios_per_sec": 0, 00:27:50.067 "rw_mbytes_per_sec": 0, 00:27:50.067 "r_mbytes_per_sec": 0, 00:27:50.067 "w_mbytes_per_sec": 0 00:27:50.067 }, 00:27:50.067 "claimed": true, 00:27:50.067 "claim_type": "exclusive_write", 00:27:50.067 "zoned": false, 00:27:50.067 "supported_io_types": { 00:27:50.067 "read": true, 00:27:50.067 "write": true, 00:27:50.067 "unmap": true, 00:27:50.067 "flush": true, 00:27:50.067 "reset": true, 00:27:50.067 "nvme_admin": false, 00:27:50.067 "nvme_io": false, 00:27:50.067 "nvme_io_md": false, 00:27:50.067 "write_zeroes": true, 00:27:50.067 "zcopy": true, 00:27:50.067 "get_zone_info": false, 00:27:50.067 "zone_management": false, 00:27:50.067 "zone_append": false, 00:27:50.067 "compare": false, 00:27:50.067 "compare_and_write": false, 00:27:50.067 "abort": true, 00:27:50.067 "seek_hole": false, 00:27:50.067 "seek_data": false, 00:27:50.067 "copy": true, 00:27:50.067 "nvme_iov_md": false 00:27:50.067 }, 00:27:50.067 "memory_domains": [ 00:27:50.067 { 00:27:50.067 "dma_device_id": "system", 00:27:50.067 "dma_device_type": 1 00:27:50.067 }, 00:27:50.067 { 00:27:50.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:50.067 "dma_device_type": 2 00:27:50.067 } 00:27:50.067 ], 00:27:50.067 "driver_specific": { 00:27:50.067 "passthru": { 00:27:50.067 "name": "pt2", 00:27:50.067 "base_bdev_name": "malloc2" 00:27:50.067 } 00:27:50.067 } 00:27:50.067 }' 00:27:50.067 00:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:50.067 00:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:50.067 00:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:50.067 00:54:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:50.067 00:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:50.067 00:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:50.067 00:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:50.067 00:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:50.325 00:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:50.325 00:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:50.325 00:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:50.325 00:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:50.325 00:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:50.325 00:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:50.325 00:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:27:50.584 00:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:50.584 "name": "pt3", 00:27:50.584 "aliases": [ 00:27:50.584 "00000000-0000-0000-0000-000000000003" 00:27:50.584 ], 00:27:50.584 "product_name": "passthru", 00:27:50.584 "block_size": 512, 00:27:50.584 "num_blocks": 65536, 00:27:50.584 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:50.584 "assigned_rate_limits": { 00:27:50.584 "rw_ios_per_sec": 0, 00:27:50.584 "rw_mbytes_per_sec": 0, 00:27:50.584 "r_mbytes_per_sec": 0, 00:27:50.584 "w_mbytes_per_sec": 0 00:27:50.584 }, 00:27:50.584 "claimed": true, 00:27:50.584 "claim_type": "exclusive_write", 00:27:50.584 "zoned": false, 00:27:50.584 "supported_io_types": { 00:27:50.584 "read": true, 00:27:50.584 "write": true, 00:27:50.584 "unmap": true, 00:27:50.584 "flush": true, 00:27:50.584 "reset": true, 00:27:50.584 "nvme_admin": false, 00:27:50.584 "nvme_io": false, 00:27:50.584 "nvme_io_md": false, 00:27:50.584 "write_zeroes": true, 00:27:50.584 "zcopy": true, 00:27:50.584 "get_zone_info": false, 00:27:50.584 "zone_management": false, 00:27:50.584 "zone_append": false, 00:27:50.584 "compare": false, 00:27:50.584 "compare_and_write": false, 00:27:50.584 "abort": true, 00:27:50.584 "seek_hole": false, 00:27:50.584 "seek_data": false, 00:27:50.584 "copy": true, 00:27:50.584 "nvme_iov_md": false 00:27:50.584 }, 00:27:50.584 "memory_domains": [ 00:27:50.584 { 00:27:50.584 "dma_device_id": "system", 00:27:50.584 "dma_device_type": 1 00:27:50.584 }, 00:27:50.584 { 00:27:50.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:50.584 "dma_device_type": 2 00:27:50.584 } 00:27:50.584 ], 00:27:50.584 "driver_specific": { 00:27:50.584 "passthru": { 00:27:50.584 "name": "pt3", 00:27:50.584 "base_bdev_name": "malloc3" 00:27:50.584 } 00:27:50.584 } 00:27:50.584 }' 00:27:50.584 00:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:50.584 00:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:50.584 00:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:50.584 00:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:50.584 00:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:50.584 
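
The repeated jq probes above are the per-base-bdev half of verify_raid_bdev_properties: for every configured passthru bdev the test dumps its descriptor once and then checks block_size, md_size, md_interleave and dif_type against the values expected for a plain 512-byte passthru device. A condensed sketch of that pattern, built only from the RPC and jq filters visible in the trace (the check_pt helper name is illustrative, not part of the suite):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    check_pt() {
        local name=$1 info
        info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
        [[ $(jq .block_size <<< "$info") == 512 ]]      # matches the 512-byte malloc base
        [[ $(jq .md_size <<< "$info") == null ]]        # no separate metadata
        [[ $(jq .md_interleave <<< "$info") == null ]]  # no interleaved metadata
        [[ $(jq .dif_type <<< "$info") == null ]]       # no DIF protection
    }
    for name in pt1 pt2 pt3 pt4; do check_pt "$name"; done
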
00:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:50.584 00:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:50.584 00:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:50.584 00:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:50.584 00:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:50.843 00:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:50.843 00:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:50.843 00:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:50.844 00:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:27:50.844 00:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:51.103 00:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:51.103 "name": "pt4", 00:27:51.103 "aliases": [ 00:27:51.103 "00000000-0000-0000-0000-000000000004" 00:27:51.103 ], 00:27:51.103 "product_name": "passthru", 00:27:51.103 "block_size": 512, 00:27:51.103 "num_blocks": 65536, 00:27:51.103 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:51.103 "assigned_rate_limits": { 00:27:51.103 "rw_ios_per_sec": 0, 00:27:51.103 "rw_mbytes_per_sec": 0, 00:27:51.103 "r_mbytes_per_sec": 0, 00:27:51.103 "w_mbytes_per_sec": 0 00:27:51.103 }, 00:27:51.103 "claimed": true, 00:27:51.103 "claim_type": "exclusive_write", 00:27:51.103 "zoned": false, 00:27:51.103 "supported_io_types": { 00:27:51.103 "read": true, 00:27:51.103 "write": true, 00:27:51.103 "unmap": true, 00:27:51.103 "flush": true, 00:27:51.103 "reset": true, 00:27:51.103 "nvme_admin": false, 00:27:51.103 "nvme_io": false, 00:27:51.103 "nvme_io_md": false, 00:27:51.103 "write_zeroes": true, 00:27:51.103 "zcopy": true, 00:27:51.103 "get_zone_info": false, 00:27:51.103 "zone_management": false, 00:27:51.103 "zone_append": false, 00:27:51.103 "compare": false, 00:27:51.103 "compare_and_write": false, 00:27:51.103 "abort": true, 00:27:51.103 "seek_hole": false, 00:27:51.103 "seek_data": false, 00:27:51.103 "copy": true, 00:27:51.103 "nvme_iov_md": false 00:27:51.103 }, 00:27:51.103 "memory_domains": [ 00:27:51.103 { 00:27:51.103 "dma_device_id": "system", 00:27:51.103 "dma_device_type": 1 00:27:51.103 }, 00:27:51.103 { 00:27:51.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:51.103 "dma_device_type": 2 00:27:51.103 } 00:27:51.103 ], 00:27:51.103 "driver_specific": { 00:27:51.103 "passthru": { 00:27:51.103 "name": "pt4", 00:27:51.103 "base_bdev_name": "malloc4" 00:27:51.103 } 00:27:51.103 } 00:27:51.103 }' 00:27:51.103 00:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:51.103 00:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:51.103 00:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:51.103 00:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:51.103 00:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:51.103 00:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:51.103 00:54:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:51.103 00:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:51.361 00:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:51.361 00:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:51.361 00:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:51.361 00:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:51.361 00:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:51.361 00:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:27:51.619 [2024-07-25 00:54:14.058996] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:51.619 00:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=7309bd26-75b8-41ec-a2d9-6c3e5c012312 00:27:51.619 00:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 7309bd26-75b8-41ec-a2d9-6c3e5c012312 ']' 00:27:51.619 00:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:51.877 [2024-07-25 00:54:14.334797] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:51.877 [2024-07-25 00:54:14.334965] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:51.877 [2024-07-25 00:54:14.335174] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:51.877 [2024-07-25 00:54:14.335354] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:51.877 [2024-07-25 00:54:14.335454] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:27:51.877 00:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:51.877 00:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:27:51.877 00:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:27:51.877 00:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:27:51.877 00:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:51.877 00:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:52.135 00:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:52.135 00:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:52.393 00:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:52.393 00:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:27:52.651 00:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:52.651 00:54:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:27:52.910 00:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:27:52.910 00:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:52.910 00:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:27:52.910 00:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:27:52.910 00:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:27:52.910 00:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:27:52.910 00:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:52.910 00:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:52.910 00:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:52.910 00:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:52.910 00:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:52.910 00:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:52.910 00:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:52.910 00:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:52.910 00:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:27:53.169 [2024-07-25 00:54:15.715012] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:53.169 [2024-07-25 00:54:15.717104] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:53.169 [2024-07-25 00:54:15.717327] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:27:53.169 [2024-07-25 00:54:15.717400] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:27:53.169 [2024-07-25 00:54:15.717562] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:27:53.169 [2024-07-25 00:54:15.717704] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:27:53.169 [2024-07-25 00:54:15.717828] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:27:53.169 [2024-07-25 00:54:15.717968] 
bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:27:53.169 [2024-07-25 00:54:15.718092] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:53.169 [2024-07-25 00:54:15.718130] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:27:53.169 request: 00:27:53.169 { 00:27:53.169 "name": "raid_bdev1", 00:27:53.169 "raid_level": "raid1", 00:27:53.169 "base_bdevs": [ 00:27:53.169 "malloc1", 00:27:53.169 "malloc2", 00:27:53.169 "malloc3", 00:27:53.169 "malloc4" 00:27:53.169 ], 00:27:53.169 "superblock": false, 00:27:53.169 "method": "bdev_raid_create", 00:27:53.169 "req_id": 1 00:27:53.169 } 00:27:53.169 Got JSON-RPC error response 00:27:53.169 response: 00:27:53.169 { 00:27:53.169 "code": -17, 00:27:53.169 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:53.169 } 00:27:53.169 00:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:27:53.169 00:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:53.169 00:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:53.169 00:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:53.169 00:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:53.169 00:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:27:53.428 00:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:27:53.428 00:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:27:53.428 00:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:53.428 [2024-07-25 00:54:16.076288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:53.428 [2024-07-25 00:54:16.076530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:53.428 [2024-07-25 00:54:16.076599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:27:53.428 [2024-07-25 00:54:16.076721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:53.428 [2024-07-25 00:54:16.079008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:53.428 [2024-07-25 00:54:16.079168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:53.428 [2024-07-25 00:54:16.079381] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:53.428 [2024-07-25 00:54:16.079529] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:53.686 pt1 00:27:53.686 00:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:27:53.686 00:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:53.686 00:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:53.686 00:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:53.686 00:54:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:53.686 00:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:53.686 00:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:53.686 00:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:53.686 00:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:53.686 00:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:53.686 00:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:53.686 00:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:53.945 00:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:53.945 "name": "raid_bdev1", 00:27:53.945 "uuid": "7309bd26-75b8-41ec-a2d9-6c3e5c012312", 00:27:53.945 "strip_size_kb": 0, 00:27:53.945 "state": "configuring", 00:27:53.945 "raid_level": "raid1", 00:27:53.945 "superblock": true, 00:27:53.945 "num_base_bdevs": 4, 00:27:53.945 "num_base_bdevs_discovered": 1, 00:27:53.945 "num_base_bdevs_operational": 4, 00:27:53.945 "base_bdevs_list": [ 00:27:53.945 { 00:27:53.945 "name": "pt1", 00:27:53.945 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:53.945 "is_configured": true, 00:27:53.945 "data_offset": 2048, 00:27:53.945 "data_size": 63488 00:27:53.945 }, 00:27:53.945 { 00:27:53.945 "name": null, 00:27:53.945 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:53.945 "is_configured": false, 00:27:53.945 "data_offset": 2048, 00:27:53.945 "data_size": 63488 00:27:53.945 }, 00:27:53.945 { 00:27:53.945 "name": null, 00:27:53.945 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:53.945 "is_configured": false, 00:27:53.945 "data_offset": 2048, 00:27:53.945 "data_size": 63488 00:27:53.945 }, 00:27:53.945 { 00:27:53.945 "name": null, 00:27:53.945 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:53.945 "is_configured": false, 00:27:53.945 "data_offset": 2048, 00:27:53.945 "data_size": 63488 00:27:53.945 } 00:27:53.945 ] 00:27:53.945 }' 00:27:53.945 00:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:53.945 00:54:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:54.512 00:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:27:54.512 00:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:54.512 [2024-07-25 00:54:17.108488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:54.512 [2024-07-25 00:54:17.108737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:54.512 [2024-07-25 00:54:17.108828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:27:54.512 [2024-07-25 00:54:17.108954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:54.512 [2024-07-25 00:54:17.109497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:54.512 [2024-07-25 00:54:17.109655] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:27:54.512 [2024-07-25 00:54:17.109881] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:54.512 [2024-07-25 00:54:17.110020] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:54.512 pt2 00:27:54.512 00:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:54.771 [2024-07-25 00:54:17.372548] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:27:54.771 00:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:27:54.771 00:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:54.771 00:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:54.771 00:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:54.771 00:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:54.771 00:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:54.771 00:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:54.771 00:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:54.771 00:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:54.771 00:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:54.771 00:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:54.771 00:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:55.030 00:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:55.030 "name": "raid_bdev1", 00:27:55.030 "uuid": "7309bd26-75b8-41ec-a2d9-6c3e5c012312", 00:27:55.030 "strip_size_kb": 0, 00:27:55.030 "state": "configuring", 00:27:55.030 "raid_level": "raid1", 00:27:55.030 "superblock": true, 00:27:55.030 "num_base_bdevs": 4, 00:27:55.030 "num_base_bdevs_discovered": 1, 00:27:55.030 "num_base_bdevs_operational": 4, 00:27:55.030 "base_bdevs_list": [ 00:27:55.030 { 00:27:55.030 "name": "pt1", 00:27:55.030 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:55.030 "is_configured": true, 00:27:55.030 "data_offset": 2048, 00:27:55.030 "data_size": 63488 00:27:55.030 }, 00:27:55.030 { 00:27:55.030 "name": null, 00:27:55.030 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:55.030 "is_configured": false, 00:27:55.030 "data_offset": 2048, 00:27:55.030 "data_size": 63488 00:27:55.030 }, 00:27:55.030 { 00:27:55.030 "name": null, 00:27:55.030 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:55.030 "is_configured": false, 00:27:55.030 "data_offset": 2048, 00:27:55.030 "data_size": 63488 00:27:55.030 }, 00:27:55.030 { 00:27:55.030 "name": null, 00:27:55.030 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:55.030 "is_configured": false, 00:27:55.030 "data_offset": 2048, 00:27:55.030 "data_size": 63488 00:27:55.030 } 00:27:55.030 ] 00:27:55.030 }' 00:27:55.030 00:54:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:55.030 00:54:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.597 00:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:27:55.598 00:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:55.598 00:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:55.856 [2024-07-25 00:54:18.424716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:55.856 [2024-07-25 00:54:18.424968] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:55.856 [2024-07-25 00:54:18.425039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:27:55.856 [2024-07-25 00:54:18.425163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:55.856 [2024-07-25 00:54:18.425663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:55.856 [2024-07-25 00:54:18.425829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:55.856 [2024-07-25 00:54:18.426035] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:55.856 [2024-07-25 00:54:18.426181] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:55.856 pt2 00:27:55.856 00:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:27:55.856 00:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:55.856 00:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:56.115 [2024-07-25 00:54:18.684808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:56.115 [2024-07-25 00:54:18.685031] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:56.115 [2024-07-25 00:54:18.685093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:27:56.115 [2024-07-25 00:54:18.685227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:56.115 [2024-07-25 00:54:18.685701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:56.115 [2024-07-25 00:54:18.685854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:56.115 [2024-07-25 00:54:18.686050] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:27:56.115 [2024-07-25 00:54:18.686163] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:56.115 pt3 00:27:56.115 00:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:27:56.115 00:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:56.115 00:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:27:56.374 [2024-07-25 00:54:18.932801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:27:56.374 [2024-07-25 00:54:18.933032] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:56.374 [2024-07-25 00:54:18.933093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:27:56.374 [2024-07-25 00:54:18.933221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:56.374 [2024-07-25 00:54:18.933764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:56.374 [2024-07-25 00:54:18.933925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:56.374 [2024-07-25 00:54:18.934129] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:27:56.374 [2024-07-25 00:54:18.934245] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:56.374 [2024-07-25 00:54:18.934430] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:27:56.374 [2024-07-25 00:54:18.934531] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:56.374 [2024-07-25 00:54:18.934656] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:27:56.374 [2024-07-25 00:54:18.935053] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:27:56.374 [2024-07-25 00:54:18.935167] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:27:56.374 [2024-07-25 00:54:18.935396] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:56.374 pt4 00:27:56.374 00:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:27:56.374 00:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:56.374 00:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:27:56.374 00:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:56.374 00:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:56.374 00:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:56.374 00:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:56.374 00:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:56.374 00:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:56.374 00:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:56.374 00:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:56.374 00:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:56.374 00:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:56.374 00:54:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:56.633 00:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:56.633 "name": "raid_bdev1", 00:27:56.633 "uuid": "7309bd26-75b8-41ec-a2d9-6c3e5c012312", 00:27:56.633 "strip_size_kb": 0, 00:27:56.633 "state": "online", 00:27:56.633 "raid_level": "raid1", 00:27:56.633 "superblock": true, 00:27:56.633 
"num_base_bdevs": 4, 00:27:56.633 "num_base_bdevs_discovered": 4, 00:27:56.633 "num_base_bdevs_operational": 4, 00:27:56.633 "base_bdevs_list": [ 00:27:56.633 { 00:27:56.633 "name": "pt1", 00:27:56.633 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:56.633 "is_configured": true, 00:27:56.633 "data_offset": 2048, 00:27:56.633 "data_size": 63488 00:27:56.633 }, 00:27:56.633 { 00:27:56.633 "name": "pt2", 00:27:56.633 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:56.633 "is_configured": true, 00:27:56.633 "data_offset": 2048, 00:27:56.633 "data_size": 63488 00:27:56.633 }, 00:27:56.633 { 00:27:56.633 "name": "pt3", 00:27:56.633 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:56.633 "is_configured": true, 00:27:56.633 "data_offset": 2048, 00:27:56.633 "data_size": 63488 00:27:56.633 }, 00:27:56.633 { 00:27:56.633 "name": "pt4", 00:27:56.633 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:56.633 "is_configured": true, 00:27:56.633 "data_offset": 2048, 00:27:56.633 "data_size": 63488 00:27:56.633 } 00:27:56.633 ] 00:27:56.633 }' 00:27:56.633 00:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:56.633 00:54:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.200 00:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:27:57.200 00:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:27:57.200 00:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:57.200 00:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:57.200 00:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:57.200 00:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:27:57.200 00:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:57.200 00:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:57.459 [2024-07-25 00:54:19.953292] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:57.459 00:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:57.459 "name": "raid_bdev1", 00:27:57.459 "aliases": [ 00:27:57.459 "7309bd26-75b8-41ec-a2d9-6c3e5c012312" 00:27:57.459 ], 00:27:57.459 "product_name": "Raid Volume", 00:27:57.459 "block_size": 512, 00:27:57.459 "num_blocks": 63488, 00:27:57.459 "uuid": "7309bd26-75b8-41ec-a2d9-6c3e5c012312", 00:27:57.459 "assigned_rate_limits": { 00:27:57.460 "rw_ios_per_sec": 0, 00:27:57.460 "rw_mbytes_per_sec": 0, 00:27:57.460 "r_mbytes_per_sec": 0, 00:27:57.460 "w_mbytes_per_sec": 0 00:27:57.460 }, 00:27:57.460 "claimed": false, 00:27:57.460 "zoned": false, 00:27:57.460 "supported_io_types": { 00:27:57.460 "read": true, 00:27:57.460 "write": true, 00:27:57.460 "unmap": false, 00:27:57.460 "flush": false, 00:27:57.460 "reset": true, 00:27:57.460 "nvme_admin": false, 00:27:57.460 "nvme_io": false, 00:27:57.460 "nvme_io_md": false, 00:27:57.460 "write_zeroes": true, 00:27:57.460 "zcopy": false, 00:27:57.460 "get_zone_info": false, 00:27:57.460 "zone_management": false, 00:27:57.460 "zone_append": false, 00:27:57.460 "compare": false, 00:27:57.460 "compare_and_write": false, 00:27:57.460 "abort": false, 00:27:57.460 "seek_hole": false, 
00:27:57.460 "seek_data": false, 00:27:57.460 "copy": false, 00:27:57.460 "nvme_iov_md": false 00:27:57.460 }, 00:27:57.460 "memory_domains": [ 00:27:57.460 { 00:27:57.460 "dma_device_id": "system", 00:27:57.460 "dma_device_type": 1 00:27:57.460 }, 00:27:57.460 { 00:27:57.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:57.460 "dma_device_type": 2 00:27:57.460 }, 00:27:57.460 { 00:27:57.460 "dma_device_id": "system", 00:27:57.460 "dma_device_type": 1 00:27:57.460 }, 00:27:57.460 { 00:27:57.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:57.460 "dma_device_type": 2 00:27:57.460 }, 00:27:57.460 { 00:27:57.460 "dma_device_id": "system", 00:27:57.460 "dma_device_type": 1 00:27:57.460 }, 00:27:57.460 { 00:27:57.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:57.460 "dma_device_type": 2 00:27:57.460 }, 00:27:57.460 { 00:27:57.460 "dma_device_id": "system", 00:27:57.460 "dma_device_type": 1 00:27:57.460 }, 00:27:57.460 { 00:27:57.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:57.460 "dma_device_type": 2 00:27:57.460 } 00:27:57.460 ], 00:27:57.460 "driver_specific": { 00:27:57.460 "raid": { 00:27:57.460 "uuid": "7309bd26-75b8-41ec-a2d9-6c3e5c012312", 00:27:57.460 "strip_size_kb": 0, 00:27:57.460 "state": "online", 00:27:57.460 "raid_level": "raid1", 00:27:57.460 "superblock": true, 00:27:57.460 "num_base_bdevs": 4, 00:27:57.460 "num_base_bdevs_discovered": 4, 00:27:57.460 "num_base_bdevs_operational": 4, 00:27:57.460 "base_bdevs_list": [ 00:27:57.460 { 00:27:57.460 "name": "pt1", 00:27:57.460 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:57.460 "is_configured": true, 00:27:57.460 "data_offset": 2048, 00:27:57.460 "data_size": 63488 00:27:57.460 }, 00:27:57.460 { 00:27:57.460 "name": "pt2", 00:27:57.460 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:57.460 "is_configured": true, 00:27:57.460 "data_offset": 2048, 00:27:57.460 "data_size": 63488 00:27:57.460 }, 00:27:57.460 { 00:27:57.460 "name": "pt3", 00:27:57.460 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:57.460 "is_configured": true, 00:27:57.460 "data_offset": 2048, 00:27:57.460 "data_size": 63488 00:27:57.460 }, 00:27:57.460 { 00:27:57.460 "name": "pt4", 00:27:57.460 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:57.460 "is_configured": true, 00:27:57.460 "data_offset": 2048, 00:27:57.460 "data_size": 63488 00:27:57.460 } 00:27:57.460 ] 00:27:57.460 } 00:27:57.460 } 00:27:57.460 }' 00:27:57.460 00:54:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:57.460 00:54:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:27:57.460 pt2 00:27:57.460 pt3 00:27:57.460 pt4' 00:27:57.460 00:54:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:57.460 00:54:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:57.460 00:54:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:57.719 00:54:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:57.719 "name": "pt1", 00:27:57.719 "aliases": [ 00:27:57.719 "00000000-0000-0000-0000-000000000001" 00:27:57.719 ], 00:27:57.719 "product_name": "passthru", 00:27:57.719 "block_size": 512, 00:27:57.719 "num_blocks": 65536, 00:27:57.719 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:57.719 "assigned_rate_limits": { 
00:27:57.719 "rw_ios_per_sec": 0, 00:27:57.719 "rw_mbytes_per_sec": 0, 00:27:57.719 "r_mbytes_per_sec": 0, 00:27:57.719 "w_mbytes_per_sec": 0 00:27:57.719 }, 00:27:57.719 "claimed": true, 00:27:57.719 "claim_type": "exclusive_write", 00:27:57.719 "zoned": false, 00:27:57.719 "supported_io_types": { 00:27:57.719 "read": true, 00:27:57.719 "write": true, 00:27:57.719 "unmap": true, 00:27:57.719 "flush": true, 00:27:57.719 "reset": true, 00:27:57.719 "nvme_admin": false, 00:27:57.719 "nvme_io": false, 00:27:57.719 "nvme_io_md": false, 00:27:57.719 "write_zeroes": true, 00:27:57.719 "zcopy": true, 00:27:57.719 "get_zone_info": false, 00:27:57.719 "zone_management": false, 00:27:57.719 "zone_append": false, 00:27:57.719 "compare": false, 00:27:57.719 "compare_and_write": false, 00:27:57.719 "abort": true, 00:27:57.719 "seek_hole": false, 00:27:57.719 "seek_data": false, 00:27:57.719 "copy": true, 00:27:57.719 "nvme_iov_md": false 00:27:57.719 }, 00:27:57.719 "memory_domains": [ 00:27:57.719 { 00:27:57.719 "dma_device_id": "system", 00:27:57.719 "dma_device_type": 1 00:27:57.719 }, 00:27:57.719 { 00:27:57.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:57.719 "dma_device_type": 2 00:27:57.719 } 00:27:57.719 ], 00:27:57.719 "driver_specific": { 00:27:57.719 "passthru": { 00:27:57.719 "name": "pt1", 00:27:57.719 "base_bdev_name": "malloc1" 00:27:57.719 } 00:27:57.719 } 00:27:57.719 }' 00:27:57.719 00:54:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:57.719 00:54:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:57.719 00:54:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:57.719 00:54:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:57.719 00:54:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:57.978 00:54:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:57.978 00:54:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:57.978 00:54:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:57.978 00:54:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:57.978 00:54:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:57.978 00:54:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:57.978 00:54:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:57.978 00:54:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:57.978 00:54:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:57.978 00:54:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:58.236 00:54:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:58.236 "name": "pt2", 00:27:58.236 "aliases": [ 00:27:58.236 "00000000-0000-0000-0000-000000000002" 00:27:58.236 ], 00:27:58.236 "product_name": "passthru", 00:27:58.236 "block_size": 512, 00:27:58.236 "num_blocks": 65536, 00:27:58.236 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:58.236 "assigned_rate_limits": { 00:27:58.236 "rw_ios_per_sec": 0, 00:27:58.236 "rw_mbytes_per_sec": 0, 00:27:58.236 "r_mbytes_per_sec": 0, 00:27:58.236 "w_mbytes_per_sec": 0 00:27:58.236 
}, 00:27:58.236 "claimed": true, 00:27:58.236 "claim_type": "exclusive_write", 00:27:58.236 "zoned": false, 00:27:58.236 "supported_io_types": { 00:27:58.236 "read": true, 00:27:58.236 "write": true, 00:27:58.236 "unmap": true, 00:27:58.236 "flush": true, 00:27:58.236 "reset": true, 00:27:58.236 "nvme_admin": false, 00:27:58.236 "nvme_io": false, 00:27:58.236 "nvme_io_md": false, 00:27:58.236 "write_zeroes": true, 00:27:58.236 "zcopy": true, 00:27:58.236 "get_zone_info": false, 00:27:58.236 "zone_management": false, 00:27:58.236 "zone_append": false, 00:27:58.236 "compare": false, 00:27:58.236 "compare_and_write": false, 00:27:58.236 "abort": true, 00:27:58.236 "seek_hole": false, 00:27:58.236 "seek_data": false, 00:27:58.236 "copy": true, 00:27:58.236 "nvme_iov_md": false 00:27:58.236 }, 00:27:58.236 "memory_domains": [ 00:27:58.236 { 00:27:58.236 "dma_device_id": "system", 00:27:58.236 "dma_device_type": 1 00:27:58.236 }, 00:27:58.236 { 00:27:58.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:58.236 "dma_device_type": 2 00:27:58.236 } 00:27:58.236 ], 00:27:58.236 "driver_specific": { 00:27:58.236 "passthru": { 00:27:58.236 "name": "pt2", 00:27:58.236 "base_bdev_name": "malloc2" 00:27:58.236 } 00:27:58.236 } 00:27:58.236 }' 00:27:58.236 00:54:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:58.495 00:54:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:58.495 00:54:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:58.495 00:54:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:58.495 00:54:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:58.495 00:54:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:58.495 00:54:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:58.495 00:54:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:58.495 00:54:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:58.495 00:54:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:58.754 00:54:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:58.754 00:54:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:58.754 00:54:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:58.754 00:54:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:27:58.754 00:54:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:59.013 00:54:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:59.013 "name": "pt3", 00:27:59.013 "aliases": [ 00:27:59.013 "00000000-0000-0000-0000-000000000003" 00:27:59.013 ], 00:27:59.013 "product_name": "passthru", 00:27:59.013 "block_size": 512, 00:27:59.013 "num_blocks": 65536, 00:27:59.013 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:59.013 "assigned_rate_limits": { 00:27:59.013 "rw_ios_per_sec": 0, 00:27:59.013 "rw_mbytes_per_sec": 0, 00:27:59.013 "r_mbytes_per_sec": 0, 00:27:59.013 "w_mbytes_per_sec": 0 00:27:59.013 }, 00:27:59.013 "claimed": true, 00:27:59.013 "claim_type": "exclusive_write", 00:27:59.013 "zoned": false, 00:27:59.013 "supported_io_types": { 
00:27:59.013 "read": true, 00:27:59.013 "write": true, 00:27:59.013 "unmap": true, 00:27:59.013 "flush": true, 00:27:59.013 "reset": true, 00:27:59.013 "nvme_admin": false, 00:27:59.013 "nvme_io": false, 00:27:59.013 "nvme_io_md": false, 00:27:59.013 "write_zeroes": true, 00:27:59.013 "zcopy": true, 00:27:59.013 "get_zone_info": false, 00:27:59.013 "zone_management": false, 00:27:59.013 "zone_append": false, 00:27:59.013 "compare": false, 00:27:59.013 "compare_and_write": false, 00:27:59.013 "abort": true, 00:27:59.013 "seek_hole": false, 00:27:59.013 "seek_data": false, 00:27:59.013 "copy": true, 00:27:59.013 "nvme_iov_md": false 00:27:59.013 }, 00:27:59.013 "memory_domains": [ 00:27:59.013 { 00:27:59.013 "dma_device_id": "system", 00:27:59.013 "dma_device_type": 1 00:27:59.013 }, 00:27:59.013 { 00:27:59.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:59.013 "dma_device_type": 2 00:27:59.013 } 00:27:59.013 ], 00:27:59.013 "driver_specific": { 00:27:59.013 "passthru": { 00:27:59.013 "name": "pt3", 00:27:59.013 "base_bdev_name": "malloc3" 00:27:59.013 } 00:27:59.013 } 00:27:59.013 }' 00:27:59.013 00:54:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:59.013 00:54:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:59.013 00:54:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:59.013 00:54:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:59.013 00:54:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:59.013 00:54:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:59.013 00:54:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:59.272 00:54:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:59.272 00:54:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:59.272 00:54:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:59.272 00:54:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:59.272 00:54:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:59.272 00:54:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:59.272 00:54:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:27:59.272 00:54:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:59.532 00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:59.532 "name": "pt4", 00:27:59.532 "aliases": [ 00:27:59.532 "00000000-0000-0000-0000-000000000004" 00:27:59.532 ], 00:27:59.532 "product_name": "passthru", 00:27:59.532 "block_size": 512, 00:27:59.532 "num_blocks": 65536, 00:27:59.532 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:59.532 "assigned_rate_limits": { 00:27:59.532 "rw_ios_per_sec": 0, 00:27:59.532 "rw_mbytes_per_sec": 0, 00:27:59.532 "r_mbytes_per_sec": 0, 00:27:59.532 "w_mbytes_per_sec": 0 00:27:59.532 }, 00:27:59.532 "claimed": true, 00:27:59.532 "claim_type": "exclusive_write", 00:27:59.532 "zoned": false, 00:27:59.532 "supported_io_types": { 00:27:59.532 "read": true, 00:27:59.532 "write": true, 00:27:59.532 "unmap": true, 00:27:59.532 "flush": true, 00:27:59.532 "reset": true, 00:27:59.532 
"nvme_admin": false, 00:27:59.532 "nvme_io": false, 00:27:59.532 "nvme_io_md": false, 00:27:59.532 "write_zeroes": true, 00:27:59.532 "zcopy": true, 00:27:59.532 "get_zone_info": false, 00:27:59.532 "zone_management": false, 00:27:59.532 "zone_append": false, 00:27:59.532 "compare": false, 00:27:59.532 "compare_and_write": false, 00:27:59.532 "abort": true, 00:27:59.532 "seek_hole": false, 00:27:59.532 "seek_data": false, 00:27:59.532 "copy": true, 00:27:59.532 "nvme_iov_md": false 00:27:59.532 }, 00:27:59.532 "memory_domains": [ 00:27:59.532 { 00:27:59.532 "dma_device_id": "system", 00:27:59.532 "dma_device_type": 1 00:27:59.532 }, 00:27:59.532 { 00:27:59.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:59.532 "dma_device_type": 2 00:27:59.532 } 00:27:59.532 ], 00:27:59.532 "driver_specific": { 00:27:59.532 "passthru": { 00:27:59.532 "name": "pt4", 00:27:59.532 "base_bdev_name": "malloc4" 00:27:59.532 } 00:27:59.532 } 00:27:59.532 }' 00:27:59.532 00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:59.532 00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:59.791 00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:59.791 00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:59.791 00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:59.791 00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:59.791 00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:59.791 00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:59.791 00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:59.791 00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:59.791 00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:00.050 00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:00.050 00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:00.050 00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:28:00.310 [2024-07-25 00:54:22.721782] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:00.310 00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 7309bd26-75b8-41ec-a2d9-6c3e5c012312 '!=' 7309bd26-75b8-41ec-a2d9-6c3e5c012312 ']' 00:28:00.310 00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:28:00.310 00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:28:00.310 00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:28:00.310 00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:28:00.310 [2024-07-25 00:54:22.905625] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:28:00.310 00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:00.310 00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:00.310 
00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:00.310 00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:00.310 00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:00.310 00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:00.310 00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:00.310 00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:00.310 00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:00.310 00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:00.310 00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:00.310 00:54:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:00.570 00:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:00.570 "name": "raid_bdev1", 00:28:00.570 "uuid": "7309bd26-75b8-41ec-a2d9-6c3e5c012312", 00:28:00.570 "strip_size_kb": 0, 00:28:00.570 "state": "online", 00:28:00.570 "raid_level": "raid1", 00:28:00.570 "superblock": true, 00:28:00.570 "num_base_bdevs": 4, 00:28:00.570 "num_base_bdevs_discovered": 3, 00:28:00.570 "num_base_bdevs_operational": 3, 00:28:00.570 "base_bdevs_list": [ 00:28:00.570 { 00:28:00.570 "name": null, 00:28:00.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:00.570 "is_configured": false, 00:28:00.570 "data_offset": 2048, 00:28:00.570 "data_size": 63488 00:28:00.570 }, 00:28:00.570 { 00:28:00.570 "name": "pt2", 00:28:00.570 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:00.570 "is_configured": true, 00:28:00.570 "data_offset": 2048, 00:28:00.570 "data_size": 63488 00:28:00.570 }, 00:28:00.570 { 00:28:00.570 "name": "pt3", 00:28:00.570 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:00.570 "is_configured": true, 00:28:00.570 "data_offset": 2048, 00:28:00.570 "data_size": 63488 00:28:00.570 }, 00:28:00.570 { 00:28:00.570 "name": "pt4", 00:28:00.570 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:00.570 "is_configured": true, 00:28:00.570 "data_offset": 2048, 00:28:00.570 "data_size": 63488 00:28:00.570 } 00:28:00.570 ] 00:28:00.570 }' 00:28:00.570 00:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:00.570 00:54:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:01.138 00:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:01.398 [2024-07-25 00:54:23.845749] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:01.398 [2024-07-25 00:54:23.845924] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:01.398 [2024-07-25 00:54:23.846124] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:01.398 [2024-07-25 00:54:23.846305] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:01.398 [2024-07-25 00:54:23.846398] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:28:01.398 00:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:28:01.398 00:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:01.657 00:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:28:01.657 00:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:28:01.657 00:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:28:01.657 00:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:28:01.657 00:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:01.917 00:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:28:01.917 00:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:28:01.917 00:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:28:02.178 00:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:28:02.178 00:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:28:02.178 00:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:28:02.178 00:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:28:02.178 00:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:28:02.178 00:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:28:02.178 00:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:28:02.178 00:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:02.436 [2024-07-25 00:54:24.957908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:02.436 [2024-07-25 00:54:24.958159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:02.436 [2024-07-25 00:54:24.958225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:28:02.436 [2024-07-25 00:54:24.958378] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:02.436 [2024-07-25 00:54:24.960661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:02.436 [2024-07-25 00:54:24.960830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:02.436 [2024-07-25 00:54:24.961065] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:02.436 [2024-07-25 00:54:24.961224] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:02.436 pt2 00:28:02.436 00:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:28:02.436 00:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:28:02.436 00:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:02.436 00:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:02.436 00:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:02.436 00:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:02.436 00:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:02.436 00:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:02.436 00:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:02.436 00:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:02.436 00:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:02.436 00:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:02.696 00:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:02.696 "name": "raid_bdev1", 00:28:02.696 "uuid": "7309bd26-75b8-41ec-a2d9-6c3e5c012312", 00:28:02.696 "strip_size_kb": 0, 00:28:02.696 "state": "configuring", 00:28:02.696 "raid_level": "raid1", 00:28:02.696 "superblock": true, 00:28:02.696 "num_base_bdevs": 4, 00:28:02.696 "num_base_bdevs_discovered": 1, 00:28:02.696 "num_base_bdevs_operational": 3, 00:28:02.696 "base_bdevs_list": [ 00:28:02.696 { 00:28:02.696 "name": null, 00:28:02.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.696 "is_configured": false, 00:28:02.696 "data_offset": 2048, 00:28:02.696 "data_size": 63488 00:28:02.696 }, 00:28:02.696 { 00:28:02.696 "name": "pt2", 00:28:02.696 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:02.696 "is_configured": true, 00:28:02.696 "data_offset": 2048, 00:28:02.696 "data_size": 63488 00:28:02.696 }, 00:28:02.696 { 00:28:02.696 "name": null, 00:28:02.696 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:02.696 "is_configured": false, 00:28:02.696 "data_offset": 2048, 00:28:02.696 "data_size": 63488 00:28:02.696 }, 00:28:02.696 { 00:28:02.696 "name": null, 00:28:02.696 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:02.696 "is_configured": false, 00:28:02.697 "data_offset": 2048, 00:28:02.697 "data_size": 63488 00:28:02.697 } 00:28:02.697 ] 00:28:02.697 }' 00:28:02.697 00:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:02.697 00:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:03.266 00:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:28:03.266 00:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:28:03.267 00:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:03.267 [2024-07-25 00:54:25.890056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:03.267 [2024-07-25 00:54:25.890309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:03.267 [2024-07-25 00:54:25.890389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x61600000c080 00:28:03.267 [2024-07-25 00:54:25.890666] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:03.267 [2024-07-25 00:54:25.891160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:03.267 [2024-07-25 00:54:25.891316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:03.267 [2024-07-25 00:54:25.891536] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:28:03.267 [2024-07-25 00:54:25.891642] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:03.267 pt3 00:28:03.267 00:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:28:03.267 00:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:03.267 00:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:03.267 00:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:03.267 00:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:03.267 00:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:03.267 00:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:03.267 00:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:03.267 00:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:03.267 00:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:03.267 00:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:03.267 00:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:03.526 00:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:03.526 "name": "raid_bdev1", 00:28:03.526 "uuid": "7309bd26-75b8-41ec-a2d9-6c3e5c012312", 00:28:03.526 "strip_size_kb": 0, 00:28:03.526 "state": "configuring", 00:28:03.526 "raid_level": "raid1", 00:28:03.526 "superblock": true, 00:28:03.526 "num_base_bdevs": 4, 00:28:03.526 "num_base_bdevs_discovered": 2, 00:28:03.526 "num_base_bdevs_operational": 3, 00:28:03.526 "base_bdevs_list": [ 00:28:03.526 { 00:28:03.526 "name": null, 00:28:03.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.526 "is_configured": false, 00:28:03.527 "data_offset": 2048, 00:28:03.527 "data_size": 63488 00:28:03.527 }, 00:28:03.527 { 00:28:03.527 "name": "pt2", 00:28:03.527 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:03.527 "is_configured": true, 00:28:03.527 "data_offset": 2048, 00:28:03.527 "data_size": 63488 00:28:03.527 }, 00:28:03.527 { 00:28:03.527 "name": "pt3", 00:28:03.527 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:03.527 "is_configured": true, 00:28:03.527 "data_offset": 2048, 00:28:03.527 "data_size": 63488 00:28:03.527 }, 00:28:03.527 { 00:28:03.527 "name": null, 00:28:03.527 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:03.527 "is_configured": false, 00:28:03.527 "data_offset": 2048, 00:28:03.527 "data_size": 63488 00:28:03.527 } 00:28:03.527 ] 00:28:03.527 }' 00:28:03.527 00:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 
-- # xtrace_disable 00:28:03.527 00:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.464 00:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:28:04.464 00:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:28:04.464 00:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:28:04.464 00:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:04.464 [2024-07-25 00:54:27.018298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:04.464 [2024-07-25 00:54:27.018536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:04.464 [2024-07-25 00:54:27.018624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:28:04.464 [2024-07-25 00:54:27.018730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:04.464 [2024-07-25 00:54:27.019221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:04.464 [2024-07-25 00:54:27.019370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:04.464 [2024-07-25 00:54:27.019599] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:28:04.464 [2024-07-25 00:54:27.019708] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:04.464 [2024-07-25 00:54:27.019972] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:28:04.464 [2024-07-25 00:54:27.020085] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:04.464 [2024-07-25 00:54:27.020223] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:28:04.464 [2024-07-25 00:54:27.020701] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:28:04.464 [2024-07-25 00:54:27.020827] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:28:04.464 [2024-07-25 00:54:27.021064] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:04.464 pt4 00:28:04.464 00:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:04.464 00:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:04.464 00:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:04.464 00:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:04.464 00:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:04.464 00:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:04.464 00:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:04.464 00:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:04.464 00:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:04.464 00:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:04.465 00:54:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:04.465 00:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:04.725 00:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:04.725 "name": "raid_bdev1", 00:28:04.725 "uuid": "7309bd26-75b8-41ec-a2d9-6c3e5c012312", 00:28:04.725 "strip_size_kb": 0, 00:28:04.725 "state": "online", 00:28:04.725 "raid_level": "raid1", 00:28:04.725 "superblock": true, 00:28:04.725 "num_base_bdevs": 4, 00:28:04.725 "num_base_bdevs_discovered": 3, 00:28:04.725 "num_base_bdevs_operational": 3, 00:28:04.725 "base_bdevs_list": [ 00:28:04.725 { 00:28:04.725 "name": null, 00:28:04.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:04.725 "is_configured": false, 00:28:04.725 "data_offset": 2048, 00:28:04.725 "data_size": 63488 00:28:04.725 }, 00:28:04.725 { 00:28:04.725 "name": "pt2", 00:28:04.725 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:04.725 "is_configured": true, 00:28:04.725 "data_offset": 2048, 00:28:04.725 "data_size": 63488 00:28:04.725 }, 00:28:04.725 { 00:28:04.725 "name": "pt3", 00:28:04.725 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:04.725 "is_configured": true, 00:28:04.725 "data_offset": 2048, 00:28:04.725 "data_size": 63488 00:28:04.725 }, 00:28:04.725 { 00:28:04.725 "name": "pt4", 00:28:04.725 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:04.725 "is_configured": true, 00:28:04.725 "data_offset": 2048, 00:28:04.725 "data_size": 63488 00:28:04.725 } 00:28:04.725 ] 00:28:04.725 }' 00:28:04.725 00:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:04.725 00:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:05.294 00:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:05.553 [2024-07-25 00:54:28.114773] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:05.553 [2024-07-25 00:54:28.114931] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:05.553 [2024-07-25 00:54:28.115153] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:05.553 [2024-07-25 00:54:28.115314] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:05.553 [2024-07-25 00:54:28.115396] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:28:05.553 00:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:05.553 00:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:28:05.811 00:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:28:05.811 00:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:28:05.811 00:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:28:05.811 00:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:28:05.811 00:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt4 00:28:06.069 00:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:06.327 [2024-07-25 00:54:28.734844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:06.327 [2024-07-25 00:54:28.735132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:06.327 [2024-07-25 00:54:28.735203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:28:06.327 [2024-07-25 00:54:28.735329] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:06.327 [2024-07-25 00:54:28.737708] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:06.327 [2024-07-25 00:54:28.737878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:06.327 [2024-07-25 00:54:28.738064] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:28:06.327 [2024-07-25 00:54:28.738186] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:06.327 [2024-07-25 00:54:28.738426] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:28:06.327 [2024-07-25 00:54:28.738532] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:06.327 [2024-07-25 00:54:28.738586] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cc80 name raid_bdev1, state configuring 00:28:06.327 [2024-07-25 00:54:28.738716] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:06.327 [2024-07-25 00:54:28.738916] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:06.327 pt1 00:28:06.327 00:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:28:06.327 00:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:28:06.327 00:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:06.327 00:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:06.327 00:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:06.327 00:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:06.327 00:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:06.327 00:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:06.327 00:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:06.327 00:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:06.327 00:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:06.327 00:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:06.327 00:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:06.327 00:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:28:06.327 "name": "raid_bdev1", 00:28:06.327 "uuid": "7309bd26-75b8-41ec-a2d9-6c3e5c012312", 00:28:06.327 "strip_size_kb": 0, 00:28:06.327 "state": "configuring", 00:28:06.327 "raid_level": "raid1", 00:28:06.327 "superblock": true, 00:28:06.327 "num_base_bdevs": 4, 00:28:06.327 "num_base_bdevs_discovered": 2, 00:28:06.327 "num_base_bdevs_operational": 3, 00:28:06.327 "base_bdevs_list": [ 00:28:06.327 { 00:28:06.327 "name": null, 00:28:06.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:06.327 "is_configured": false, 00:28:06.327 "data_offset": 2048, 00:28:06.327 "data_size": 63488 00:28:06.327 }, 00:28:06.327 { 00:28:06.327 "name": "pt2", 00:28:06.327 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:06.327 "is_configured": true, 00:28:06.327 "data_offset": 2048, 00:28:06.327 "data_size": 63488 00:28:06.327 }, 00:28:06.327 { 00:28:06.327 "name": "pt3", 00:28:06.327 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:06.327 "is_configured": true, 00:28:06.327 "data_offset": 2048, 00:28:06.327 "data_size": 63488 00:28:06.327 }, 00:28:06.327 { 00:28:06.327 "name": null, 00:28:06.327 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:06.327 "is_configured": false, 00:28:06.327 "data_offset": 2048, 00:28:06.327 "data_size": 63488 00:28:06.327 } 00:28:06.327 ] 00:28:06.327 }' 00:28:06.327 00:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:06.327 00:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.894 00:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:28:06.894 00:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:28:07.152 00:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:28:07.152 00:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:07.411 [2024-07-25 00:54:30.039245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:07.411 [2024-07-25 00:54:30.039464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:07.411 [2024-07-25 00:54:30.039526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d280 00:28:07.411 [2024-07-25 00:54:30.039650] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:07.411 [2024-07-25 00:54:30.040139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:07.411 [2024-07-25 00:54:30.040280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:07.411 [2024-07-25 00:54:30.040478] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:28:07.411 [2024-07-25 00:54:30.040601] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:07.411 [2024-07-25 00:54:30.040767] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000cf80 00:28:07.411 [2024-07-25 00:54:30.040853] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:07.411 [2024-07-25 00:54:30.040985] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:28:07.411 [2024-07-25 00:54:30.041587] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000cf80 00:28:07.411 [2024-07-25 00:54:30.041696] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000cf80 00:28:07.411 [2024-07-25 00:54:30.041910] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:07.411 pt4 00:28:07.411 00:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:07.411 00:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:07.411 00:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:07.411 00:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:07.411 00:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:07.411 00:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:07.411 00:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:07.411 00:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:07.411 00:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:07.411 00:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:07.411 00:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:07.411 00:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:07.670 00:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:07.670 "name": "raid_bdev1", 00:28:07.670 "uuid": "7309bd26-75b8-41ec-a2d9-6c3e5c012312", 00:28:07.670 "strip_size_kb": 0, 00:28:07.670 "state": "online", 00:28:07.670 "raid_level": "raid1", 00:28:07.670 "superblock": true, 00:28:07.670 "num_base_bdevs": 4, 00:28:07.670 "num_base_bdevs_discovered": 3, 00:28:07.670 "num_base_bdevs_operational": 3, 00:28:07.670 "base_bdevs_list": [ 00:28:07.670 { 00:28:07.670 "name": null, 00:28:07.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:07.670 "is_configured": false, 00:28:07.670 "data_offset": 2048, 00:28:07.670 "data_size": 63488 00:28:07.670 }, 00:28:07.670 { 00:28:07.670 "name": "pt2", 00:28:07.670 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:07.670 "is_configured": true, 00:28:07.670 "data_offset": 2048, 00:28:07.670 "data_size": 63488 00:28:07.670 }, 00:28:07.670 { 00:28:07.670 "name": "pt3", 00:28:07.670 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:07.670 "is_configured": true, 00:28:07.670 "data_offset": 2048, 00:28:07.670 "data_size": 63488 00:28:07.670 }, 00:28:07.670 { 00:28:07.670 "name": "pt4", 00:28:07.670 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:07.670 "is_configured": true, 00:28:07.670 "data_offset": 2048, 00:28:07.670 "data_size": 63488 00:28:07.670 } 00:28:07.670 ] 00:28:07.670 }' 00:28:07.670 00:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:07.670 00:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:08.238 00:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs online 00:28:08.238 00:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:28:08.497 00:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:28:08.497 00:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:08.497 00:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:28:08.757 [2024-07-25 00:54:31.367658] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:08.757 00:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 7309bd26-75b8-41ec-a2d9-6c3e5c012312 '!=' 7309bd26-75b8-41ec-a2d9-6c3e5c012312 ']' 00:28:08.757 00:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 143348 00:28:08.757 00:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 143348 ']' 00:28:08.757 00:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 143348 00:28:08.757 00:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:28:08.757 00:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:08.757 00:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 143348 00:28:09.015 00:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:09.015 00:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:09.015 00:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 143348' 00:28:09.015 killing process with pid 143348 00:28:09.015 00:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 143348 00:28:09.015 [2024-07-25 00:54:31.415616] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:09.016 00:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 143348 00:28:09.016 [2024-07-25 00:54:31.415856] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:09.016 [2024-07-25 00:54:31.416096] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:09.016 [2024-07-25 00:54:31.416134] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cf80 name raid_bdev1, state offline 00:28:09.275 [2024-07-25 00:54:31.814896] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:10.673 ************************************ 00:28:10.673 END TEST raid_superblock_test 00:28:10.673 ************************************ 00:28:10.673 00:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:28:10.673 00:28:10.673 real 0m25.562s 00:28:10.673 user 0m45.859s 00:28:10.673 sys 0m3.811s 00:28:10.673 00:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:10.673 00:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.673 00:54:33 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:28:10.673 00:54:33 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:28:10.673 00:54:33 bdev_raid -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:28:10.673 00:54:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:10.673 ************************************ 00:28:10.673 START TEST raid_read_error_test 00:28:10.673 ************************************ 00:28:10.673 00:54:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 read 00:28:10.673 00:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:28:10.673 00:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:28:10.673 00:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:28:10.673 00:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:28:10.673 00:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:28:10.673 00:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:28:10.673 00:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:28:10.673 00:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:28:10.673 00:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:28:10.673 00:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:28:10.673 00:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:28:10.673 00:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:28:10.673 00:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:28:10.673 00:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:28:10.673 00:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:28:10.673 00:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:28:10.673 00:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:28:10.673 00:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:10.673 00:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:28:10.674 00:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:28:10.674 00:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:28:10.674 00:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:28:10.674 00:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:28:10.674 00:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:28:10.674 00:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:28:10.674 00:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:28:10.674 00:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:28:10.674 00:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.2Y3WiCiPKB 00:28:10.674 00:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=144198 00:28:10.674 00:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 144198 /var/tmp/spdk-raid.sock 00:28:10.674 00:54:33 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:28:10.674 00:54:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 144198 ']' 00:28:10.674 00:54:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:10.674 00:54:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:10.674 00:54:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:10.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:10.674 00:54:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:10.674 00:54:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.941 [2024-07-25 00:54:33.374679] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:28:10.941 [2024-07-25 00:54:33.375137] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144198 ] 00:28:10.941 [2024-07-25 00:54:33.554688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.199 [2024-07-25 00:54:33.729964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.458 [2024-07-25 00:54:33.917776] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:11.717 00:54:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:11.717 00:54:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:28:11.717 00:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:28:11.718 00:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:11.977 BaseBdev1_malloc 00:28:11.977 00:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:28:12.235 true 00:28:12.235 00:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:28:12.494 [2024-07-25 00:54:35.067968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:28:12.494 [2024-07-25 00:54:35.068231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:12.494 [2024-07-25 00:54:35.068311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:28:12.494 [2024-07-25 00:54:35.068405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:12.494 [2024-07-25 00:54:35.070763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:12.494 [2024-07-25 00:54:35.070919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:12.494 BaseBdev1 00:28:12.494 00:54:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:28:12.494 00:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:12.753 BaseBdev2_malloc 00:28:12.753 00:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:28:13.010 true 00:28:13.010 00:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:28:13.269 [2024-07-25 00:54:35.851948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:28:13.269 [2024-07-25 00:54:35.852218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:13.269 [2024-07-25 00:54:35.852293] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:13.269 [2024-07-25 00:54:35.852511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:13.269 [2024-07-25 00:54:35.854833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:13.269 [2024-07-25 00:54:35.855003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:13.269 BaseBdev2 00:28:13.269 00:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:28:13.269 00:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:13.527 BaseBdev3_malloc 00:28:13.527 00:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:28:13.786 true 00:28:13.786 00:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:28:14.045 [2024-07-25 00:54:36.578289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:28:14.045 [2024-07-25 00:54:36.578582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:14.045 [2024-07-25 00:54:36.578653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:28:14.045 [2024-07-25 00:54:36.578749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:14.045 [2024-07-25 00:54:36.581049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:14.045 [2024-07-25 00:54:36.581246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:14.045 BaseBdev3 00:28:14.045 00:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:28:14.045 00:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:28:14.304 BaseBdev4_malloc 00:28:14.304 00:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create 
BaseBdev4_malloc 00:28:14.563 true 00:28:14.563 00:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:28:14.563 [2024-07-25 00:54:37.199651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:28:14.563 [2024-07-25 00:54:37.199895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:14.563 [2024-07-25 00:54:37.199989] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:28:14.563 [2024-07-25 00:54:37.200204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:14.563 [2024-07-25 00:54:37.202540] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:14.563 [2024-07-25 00:54:37.202712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:28:14.563 BaseBdev4 00:28:14.563 00:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:28:14.823 [2024-07-25 00:54:37.375708] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:14.823 [2024-07-25 00:54:37.377791] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:14.823 [2024-07-25 00:54:37.378006] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:14.823 [2024-07-25 00:54:37.378092] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:14.823 [2024-07-25 00:54:37.378525] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a280 00:28:14.823 [2024-07-25 00:54:37.378570] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:14.823 [2024-07-25 00:54:37.378801] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:28:14.823 [2024-07-25 00:54:37.379311] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a280 00:28:14.823 [2024-07-25 00:54:37.379420] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a280 00:28:14.823 [2024-07-25 00:54:37.379711] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:14.823 00:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:14.823 00:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:14.823 00:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:14.823 00:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:14.823 00:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:14.823 00:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:14.823 00:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:14.823 00:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:14.823 00:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:14.823 
00:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:14.823 00:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:14.823 00:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:15.082 00:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:15.082 "name": "raid_bdev1", 00:28:15.082 "uuid": "3f288eab-8b86-4dff-90d6-ae94af5ab0db", 00:28:15.082 "strip_size_kb": 0, 00:28:15.082 "state": "online", 00:28:15.082 "raid_level": "raid1", 00:28:15.082 "superblock": true, 00:28:15.082 "num_base_bdevs": 4, 00:28:15.082 "num_base_bdevs_discovered": 4, 00:28:15.082 "num_base_bdevs_operational": 4, 00:28:15.082 "base_bdevs_list": [ 00:28:15.082 { 00:28:15.082 "name": "BaseBdev1", 00:28:15.082 "uuid": "a0520d6e-68a9-5a3a-968f-7ea70cb6f3b8", 00:28:15.082 "is_configured": true, 00:28:15.082 "data_offset": 2048, 00:28:15.082 "data_size": 63488 00:28:15.082 }, 00:28:15.082 { 00:28:15.082 "name": "BaseBdev2", 00:28:15.082 "uuid": "05c7fdf1-bd71-55e9-aa47-64621a7f97e1", 00:28:15.082 "is_configured": true, 00:28:15.082 "data_offset": 2048, 00:28:15.082 "data_size": 63488 00:28:15.082 }, 00:28:15.082 { 00:28:15.082 "name": "BaseBdev3", 00:28:15.082 "uuid": "bd19e5ea-2782-5315-93e8-156f7056b5eb", 00:28:15.082 "is_configured": true, 00:28:15.082 "data_offset": 2048, 00:28:15.082 "data_size": 63488 00:28:15.082 }, 00:28:15.082 { 00:28:15.082 "name": "BaseBdev4", 00:28:15.082 "uuid": "171ba57d-9082-52ce-893e-523a1cce0ac1", 00:28:15.082 "is_configured": true, 00:28:15.082 "data_offset": 2048, 00:28:15.082 "data_size": 63488 00:28:15.082 } 00:28:15.082 ] 00:28:15.082 }' 00:28:15.082 00:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:15.082 00:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:15.652 00:54:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:28:15.652 00:54:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:28:15.652 [2024-07-25 00:54:38.185250] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:28:16.591 00:54:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:28:16.850 00:54:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:28:16.850 00:54:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:28:16.850 00:54:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:28:16.850 00:54:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:28:16.850 00:54:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:16.850 00:54:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:16.850 00:54:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:16.850 00:54:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:16.850 00:54:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:16.850 00:54:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:16.850 00:54:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:16.850 00:54:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:16.850 00:54:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:16.850 00:54:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:16.850 00:54:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:16.850 00:54:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:17.110 00:54:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:17.110 "name": "raid_bdev1", 00:28:17.110 "uuid": "3f288eab-8b86-4dff-90d6-ae94af5ab0db", 00:28:17.110 "strip_size_kb": 0, 00:28:17.110 "state": "online", 00:28:17.110 "raid_level": "raid1", 00:28:17.110 "superblock": true, 00:28:17.110 "num_base_bdevs": 4, 00:28:17.110 "num_base_bdevs_discovered": 4, 00:28:17.110 "num_base_bdevs_operational": 4, 00:28:17.110 "base_bdevs_list": [ 00:28:17.110 { 00:28:17.110 "name": "BaseBdev1", 00:28:17.110 "uuid": "a0520d6e-68a9-5a3a-968f-7ea70cb6f3b8", 00:28:17.110 "is_configured": true, 00:28:17.110 "data_offset": 2048, 00:28:17.110 "data_size": 63488 00:28:17.110 }, 00:28:17.110 { 00:28:17.110 "name": "BaseBdev2", 00:28:17.110 "uuid": "05c7fdf1-bd71-55e9-aa47-64621a7f97e1", 00:28:17.110 "is_configured": true, 00:28:17.110 "data_offset": 2048, 00:28:17.110 "data_size": 63488 00:28:17.110 }, 00:28:17.110 { 00:28:17.110 "name": "BaseBdev3", 00:28:17.110 "uuid": "bd19e5ea-2782-5315-93e8-156f7056b5eb", 00:28:17.110 "is_configured": true, 00:28:17.110 "data_offset": 2048, 00:28:17.110 "data_size": 63488 00:28:17.110 }, 00:28:17.110 { 00:28:17.110 "name": "BaseBdev4", 00:28:17.110 "uuid": "171ba57d-9082-52ce-893e-523a1cce0ac1", 00:28:17.110 "is_configured": true, 00:28:17.110 "data_offset": 2048, 00:28:17.110 "data_size": 63488 00:28:17.110 } 00:28:17.110 ] 00:28:17.110 }' 00:28:17.110 00:54:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:17.110 00:54:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:17.679 00:54:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:17.679 [2024-07-25 00:54:40.302371] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:17.679 [2024-07-25 00:54:40.302612] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:17.679 [2024-07-25 00:54:40.305255] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:17.679 [2024-07-25 00:54:40.305428] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:17.679 [2024-07-25 00:54:40.305566] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:17.679 [2024-07-25 00:54:40.305761] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state offline 00:28:17.679 0 00:28:17.679 00:54:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 144198 00:28:17.679 00:54:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 144198 ']' 00:28:17.679 00:54:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 144198 00:28:17.679 00:54:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:28:17.679 00:54:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:17.679 00:54:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 144198 00:28:17.939 00:54:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:17.939 00:54:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:17.939 00:54:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 144198' 00:28:17.939 killing process with pid 144198 00:28:17.939 00:54:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 144198 00:28:17.939 [2024-07-25 00:54:40.345577] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:17.939 00:54:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 144198 00:28:18.198 [2024-07-25 00:54:40.646648] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:19.579 00:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.2Y3WiCiPKB 00:28:19.579 00:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:28:19.579 00:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:28:19.579 ************************************ 00:28:19.579 END TEST raid_read_error_test 00:28:19.579 ************************************ 00:28:19.579 00:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:28:19.579 00:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:28:19.579 00:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:28:19.579 00:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:28:19.579 00:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:28:19.579 00:28:19.579 real 0m8.639s 00:28:19.579 user 0m12.849s 00:28:19.579 sys 0m1.239s 00:28:19.579 00:54:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:19.579 00:54:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:19.579 00:54:41 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:28:19.579 00:54:41 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:28:19.579 00:54:41 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:19.579 00:54:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:19.579 ************************************ 00:28:19.579 START TEST raid_write_error_test 00:28:19.579 ************************************ 00:28:19.579 00:54:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 write 00:28:19.579 00:54:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:28:19.579 00:54:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local 
num_base_bdevs=4 00:28:19.579 00:54:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:28:19.579 00:54:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:28:19.579 00:54:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:28:19.579 00:54:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:28:19.579 00:54:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:28:19.579 00:54:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:28:19.579 00:54:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:28:19.579 00:54:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:28:19.579 00:54:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:28:19.579 00:54:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:28:19.579 00:54:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:28:19.579 00:54:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:28:19.579 00:54:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:28:19.579 00:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:28:19.579 00:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:28:19.579 00:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:19.579 00:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:28:19.579 00:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:28:19.579 00:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:28:19.579 00:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:28:19.579 00:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:28:19.579 00:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:28:19.579 00:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:28:19.579 00:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:28:19.579 00:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:28:19.579 00:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.5HMBDuR5VG 00:28:19.579 00:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=144408 00:28:19.579 00:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 144408 /var/tmp/spdk-raid.sock 00:28:19.579 00:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:28:19.579 00:54:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 144408 ']' 00:28:19.579 00:54:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:19.579 00:54:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:28:19.579 00:54:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:19.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:19.579 00:54:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:19.579 00:54:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:19.579 [2024-07-25 00:54:42.097523] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:28:19.579 [2024-07-25 00:54:42.097977] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144408 ] 00:28:19.839 [2024-07-25 00:54:42.273074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.839 [2024-07-25 00:54:42.454157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.099 [2024-07-25 00:54:42.640854] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:20.386 00:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:20.386 00:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:28:20.645 00:54:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:28:20.645 00:54:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:20.645 BaseBdev1_malloc 00:28:20.645 00:54:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:28:20.910 true 00:28:20.911 00:54:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:28:21.172 [2024-07-25 00:54:43.731464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:28:21.172 [2024-07-25 00:54:43.731730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:21.172 [2024-07-25 00:54:43.731821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:28:21.172 [2024-07-25 00:54:43.731917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:21.172 [2024-07-25 00:54:43.734229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:21.172 [2024-07-25 00:54:43.734432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:21.172 BaseBdev1 00:28:21.172 00:54:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:28:21.172 00:54:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:21.431 BaseBdev2_malloc 00:28:21.431 00:54:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 
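For reference, each base bdev in these error tests is a three-layer stack: a malloc bdev, an error-injection bdev on top of it, and a passthru bdev that provides the BaseBdevN name the raid consumes. A minimal sketch using the same rpc.py calls traced above (socket path, names, and the 32 MiB / 512-byte geometry simply mirror the trace):

    # build one base bdev for the raid: malloc -> error -> passthru
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_malloc_create 32 512 -b BaseBdev1_malloc               # 32 MiB malloc bdev, 512-byte blocks
    $RPC bdev_error_create BaseBdev1_malloc                          # exposes EE_BaseBdev1_malloc for error injection
    $RPC bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1    # BaseBdev1 is what bdev_raid_create consumes

Errors are later injected against the EE_ bdev (bdev_error_inject_error EE_BaseBdev1_malloc write failure) while the raid itself only ever sees BaseBdev1.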
00:28:21.690 true 00:28:21.690 00:54:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:28:21.949 [2024-07-25 00:54:44.386599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:28:21.949 [2024-07-25 00:54:44.386845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:21.949 [2024-07-25 00:54:44.387006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:21.949 [2024-07-25 00:54:44.387103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:21.949 [2024-07-25 00:54:44.389400] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:21.949 [2024-07-25 00:54:44.389551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:21.949 BaseBdev2 00:28:21.949 00:54:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:28:21.949 00:54:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:22.209 BaseBdev3_malloc 00:28:22.209 00:54:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:28:22.209 true 00:28:22.468 00:54:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:28:22.468 [2024-07-25 00:54:45.020517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:28:22.468 [2024-07-25 00:54:45.020798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:22.468 [2024-07-25 00:54:45.020867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:28:22.468 [2024-07-25 00:54:45.020961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:22.468 [2024-07-25 00:54:45.023246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:22.468 [2024-07-25 00:54:45.023404] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:22.468 BaseBdev3 00:28:22.468 00:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:28:22.468 00:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:28:22.728 BaseBdev4_malloc 00:28:22.728 00:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:28:22.986 true 00:28:22.986 00:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:28:23.245 [2024-07-25 00:54:45.652317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:28:23.245 [2024-07-25 00:54:45.652553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:23.245 
[2024-07-25 00:54:45.652698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:28:23.245 [2024-07-25 00:54:45.652796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:23.245 [2024-07-25 00:54:45.655155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:23.245 [2024-07-25 00:54:45.655312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:28:23.245 BaseBdev4 00:28:23.245 00:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:28:23.246 [2024-07-25 00:54:45.832375] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:23.246 [2024-07-25 00:54:45.834446] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:23.246 [2024-07-25 00:54:45.834659] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:23.246 [2024-07-25 00:54:45.834744] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:23.246 [2024-07-25 00:54:45.835071] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a280 00:28:23.246 [2024-07-25 00:54:45.835169] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:23.246 [2024-07-25 00:54:45.835367] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:28:23.246 [2024-07-25 00:54:45.835824] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a280 00:28:23.246 [2024-07-25 00:54:45.835934] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a280 00:28:23.246 [2024-07-25 00:54:45.836194] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:23.246 00:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:23.246 00:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:23.246 00:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:23.246 00:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:23.246 00:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:23.246 00:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:23.246 00:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:23.246 00:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:23.246 00:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:23.246 00:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:23.246 00:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:23.246 00:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:23.506 00:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:28:23.506 "name": "raid_bdev1", 00:28:23.506 "uuid": "a4659d09-88d8-4bf9-8b36-a871401d44f6", 00:28:23.506 "strip_size_kb": 0, 00:28:23.506 "state": "online", 00:28:23.506 "raid_level": "raid1", 00:28:23.506 "superblock": true, 00:28:23.506 "num_base_bdevs": 4, 00:28:23.506 "num_base_bdevs_discovered": 4, 00:28:23.506 "num_base_bdevs_operational": 4, 00:28:23.506 "base_bdevs_list": [ 00:28:23.506 { 00:28:23.506 "name": "BaseBdev1", 00:28:23.506 "uuid": "35028ad6-f2cd-5c99-897d-e005dac778a0", 00:28:23.506 "is_configured": true, 00:28:23.506 "data_offset": 2048, 00:28:23.506 "data_size": 63488 00:28:23.506 }, 00:28:23.506 { 00:28:23.506 "name": "BaseBdev2", 00:28:23.506 "uuid": "4e08b838-d8ae-5096-8660-812ace434c87", 00:28:23.506 "is_configured": true, 00:28:23.506 "data_offset": 2048, 00:28:23.506 "data_size": 63488 00:28:23.506 }, 00:28:23.506 { 00:28:23.506 "name": "BaseBdev3", 00:28:23.506 "uuid": "418a0914-ebab-528b-9e7e-535f0ddebb3b", 00:28:23.506 "is_configured": true, 00:28:23.506 "data_offset": 2048, 00:28:23.506 "data_size": 63488 00:28:23.506 }, 00:28:23.506 { 00:28:23.506 "name": "BaseBdev4", 00:28:23.506 "uuid": "20133b4a-e9f1-5b81-833f-4bfddc6e4118", 00:28:23.506 "is_configured": true, 00:28:23.506 "data_offset": 2048, 00:28:23.506 "data_size": 63488 00:28:23.506 } 00:28:23.506 ] 00:28:23.506 }' 00:28:23.506 00:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:23.506 00:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.075 00:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:28:24.075 00:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:28:24.075 [2024-07-25 00:54:46.653730] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:28:25.014 00:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:28:25.273 [2024-07-25 00:54:47.836063] bdev_raid.c:2247:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:28:25.273 [2024-07-25 00:54:47.836446] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:25.273 [2024-07-25 00:54:47.836745] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:28:25.273 00:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:28:25.273 00:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:28:25.273 00:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:28:25.273 00:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=3 00:28:25.273 00:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:25.273 00:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:25.273 00:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:25.273 00:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:25.273 00:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # 
local strip_size=0 00:28:25.273 00:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:25.273 00:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:25.273 00:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:25.273 00:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:25.273 00:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:25.273 00:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:25.273 00:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:25.532 00:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:25.532 "name": "raid_bdev1", 00:28:25.532 "uuid": "a4659d09-88d8-4bf9-8b36-a871401d44f6", 00:28:25.532 "strip_size_kb": 0, 00:28:25.532 "state": "online", 00:28:25.532 "raid_level": "raid1", 00:28:25.532 "superblock": true, 00:28:25.532 "num_base_bdevs": 4, 00:28:25.532 "num_base_bdevs_discovered": 3, 00:28:25.532 "num_base_bdevs_operational": 3, 00:28:25.532 "base_bdevs_list": [ 00:28:25.532 { 00:28:25.532 "name": null, 00:28:25.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:25.532 "is_configured": false, 00:28:25.532 "data_offset": 2048, 00:28:25.532 "data_size": 63488 00:28:25.532 }, 00:28:25.532 { 00:28:25.532 "name": "BaseBdev2", 00:28:25.532 "uuid": "4e08b838-d8ae-5096-8660-812ace434c87", 00:28:25.532 "is_configured": true, 00:28:25.532 "data_offset": 2048, 00:28:25.532 "data_size": 63488 00:28:25.532 }, 00:28:25.532 { 00:28:25.532 "name": "BaseBdev3", 00:28:25.532 "uuid": "418a0914-ebab-528b-9e7e-535f0ddebb3b", 00:28:25.532 "is_configured": true, 00:28:25.532 "data_offset": 2048, 00:28:25.532 "data_size": 63488 00:28:25.532 }, 00:28:25.532 { 00:28:25.532 "name": "BaseBdev4", 00:28:25.532 "uuid": "20133b4a-e9f1-5b81-833f-4bfddc6e4118", 00:28:25.532 "is_configured": true, 00:28:25.532 "data_offset": 2048, 00:28:25.532 "data_size": 63488 00:28:25.532 } 00:28:25.532 ] 00:28:25.532 }' 00:28:25.532 00:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:25.532 00:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.102 00:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:26.361 [2024-07-25 00:54:48.901912] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:26.361 [2024-07-25 00:54:48.902203] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:26.361 [2024-07-25 00:54:48.904713] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:26.361 [2024-07-25 00:54:48.904868] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:26.361 [2024-07-25 00:54:48.904988] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:26.361 [2024-07-25 00:54:48.905083] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state offline 00:28:26.361 0 00:28:26.361 00:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# killprocess 144408 00:28:26.361 00:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 144408 ']' 00:28:26.361 00:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 144408 00:28:26.361 00:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:28:26.361 00:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:26.361 00:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 144408 00:28:26.361 00:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:26.361 00:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:26.361 00:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 144408' 00:28:26.361 killing process with pid 144408 00:28:26.361 00:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 144408 00:28:26.361 [2024-07-25 00:54:48.952463] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:26.361 00:54:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 144408 00:28:26.621 [2024-07-25 00:54:49.240631] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:28.001 00:54:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.5HMBDuR5VG 00:28:28.001 00:54:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:28:28.001 00:54:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:28:28.001 ************************************ 00:28:28.001 END TEST raid_write_error_test 00:28:28.001 ************************************ 00:28:28.001 00:54:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:28:28.001 00:54:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:28:28.001 00:54:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:28:28.001 00:54:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:28:28.001 00:54:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:28:28.001 00:28:28.001 real 0m8.511s 00:28:28.001 user 0m12.650s 00:28:28.001 sys 0m1.190s 00:28:28.001 00:54:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:28.001 00:54:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.001 00:54:50 bdev_raid -- bdev/bdev_raid.sh@875 -- # '[' true = true ']' 00:28:28.001 00:54:50 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:28:28.001 00:54:50 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:28:28.001 00:54:50 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:28:28.001 00:54:50 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:28.001 00:54:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:28.001 ************************************ 00:28:28.001 START TEST raid_rebuild_test 00:28:28.001 ************************************ 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 false false true 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- 
# local raid_level=raid1 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=144619 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 144619 /var/tmp/spdk-raid.sock 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 144619 ']' 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:28.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
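waitforlisten above simply blocks until the freshly launched bdevperf process answers on the Unix-domain RPC socket. An illustrative equivalent, not the helper's actual implementation, would poll a lightweight RPC such as spdk_get_version:

    # poll the RPC socket until the bdevperf target is ready to accept RPCs
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    until $RPC spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done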
00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:28.001 00:54:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.261 [2024-07-25 00:54:50.674061] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:28:28.261 [2024-07-25 00:54:50.674490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144619 ] 00:28:28.261 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:28.261 Zero copy mechanism will not be used. 00:28:28.261 [2024-07-25 00:54:50.855886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.520 [2024-07-25 00:54:51.044966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.780 [2024-07-25 00:54:51.230168] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:29.039 00:54:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:29.039 00:54:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:28:29.039 00:54:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:29.039 00:54:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:29.298 BaseBdev1_malloc 00:28:29.298 00:54:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:29.558 [2024-07-25 00:54:52.101311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:29.558 [2024-07-25 00:54:52.101541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:29.558 [2024-07-25 00:54:52.101664] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:28:29.558 [2024-07-25 00:54:52.101779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:29.558 [2024-07-25 00:54:52.104251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:29.558 [2024-07-25 00:54:52.104425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:29.558 BaseBdev1 00:28:29.558 00:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:29.558 00:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:29.817 BaseBdev2_malloc 00:28:29.817 00:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:30.076 [2024-07-25 00:54:52.526856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:30.076 [2024-07-25 00:54:52.527132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:30.076 [2024-07-25 00:54:52.527277] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:28:30.076 [2024-07-25 00:54:52.527369] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:30.076 [2024-07-25 00:54:52.529693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:30.076 [2024-07-25 00:54:52.529854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:30.076 BaseBdev2 00:28:30.076 00:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:28:30.335 spare_malloc 00:28:30.335 00:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:30.335 spare_delay 00:28:30.594 00:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:30.594 [2024-07-25 00:54:53.157374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:30.594 [2024-07-25 00:54:53.157669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:30.594 [2024-07-25 00:54:53.157741] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:28:30.594 [2024-07-25 00:54:53.157976] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:30.594 [2024-07-25 00:54:53.160301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:30.594 [2024-07-25 00:54:53.160460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:30.594 spare 00:28:30.594 00:54:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:28:30.870 [2024-07-25 00:54:53.338375] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:30.870 [2024-07-25 00:54:53.340480] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:30.870 [2024-07-25 00:54:53.340725] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:28:30.870 [2024-07-25 00:54:53.340769] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:28:30.870 [2024-07-25 00:54:53.340998] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:28:30.870 [2024-07-25 00:54:53.341420] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:28:30.870 [2024-07-25 00:54:53.341526] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:28:30.870 [2024-07-25 00:54:53.341767] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:30.870 00:54:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:30.870 00:54:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:30.870 00:54:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:30.870 00:54:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:30.870 00:54:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 
00:28:30.870 00:54:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:30.870 00:54:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:30.870 00:54:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:30.870 00:54:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:30.870 00:54:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:30.870 00:54:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:30.870 00:54:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:31.129 00:54:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:31.129 "name": "raid_bdev1", 00:28:31.129 "uuid": "be8e0cde-0245-461e-a6e2-89ad5678e435", 00:28:31.129 "strip_size_kb": 0, 00:28:31.129 "state": "online", 00:28:31.129 "raid_level": "raid1", 00:28:31.129 "superblock": false, 00:28:31.129 "num_base_bdevs": 2, 00:28:31.129 "num_base_bdevs_discovered": 2, 00:28:31.129 "num_base_bdevs_operational": 2, 00:28:31.129 "base_bdevs_list": [ 00:28:31.129 { 00:28:31.129 "name": "BaseBdev1", 00:28:31.129 "uuid": "cb5f4a77-4c16-5239-afd5-1929330f9ea7", 00:28:31.129 "is_configured": true, 00:28:31.129 "data_offset": 0, 00:28:31.129 "data_size": 65536 00:28:31.129 }, 00:28:31.129 { 00:28:31.129 "name": "BaseBdev2", 00:28:31.129 "uuid": "d7cd5a97-0de0-50c3-8304-c1ae45c377a6", 00:28:31.129 "is_configured": true, 00:28:31.129 "data_offset": 0, 00:28:31.129 "data_size": 65536 00:28:31.129 } 00:28:31.129 ] 00:28:31.129 }' 00:28:31.129 00:54:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:31.129 00:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.697 00:54:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:31.697 00:54:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:28:31.956 [2024-07-25 00:54:54.358777] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:31.956 00:54:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:28:31.956 00:54:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.956 00:54:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:31.956 00:54:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:28:31.956 00:54:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:28:31.956 00:54:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:28:31.956 00:54:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:28:31.956 00:54:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:28:31.956 00:54:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:31.956 00:54:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:28:31.956 
00:54:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:31.956 00:54:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:31.956 00:54:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:31.956 00:54:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:28:31.956 00:54:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:31.956 00:54:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:31.956 00:54:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:28:32.215 [2024-07-25 00:54:54.754931] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:28:32.215 /dev/nbd0 00:28:32.215 00:54:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:32.215 00:54:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:32.215 00:54:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:32.215 00:54:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:28:32.215 00:54:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:32.215 00:54:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:32.215 00:54:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:32.215 00:54:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:28:32.215 00:54:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:32.215 00:54:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:32.215 00:54:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:32.215 1+0 records in 00:28:32.216 1+0 records out 00:28:32.216 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405061 s, 10.1 MB/s 00:28:32.216 00:54:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:32.216 00:54:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:28:32.216 00:54:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:32.216 00:54:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:32.216 00:54:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:28:32.216 00:54:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:32.216 00:54:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:32.216 00:54:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:28:32.216 00:54:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:28:32.216 00:54:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:28:37.487 65536+0 records in 00:28:37.487 65536+0 records out 00:28:37.487 33554432 bytes (34 MB, 32 MiB) copied, 4.75428 s, 7.1 MB/s 00:28:37.487 00:54:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # 
nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:37.487 00:54:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:37.487 00:54:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:37.487 00:54:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:37.487 00:54:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:28:37.487 00:54:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:37.487 00:54:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:37.487 00:54:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:37.487 [2024-07-25 00:54:59.835245] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:37.487 00:54:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:37.487 00:54:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:37.487 00:54:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:37.487 00:54:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:37.487 00:54:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:37.487 00:54:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:28:37.487 00:54:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:37.487 00:54:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:37.487 [2024-07-25 00:54:59.994992] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:37.487 00:55:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:37.487 00:55:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:37.487 00:55:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:37.487 00:55:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:37.487 00:55:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:37.487 00:55:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:37.487 00:55:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:37.487 00:55:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:37.487 00:55:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:37.487 00:55:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:37.487 00:55:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:37.487 00:55:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:37.745 00:55:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:37.745 "name": "raid_bdev1", 00:28:37.745 "uuid": "be8e0cde-0245-461e-a6e2-89ad5678e435", 00:28:37.745 "strip_size_kb": 0, 00:28:37.745 "state": "online", 
00:28:37.745 "raid_level": "raid1", 00:28:37.745 "superblock": false, 00:28:37.745 "num_base_bdevs": 2, 00:28:37.745 "num_base_bdevs_discovered": 1, 00:28:37.745 "num_base_bdevs_operational": 1, 00:28:37.745 "base_bdevs_list": [ 00:28:37.745 { 00:28:37.745 "name": null, 00:28:37.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:37.745 "is_configured": false, 00:28:37.745 "data_offset": 0, 00:28:37.745 "data_size": 65536 00:28:37.745 }, 00:28:37.745 { 00:28:37.745 "name": "BaseBdev2", 00:28:37.745 "uuid": "d7cd5a97-0de0-50c3-8304-c1ae45c377a6", 00:28:37.745 "is_configured": true, 00:28:37.745 "data_offset": 0, 00:28:37.745 "data_size": 65536 00:28:37.745 } 00:28:37.745 ] 00:28:37.745 }' 00:28:37.746 00:55:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:37.746 00:55:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:38.312 00:55:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:38.312 [2024-07-25 00:55:00.863161] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:38.312 [2024-07-25 00:55:00.878640] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09960 00:28:38.312 [2024-07-25 00:55:00.880681] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:38.312 00:55:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:28:39.248 00:55:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:39.248 00:55:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:39.248 00:55:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:39.248 00:55:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:39.248 00:55:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:39.508 00:55:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:39.508 00:55:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:39.508 00:55:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:39.508 "name": "raid_bdev1", 00:28:39.508 "uuid": "be8e0cde-0245-461e-a6e2-89ad5678e435", 00:28:39.508 "strip_size_kb": 0, 00:28:39.508 "state": "online", 00:28:39.508 "raid_level": "raid1", 00:28:39.508 "superblock": false, 00:28:39.508 "num_base_bdevs": 2, 00:28:39.508 "num_base_bdevs_discovered": 2, 00:28:39.508 "num_base_bdevs_operational": 2, 00:28:39.508 "process": { 00:28:39.508 "type": "rebuild", 00:28:39.508 "target": "spare", 00:28:39.508 "progress": { 00:28:39.508 "blocks": 24576, 00:28:39.508 "percent": 37 00:28:39.508 } 00:28:39.508 }, 00:28:39.508 "base_bdevs_list": [ 00:28:39.508 { 00:28:39.508 "name": "spare", 00:28:39.508 "uuid": "0bf7c4fd-b1d1-5b04-8e8a-d2de870e179f", 00:28:39.508 "is_configured": true, 00:28:39.508 "data_offset": 0, 00:28:39.508 "data_size": 65536 00:28:39.508 }, 00:28:39.508 { 00:28:39.508 "name": "BaseBdev2", 00:28:39.508 "uuid": "d7cd5a97-0de0-50c3-8304-c1ae45c377a6", 00:28:39.508 "is_configured": true, 00:28:39.508 "data_offset": 0, 00:28:39.508 "data_size": 65536 00:28:39.508 } 
00:28:39.508 ] 00:28:39.508 }' 00:28:39.508 00:55:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:39.766 00:55:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:39.766 00:55:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:39.766 00:55:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:39.766 00:55:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:40.025 [2024-07-25 00:55:02.454272] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:40.025 [2024-07-25 00:55:02.489897] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:40.025 [2024-07-25 00:55:02.490098] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:40.025 [2024-07-25 00:55:02.490144] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:40.025 [2024-07-25 00:55:02.490214] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:40.025 00:55:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:40.025 00:55:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:40.025 00:55:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:40.025 00:55:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:40.025 00:55:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:40.025 00:55:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:40.025 00:55:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:40.025 00:55:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:40.025 00:55:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:40.025 00:55:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:40.025 00:55:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:40.025 00:55:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:40.284 00:55:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:40.284 "name": "raid_bdev1", 00:28:40.284 "uuid": "be8e0cde-0245-461e-a6e2-89ad5678e435", 00:28:40.284 "strip_size_kb": 0, 00:28:40.284 "state": "online", 00:28:40.284 "raid_level": "raid1", 00:28:40.284 "superblock": false, 00:28:40.284 "num_base_bdevs": 2, 00:28:40.284 "num_base_bdevs_discovered": 1, 00:28:40.284 "num_base_bdevs_operational": 1, 00:28:40.284 "base_bdevs_list": [ 00:28:40.284 { 00:28:40.284 "name": null, 00:28:40.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:40.284 "is_configured": false, 00:28:40.284 "data_offset": 0, 00:28:40.284 "data_size": 65536 00:28:40.284 }, 00:28:40.284 { 00:28:40.284 "name": "BaseBdev2", 00:28:40.284 "uuid": "d7cd5a97-0de0-50c3-8304-c1ae45c377a6", 00:28:40.284 "is_configured": true, 00:28:40.284 "data_offset": 0, 
00:28:40.284 "data_size": 65536 00:28:40.284 } 00:28:40.284 ] 00:28:40.284 }' 00:28:40.284 00:55:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:40.284 00:55:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:40.851 00:55:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:40.851 00:55:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:40.851 00:55:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:40.851 00:55:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:40.851 00:55:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:40.851 00:55:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:40.851 00:55:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:40.851 00:55:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:40.851 "name": "raid_bdev1", 00:28:40.851 "uuid": "be8e0cde-0245-461e-a6e2-89ad5678e435", 00:28:40.851 "strip_size_kb": 0, 00:28:40.851 "state": "online", 00:28:40.851 "raid_level": "raid1", 00:28:40.851 "superblock": false, 00:28:40.851 "num_base_bdevs": 2, 00:28:40.851 "num_base_bdevs_discovered": 1, 00:28:40.851 "num_base_bdevs_operational": 1, 00:28:40.851 "base_bdevs_list": [ 00:28:40.851 { 00:28:40.851 "name": null, 00:28:40.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:40.851 "is_configured": false, 00:28:40.851 "data_offset": 0, 00:28:40.851 "data_size": 65536 00:28:40.851 }, 00:28:40.851 { 00:28:40.851 "name": "BaseBdev2", 00:28:40.851 "uuid": "d7cd5a97-0de0-50c3-8304-c1ae45c377a6", 00:28:40.851 "is_configured": true, 00:28:40.851 "data_offset": 0, 00:28:40.851 "data_size": 65536 00:28:40.851 } 00:28:40.851 ] 00:28:40.851 }' 00:28:40.851 00:55:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:41.109 00:55:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:41.109 00:55:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:41.109 00:55:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:41.109 00:55:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:41.367 [2024-07-25 00:55:03.836803] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:41.367 [2024-07-25 00:55:03.851846] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09b00 00:28:41.367 [2024-07-25 00:55:03.853915] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:41.367 00:55:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:42.303 00:55:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:42.303 00:55:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:42.303 00:55:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:42.303 00:55:04 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:42.303 00:55:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:42.303 00:55:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:42.303 00:55:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:42.561 00:55:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:42.561 "name": "raid_bdev1", 00:28:42.561 "uuid": "be8e0cde-0245-461e-a6e2-89ad5678e435", 00:28:42.561 "strip_size_kb": 0, 00:28:42.561 "state": "online", 00:28:42.561 "raid_level": "raid1", 00:28:42.561 "superblock": false, 00:28:42.561 "num_base_bdevs": 2, 00:28:42.561 "num_base_bdevs_discovered": 2, 00:28:42.561 "num_base_bdevs_operational": 2, 00:28:42.561 "process": { 00:28:42.561 "type": "rebuild", 00:28:42.561 "target": "spare", 00:28:42.561 "progress": { 00:28:42.561 "blocks": 24576, 00:28:42.561 "percent": 37 00:28:42.561 } 00:28:42.561 }, 00:28:42.561 "base_bdevs_list": [ 00:28:42.561 { 00:28:42.561 "name": "spare", 00:28:42.561 "uuid": "0bf7c4fd-b1d1-5b04-8e8a-d2de870e179f", 00:28:42.561 "is_configured": true, 00:28:42.561 "data_offset": 0, 00:28:42.561 "data_size": 65536 00:28:42.561 }, 00:28:42.561 { 00:28:42.561 "name": "BaseBdev2", 00:28:42.561 "uuid": "d7cd5a97-0de0-50c3-8304-c1ae45c377a6", 00:28:42.561 "is_configured": true, 00:28:42.561 "data_offset": 0, 00:28:42.561 "data_size": 65536 00:28:42.561 } 00:28:42.561 ] 00:28:42.561 }' 00:28:42.561 00:55:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:42.561 00:55:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:42.561 00:55:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:42.561 00:55:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:42.561 00:55:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:28:42.561 00:55:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:28:42.561 00:55:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:28:42.561 00:55:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:28:42.561 00:55:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=782 00:28:42.561 00:55:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:42.561 00:55:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:42.561 00:55:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:42.561 00:55:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:42.561 00:55:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:42.561 00:55:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:42.561 00:55:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:42.561 00:55:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
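What the trace keeps repeating here is the test's rebuild-progress poll: re-read raid_bdev1 over the JSON-RPC socket, confirm that a rebuild process targeting the spare is still reported, sleep a second, and give up at the timeout. A standalone sketch of that loop, assuming the same rpc.py path and /var/tmp/spdk-raid.sock socket shown in the trace and an arbitrary 60 s budget, would be:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    deadline=$((SECONDS + 60))      # the 60 s budget is an assumption for this sketch
    while (( SECONDS < deadline )); do
        info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        ptype=$(jq -r '.process.type // "none"' <<<"$info")
        ptarget=$(jq -r '.process.target // "none"' <<<"$info")
        [[ $ptype == none ]] && break      # process object gone: the rebuild has finished
        [[ $ptype == rebuild && $ptarget == spare ]] || exit 1
        sleep 1
    done

The '// "none"' jq fallback is what lets the same filter serve both as a progress check and as a completion check, as the following iterations show.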
00:28:42.821 00:55:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:42.821 "name": "raid_bdev1", 00:28:42.821 "uuid": "be8e0cde-0245-461e-a6e2-89ad5678e435", 00:28:42.821 "strip_size_kb": 0, 00:28:42.821 "state": "online", 00:28:42.821 "raid_level": "raid1", 00:28:42.821 "superblock": false, 00:28:42.821 "num_base_bdevs": 2, 00:28:42.821 "num_base_bdevs_discovered": 2, 00:28:42.821 "num_base_bdevs_operational": 2, 00:28:42.821 "process": { 00:28:42.821 "type": "rebuild", 00:28:42.821 "target": "spare", 00:28:42.821 "progress": { 00:28:42.821 "blocks": 30720, 00:28:42.821 "percent": 46 00:28:42.821 } 00:28:42.821 }, 00:28:42.821 "base_bdevs_list": [ 00:28:42.821 { 00:28:42.821 "name": "spare", 00:28:42.821 "uuid": "0bf7c4fd-b1d1-5b04-8e8a-d2de870e179f", 00:28:42.821 "is_configured": true, 00:28:42.821 "data_offset": 0, 00:28:42.821 "data_size": 65536 00:28:42.821 }, 00:28:42.821 { 00:28:42.821 "name": "BaseBdev2", 00:28:42.821 "uuid": "d7cd5a97-0de0-50c3-8304-c1ae45c377a6", 00:28:42.821 "is_configured": true, 00:28:42.821 "data_offset": 0, 00:28:42.821 "data_size": 65536 00:28:42.821 } 00:28:42.821 ] 00:28:42.821 }' 00:28:42.821 00:55:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:42.821 00:55:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:42.821 00:55:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:43.080 00:55:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:43.080 00:55:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:44.067 00:55:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:44.067 00:55:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:44.067 00:55:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:44.067 00:55:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:44.067 00:55:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:44.067 00:55:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:44.067 00:55:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:44.067 00:55:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:44.326 00:55:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:44.326 "name": "raid_bdev1", 00:28:44.326 "uuid": "be8e0cde-0245-461e-a6e2-89ad5678e435", 00:28:44.326 "strip_size_kb": 0, 00:28:44.326 "state": "online", 00:28:44.326 "raid_level": "raid1", 00:28:44.326 "superblock": false, 00:28:44.326 "num_base_bdevs": 2, 00:28:44.326 "num_base_bdevs_discovered": 2, 00:28:44.326 "num_base_bdevs_operational": 2, 00:28:44.326 "process": { 00:28:44.326 "type": "rebuild", 00:28:44.326 "target": "spare", 00:28:44.326 "progress": { 00:28:44.326 "blocks": 57344, 00:28:44.326 "percent": 87 00:28:44.326 } 00:28:44.326 }, 00:28:44.326 "base_bdevs_list": [ 00:28:44.326 { 00:28:44.326 "name": "spare", 00:28:44.326 "uuid": "0bf7c4fd-b1d1-5b04-8e8a-d2de870e179f", 00:28:44.326 "is_configured": true, 00:28:44.326 "data_offset": 0, 00:28:44.326 "data_size": 65536 
00:28:44.326 }, 00:28:44.326 { 00:28:44.326 "name": "BaseBdev2", 00:28:44.326 "uuid": "d7cd5a97-0de0-50c3-8304-c1ae45c377a6", 00:28:44.326 "is_configured": true, 00:28:44.326 "data_offset": 0, 00:28:44.326 "data_size": 65536 00:28:44.326 } 00:28:44.326 ] 00:28:44.326 }' 00:28:44.326 00:55:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:44.326 00:55:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:44.326 00:55:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:44.326 00:55:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:44.326 00:55:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:44.586 [2024-07-25 00:55:07.071584] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:44.586 [2024-07-25 00:55:07.071785] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:44.586 [2024-07-25 00:55:07.071938] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:45.522 00:55:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:45.522 00:55:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:45.522 00:55:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:45.522 00:55:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:45.522 00:55:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:45.522 00:55:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:45.522 00:55:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:45.522 00:55:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:45.522 00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:45.522 "name": "raid_bdev1", 00:28:45.522 "uuid": "be8e0cde-0245-461e-a6e2-89ad5678e435", 00:28:45.522 "strip_size_kb": 0, 00:28:45.522 "state": "online", 00:28:45.522 "raid_level": "raid1", 00:28:45.522 "superblock": false, 00:28:45.522 "num_base_bdevs": 2, 00:28:45.522 "num_base_bdevs_discovered": 2, 00:28:45.522 "num_base_bdevs_operational": 2, 00:28:45.522 "base_bdevs_list": [ 00:28:45.522 { 00:28:45.522 "name": "spare", 00:28:45.522 "uuid": "0bf7c4fd-b1d1-5b04-8e8a-d2de870e179f", 00:28:45.522 "is_configured": true, 00:28:45.522 "data_offset": 0, 00:28:45.522 "data_size": 65536 00:28:45.522 }, 00:28:45.522 { 00:28:45.522 "name": "BaseBdev2", 00:28:45.522 "uuid": "d7cd5a97-0de0-50c3-8304-c1ae45c377a6", 00:28:45.522 "is_configured": true, 00:28:45.522 "data_offset": 0, 00:28:45.522 "data_size": 65536 00:28:45.522 } 00:28:45.522 ] 00:28:45.522 }' 00:28:45.522 00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:45.522 00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:45.522 00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:45.522 00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:28:45.522 
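At this point '.process.type // "none"' has come back as none, so the rebuild process object is gone and the loop breaks. The remaining assertions in the trace reduce to a few reads of the same RPC output; a condensed sketch of that final-state check (same socket and bdev name as above) is:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    jq -r '.state'                      <<<"$info"   # expect "online"
    jq -r '.process.type // "none"'     <<<"$info"   # expect "none": no rebuild in flight
    jq -r '.num_base_bdevs_operational' <<<"$info"   # expect 2 once the spare has been rebuilt in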
00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:28:45.522 00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:45.522 00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:45.522 00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:45.522 00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:45.522 00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:45.781 00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:45.781 00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:45.781 00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:45.781 "name": "raid_bdev1", 00:28:45.781 "uuid": "be8e0cde-0245-461e-a6e2-89ad5678e435", 00:28:45.781 "strip_size_kb": 0, 00:28:45.781 "state": "online", 00:28:45.781 "raid_level": "raid1", 00:28:45.781 "superblock": false, 00:28:45.781 "num_base_bdevs": 2, 00:28:45.781 "num_base_bdevs_discovered": 2, 00:28:45.781 "num_base_bdevs_operational": 2, 00:28:45.781 "base_bdevs_list": [ 00:28:45.781 { 00:28:45.781 "name": "spare", 00:28:45.781 "uuid": "0bf7c4fd-b1d1-5b04-8e8a-d2de870e179f", 00:28:45.781 "is_configured": true, 00:28:45.781 "data_offset": 0, 00:28:45.781 "data_size": 65536 00:28:45.781 }, 00:28:45.781 { 00:28:45.781 "name": "BaseBdev2", 00:28:45.781 "uuid": "d7cd5a97-0de0-50c3-8304-c1ae45c377a6", 00:28:45.781 "is_configured": true, 00:28:45.781 "data_offset": 0, 00:28:45.781 "data_size": 65536 00:28:45.781 } 00:28:45.781 ] 00:28:45.781 }' 00:28:46.039 00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:46.039 00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:46.039 00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:46.039 00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:46.039 00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:46.039 00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:46.039 00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:46.039 00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:46.039 00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:46.039 00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:46.039 00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:46.039 00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:46.039 00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:46.039 00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:46.039 00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:28:46.039 00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:46.297 00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:46.297 "name": "raid_bdev1", 00:28:46.297 "uuid": "be8e0cde-0245-461e-a6e2-89ad5678e435", 00:28:46.297 "strip_size_kb": 0, 00:28:46.297 "state": "online", 00:28:46.297 "raid_level": "raid1", 00:28:46.297 "superblock": false, 00:28:46.297 "num_base_bdevs": 2, 00:28:46.297 "num_base_bdevs_discovered": 2, 00:28:46.297 "num_base_bdevs_operational": 2, 00:28:46.297 "base_bdevs_list": [ 00:28:46.297 { 00:28:46.297 "name": "spare", 00:28:46.297 "uuid": "0bf7c4fd-b1d1-5b04-8e8a-d2de870e179f", 00:28:46.297 "is_configured": true, 00:28:46.297 "data_offset": 0, 00:28:46.297 "data_size": 65536 00:28:46.297 }, 00:28:46.297 { 00:28:46.297 "name": "BaseBdev2", 00:28:46.297 "uuid": "d7cd5a97-0de0-50c3-8304-c1ae45c377a6", 00:28:46.297 "is_configured": true, 00:28:46.297 "data_offset": 0, 00:28:46.297 "data_size": 65536 00:28:46.297 } 00:28:46.297 ] 00:28:46.297 }' 00:28:46.297 00:55:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:46.297 00:55:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:46.865 00:55:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:47.124 [2024-07-25 00:55:09.599209] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:47.124 [2024-07-25 00:55:09.599406] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:47.124 [2024-07-25 00:55:09.599685] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:47.124 [2024-07-25 00:55:09.599845] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:47.124 [2024-07-25 00:55:09.599924] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:28:47.124 00:55:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:47.124 00:55:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:28:47.383 00:55:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:28:47.383 00:55:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:28:47.383 00:55:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:28:47.383 00:55:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:28:47.383 00:55:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:47.383 00:55:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:28:47.383 00:55:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:47.383 00:55:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:47.383 00:55:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:47.383 00:55:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:28:47.383 00:55:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 
-- # (( i = 0 )) 00:28:47.383 00:55:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:47.383 00:55:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:28:47.642 /dev/nbd0 00:28:47.642 00:55:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:47.642 00:55:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:47.642 00:55:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:47.642 00:55:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:28:47.642 00:55:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:47.642 00:55:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:47.642 00:55:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:47.642 00:55:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:28:47.642 00:55:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:47.642 00:55:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:47.642 00:55:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:47.642 1+0 records in 00:28:47.642 1+0 records out 00:28:47.642 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000603367 s, 6.8 MB/s 00:28:47.642 00:55:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:47.642 00:55:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:28:47.642 00:55:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:47.642 00:55:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:47.643 00:55:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:28:47.643 00:55:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:47.643 00:55:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:47.643 00:55:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:28:47.901 /dev/nbd1 00:28:47.901 00:55:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:47.901 00:55:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:47.901 00:55:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:28:47.901 00:55:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:28:47.901 00:55:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:47.901 00:55:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:47.901 00:55:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:28:47.901 00:55:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:28:47.901 00:55:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 
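Each nbd_start_disk above is followed by the harness's waitfornbd helper; from the trace it loops until the new device appears in /proc/partitions and then proves it answers I/O with one direct 4 KiB read. A simplified reconstruction (the loop bound and dd arguments are taken from the trace; the sleep interval and scratch-file path are assumptions, the real helper may differ):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                     # assumed back-off between checks
        done
        # One direct read proves the block device actually services I/O.
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        stat -c %s /tmp/nbdtest
        rm -f /tmp/nbdtest
    }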
00:28:47.901 00:55:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:47.901 00:55:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:47.901 1+0 records in 00:28:47.901 1+0 records out 00:28:47.901 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479009 s, 8.6 MB/s 00:28:47.901 00:55:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:47.901 00:55:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:28:47.901 00:55:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:47.901 00:55:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:47.901 00:55:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:28:47.901 00:55:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:47.901 00:55:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:47.901 00:55:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:28:48.160 00:55:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:28:48.160 00:55:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:48.160 00:55:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:48.160 00:55:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:48.160 00:55:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:28:48.160 00:55:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:48.160 00:55:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:48.419 00:55:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:48.419 00:55:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:48.419 00:55:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:48.419 00:55:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:48.419 00:55:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:48.419 00:55:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:48.419 00:55:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:28:48.419 00:55:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:48.419 00:55:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:48.419 00:55:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:28:48.678 00:55:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:48.678 00:55:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:48.678 00:55:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:48.678 00:55:11 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:48.678 00:55:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:48.678 00:55:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:48.678 00:55:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:28:48.678 00:55:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:48.678 00:55:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:28:48.678 00:55:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 144619 00:28:48.678 00:55:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 144619 ']' 00:28:48.678 00:55:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 144619 00:28:48.678 00:55:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:28:48.678 00:55:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:48.678 00:55:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 144619 00:28:48.678 killing process with pid 144619 00:28:48.678 Received shutdown signal, test time was about 60.000000 seconds 00:28:48.678 00:28:48.678 Latency(us) 00:28:48.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:48.678 =================================================================================================================== 00:28:48.678 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:48.678 00:55:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:48.678 00:55:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:48.678 00:55:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 144619' 00:28:48.678 00:55:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # kill 144619 00:28:48.678 00:55:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # wait 144619 00:28:48.678 [2024-07-25 00:55:11.180452] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:48.937 [2024-07-25 00:55:11.447102] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:50.315 ************************************ 00:28:50.315 END TEST raid_rebuild_test 00:28:50.315 ************************************ 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:28:50.315 00:28:50.315 real 0m22.063s 00:28:50.315 user 0m29.669s 00:28:50.315 sys 0m4.307s 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:50.315 00:55:12 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:28:50.315 00:55:12 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:28:50.315 00:55:12 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:50.315 00:55:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:50.315 ************************************ 00:28:50.315 START TEST raid_rebuild_test_sb 00:28:50.315 ************************************ 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 
00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=145156 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 145156 /var/tmp/spdk-raid.sock 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 145156 ']' 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
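The sb variant drives everything through the bdevperf example app launched just above with -z, so it sits idle until the test drives it over RPC; the harness's waitforlisten then blocks until the new socket responds. A rough equivalent of that launch-and-wait, with a simple polling loop standing in for waitforlisten (the rpc_get_methods probe is an assumption about how to detect readiness, not a copy of the helper):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    sock=/var/tmp/spdk-raid.sock
    "$bdevperf" -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # Wait until the app answers JSON-RPC on its socket.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done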
00:28:50.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:50.315 00:55:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:50.315 [2024-07-25 00:55:12.819114] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:28:50.315 [2024-07-25 00:55:12.819567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145156 ] 00:28:50.315 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:50.315 Zero copy mechanism will not be used. 00:28:50.574 [2024-07-25 00:55:12.999716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.574 [2024-07-25 00:55:13.179600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.833 [2024-07-25 00:55:13.374355] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:51.092 00:55:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:51.092 00:55:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:28:51.092 00:55:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:51.092 00:55:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:51.350 BaseBdev1_malloc 00:28:51.608 00:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:51.608 [2024-07-25 00:55:14.234091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:51.608 [2024-07-25 00:55:14.234339] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:51.608 [2024-07-25 00:55:14.234430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:28:51.608 [2024-07-25 00:55:14.234531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:51.608 [2024-07-25 00:55:14.236850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:51.608 [2024-07-25 00:55:14.237011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:51.608 BaseBdev1 00:28:51.608 00:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:51.608 00:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:51.871 BaseBdev2_malloc 00:28:51.871 00:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:52.130 [2024-07-25 00:55:14.631219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:52.130 [2024-07-25 00:55:14.631488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:52.130 [2024-07-25 00:55:14.631576] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:28:52.130 [2024-07-25 00:55:14.631683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:52.130 [2024-07-25 00:55:14.633993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:52.130 [2024-07-25 00:55:14.634165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:52.130 BaseBdev2 00:28:52.130 00:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:28:52.389 spare_malloc 00:28:52.389 00:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:52.389 spare_delay 00:28:52.389 00:55:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:52.647 [2024-07-25 00:55:15.192465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:52.647 [2024-07-25 00:55:15.192697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:52.647 [2024-07-25 00:55:15.192767] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:28:52.647 [2024-07-25 00:55:15.192889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:52.647 [2024-07-25 00:55:15.195389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:52.647 [2024-07-25 00:55:15.195541] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:52.647 spare 00:28:52.647 00:55:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:28:52.906 [2024-07-25 00:55:15.432578] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:52.906 [2024-07-25 00:55:15.434646] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:52.906 [2024-07-25 00:55:15.434976] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:28:52.906 [2024-07-25 00:55:15.435081] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:52.906 [2024-07-25 00:55:15.435234] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:28:52.906 [2024-07-25 00:55:15.435709] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:28:52.906 [2024-07-25 00:55:15.435817] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:28:52.906 [2024-07-25 00:55:15.436044] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:52.906 00:55:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:52.906 00:55:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:52.906 00:55:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:52.906 00:55:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:52.906 00:55:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:52.906 00:55:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:52.906 00:55:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:52.906 00:55:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:52.906 00:55:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:52.906 00:55:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:52.906 00:55:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:52.906 00:55:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:53.165 00:55:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:53.165 "name": "raid_bdev1", 00:28:53.165 "uuid": "54ee90e4-dd06-489b-870a-c74e298edb0c", 00:28:53.165 "strip_size_kb": 0, 00:28:53.165 "state": "online", 00:28:53.165 "raid_level": "raid1", 00:28:53.165 "superblock": true, 00:28:53.165 "num_base_bdevs": 2, 00:28:53.165 "num_base_bdevs_discovered": 2, 00:28:53.165 "num_base_bdevs_operational": 2, 00:28:53.165 "base_bdevs_list": [ 00:28:53.165 { 00:28:53.165 "name": "BaseBdev1", 00:28:53.165 "uuid": "1cb715f8-b19b-5c33-851d-edfa1e3ca583", 00:28:53.165 "is_configured": true, 00:28:53.165 "data_offset": 2048, 00:28:53.165 "data_size": 63488 00:28:53.165 }, 00:28:53.165 { 00:28:53.165 "name": "BaseBdev2", 00:28:53.165 "uuid": "9fe0e716-1b32-5ee8-99ce-8a87a7df1048", 00:28:53.165 "is_configured": true, 00:28:53.165 "data_offset": 2048, 00:28:53.165 "data_size": 63488 00:28:53.165 } 00:28:53.165 ] 00:28:53.165 }' 00:28:53.165 00:55:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:53.165 00:55:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:53.733 00:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:53.733 00:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:28:53.992 [2024-07-25 00:55:16.416922] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:53.992 00:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:28:53.992 00:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:53.992 00:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:54.251 00:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:28:54.251 00:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:28:54.251 00:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:28:54.251 00:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:28:54.251 00:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock 
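The device stack assembled just above gives the sb test two passthru-wrapped malloc bdevs for the array plus a delay-backed passthru named spare for the later rebuild, and the array is created with -s so a superblock is written to each member. Condensed to the RPC calls visible in the trace, together with the size and data-offset reads whose results follow below:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc
    $rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    $rpc bdev_malloc_create 32 512 -b BaseBdev2_malloc
    $rpc bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
    $rpc bdev_malloc_create 32 512 -b spare_malloc
    $rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $rpc bdev_passthru_create -b spare_delay -p spare
    $rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
    # With a superblock, 2048 of the 65536 malloc blocks are reserved, leaving 63488 data blocks.
    $rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[].num_blocks'                     # 63488
    $rpc bdev_raid_get_bdevs all | jq -r '.[].base_bdevs_list[0].data_offset'      # 2048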
raid_bdev1 /dev/nbd0 00:28:54.251 00:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:54.251 00:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:28:54.251 00:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:54.251 00:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:54.251 00:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:54.251 00:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:28:54.251 00:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:54.251 00:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:54.251 00:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:28:54.251 [2024-07-25 00:55:16.828809] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:28:54.251 /dev/nbd0 00:28:54.251 00:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:54.251 00:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:54.251 00:55:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:54.251 00:55:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:28:54.251 00:55:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:54.251 00:55:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:54.251 00:55:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:54.251 00:55:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:28:54.252 00:55:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:54.252 00:55:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:54.252 00:55:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:54.252 1+0 records in 00:28:54.252 1+0 records out 00:28:54.252 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000560977 s, 7.3 MB/s 00:28:54.252 00:55:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:54.252 00:55:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:28:54.252 00:55:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:54.511 00:55:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:54.511 00:55:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:28:54.511 00:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:54.511 00:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:54.511 00:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:28:54.511 00:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:28:54.511 
00:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:28:59.785 63488+0 records in 00:28:59.785 63488+0 records out 00:28:59.785 32505856 bytes (33 MB, 31 MiB) copied, 4.68059 s, 6.9 MB/s 00:28:59.785 00:55:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:59.785 00:55:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:59.785 00:55:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:59.785 00:55:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:59.785 00:55:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:28:59.785 00:55:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:59.785 00:55:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:59.785 00:55:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:59.785 00:55:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:59.785 [2024-07-25 00:55:21.858343] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:59.785 00:55:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:59.785 00:55:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:59.785 00:55:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:59.785 00:55:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:59.785 00:55:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:59.785 00:55:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:28:59.785 00:55:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:59.785 [2024-07-25 00:55:22.022092] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:59.785 00:55:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:59.785 00:55:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:59.785 00:55:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:59.785 00:55:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:59.785 00:55:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:59.785 00:55:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:59.785 00:55:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:59.785 00:55:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:59.785 00:55:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:59.785 00:55:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:59.785 00:55:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:59.785 00:55:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:59.785 00:55:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:59.785 "name": "raid_bdev1", 00:28:59.785 "uuid": "54ee90e4-dd06-489b-870a-c74e298edb0c", 00:28:59.785 "strip_size_kb": 0, 00:28:59.785 "state": "online", 00:28:59.785 "raid_level": "raid1", 00:28:59.785 "superblock": true, 00:28:59.785 "num_base_bdevs": 2, 00:28:59.785 "num_base_bdevs_discovered": 1, 00:28:59.785 "num_base_bdevs_operational": 1, 00:28:59.785 "base_bdevs_list": [ 00:28:59.785 { 00:28:59.785 "name": null, 00:28:59.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:59.785 "is_configured": false, 00:28:59.785 "data_offset": 2048, 00:28:59.785 "data_size": 63488 00:28:59.785 }, 00:28:59.785 { 00:28:59.785 "name": "BaseBdev2", 00:28:59.785 "uuid": "9fe0e716-1b32-5ee8-99ce-8a87a7df1048", 00:28:59.785 "is_configured": true, 00:28:59.785 "data_offset": 2048, 00:28:59.785 "data_size": 63488 00:28:59.785 } 00:28:59.785 ] 00:28:59.785 }' 00:28:59.785 00:55:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:59.785 00:55:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:00.353 00:55:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:00.628 [2024-07-25 00:55:23.050274] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:00.628 [2024-07-25 00:55:23.063859] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca30f0 00:29:00.628 [2024-07-25 00:55:23.065908] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:00.628 00:55:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:29:01.562 00:55:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:01.562 00:55:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:01.562 00:55:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:01.562 00:55:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:01.562 00:55:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:01.562 00:55:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:01.562 00:55:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:01.821 00:55:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:01.821 "name": "raid_bdev1", 00:29:01.821 "uuid": "54ee90e4-dd06-489b-870a-c74e298edb0c", 00:29:01.821 "strip_size_kb": 0, 00:29:01.821 "state": "online", 00:29:01.821 "raid_level": "raid1", 00:29:01.821 "superblock": true, 00:29:01.821 "num_base_bdevs": 2, 00:29:01.821 "num_base_bdevs_discovered": 2, 00:29:01.821 "num_base_bdevs_operational": 2, 00:29:01.821 "process": { 00:29:01.821 "type": "rebuild", 00:29:01.821 "target": "spare", 00:29:01.821 "progress": { 00:29:01.821 "blocks": 24576, 00:29:01.821 "percent": 38 00:29:01.821 } 00:29:01.821 }, 00:29:01.821 
"base_bdevs_list": [ 00:29:01.821 { 00:29:01.821 "name": "spare", 00:29:01.821 "uuid": "32d4c496-d7fe-5ac5-b8d7-aaa302cfbd5e", 00:29:01.821 "is_configured": true, 00:29:01.821 "data_offset": 2048, 00:29:01.821 "data_size": 63488 00:29:01.821 }, 00:29:01.821 { 00:29:01.821 "name": "BaseBdev2", 00:29:01.821 "uuid": "9fe0e716-1b32-5ee8-99ce-8a87a7df1048", 00:29:01.821 "is_configured": true, 00:29:01.821 "data_offset": 2048, 00:29:01.821 "data_size": 63488 00:29:01.821 } 00:29:01.821 ] 00:29:01.821 }' 00:29:01.821 00:55:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:01.821 00:55:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:01.821 00:55:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:01.821 00:55:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:01.821 00:55:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:02.081 [2024-07-25 00:55:24.623511] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:02.081 [2024-07-25 00:55:24.675070] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:02.081 [2024-07-25 00:55:24.675261] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:02.081 [2024-07-25 00:55:24.675309] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:02.081 [2024-07-25 00:55:24.675382] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:02.081 00:55:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:02.081 00:55:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:02.081 00:55:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:02.081 00:55:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:02.081 00:55:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:02.081 00:55:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:02.081 00:55:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:02.081 00:55:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:02.081 00:55:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:02.081 00:55:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:02.081 00:55:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:02.081 00:55:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:02.341 00:55:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:02.341 "name": "raid_bdev1", 00:29:02.341 "uuid": "54ee90e4-dd06-489b-870a-c74e298edb0c", 00:29:02.341 "strip_size_kb": 0, 00:29:02.341 "state": "online", 00:29:02.341 "raid_level": "raid1", 00:29:02.341 "superblock": true, 00:29:02.341 "num_base_bdevs": 2, 
00:29:02.341 "num_base_bdevs_discovered": 1, 00:29:02.341 "num_base_bdevs_operational": 1, 00:29:02.341 "base_bdevs_list": [ 00:29:02.341 { 00:29:02.341 "name": null, 00:29:02.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:02.341 "is_configured": false, 00:29:02.341 "data_offset": 2048, 00:29:02.341 "data_size": 63488 00:29:02.341 }, 00:29:02.341 { 00:29:02.341 "name": "BaseBdev2", 00:29:02.341 "uuid": "9fe0e716-1b32-5ee8-99ce-8a87a7df1048", 00:29:02.341 "is_configured": true, 00:29:02.341 "data_offset": 2048, 00:29:02.341 "data_size": 63488 00:29:02.341 } 00:29:02.341 ] 00:29:02.341 }' 00:29:02.341 00:55:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:02.341 00:55:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:02.909 00:55:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:02.909 00:55:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:02.909 00:55:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:02.909 00:55:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:02.909 00:55:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:02.909 00:55:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:02.909 00:55:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:03.167 00:55:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:03.167 "name": "raid_bdev1", 00:29:03.167 "uuid": "54ee90e4-dd06-489b-870a-c74e298edb0c", 00:29:03.167 "strip_size_kb": 0, 00:29:03.167 "state": "online", 00:29:03.167 "raid_level": "raid1", 00:29:03.167 "superblock": true, 00:29:03.167 "num_base_bdevs": 2, 00:29:03.167 "num_base_bdevs_discovered": 1, 00:29:03.167 "num_base_bdevs_operational": 1, 00:29:03.167 "base_bdevs_list": [ 00:29:03.167 { 00:29:03.167 "name": null, 00:29:03.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:03.167 "is_configured": false, 00:29:03.167 "data_offset": 2048, 00:29:03.167 "data_size": 63488 00:29:03.167 }, 00:29:03.167 { 00:29:03.167 "name": "BaseBdev2", 00:29:03.167 "uuid": "9fe0e716-1b32-5ee8-99ce-8a87a7df1048", 00:29:03.167 "is_configured": true, 00:29:03.167 "data_offset": 2048, 00:29:03.167 "data_size": 63488 00:29:03.167 } 00:29:03.167 ] 00:29:03.167 }' 00:29:03.167 00:55:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:03.426 00:55:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:03.426 00:55:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:03.426 00:55:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:03.426 00:55:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:03.685 [2024-07-25 00:55:26.159818] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:03.685 [2024-07-25 00:55:26.175071] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3290 00:29:03.685 [2024-07-25 
00:55:26.177094] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:03.685 00:55:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:04.621 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:04.621 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:04.621 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:04.621 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:04.621 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:04.621 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:04.621 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:04.880 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:04.880 "name": "raid_bdev1", 00:29:04.880 "uuid": "54ee90e4-dd06-489b-870a-c74e298edb0c", 00:29:04.880 "strip_size_kb": 0, 00:29:04.880 "state": "online", 00:29:04.880 "raid_level": "raid1", 00:29:04.880 "superblock": true, 00:29:04.880 "num_base_bdevs": 2, 00:29:04.880 "num_base_bdevs_discovered": 2, 00:29:04.880 "num_base_bdevs_operational": 2, 00:29:04.880 "process": { 00:29:04.880 "type": "rebuild", 00:29:04.880 "target": "spare", 00:29:04.880 "progress": { 00:29:04.880 "blocks": 24576, 00:29:04.880 "percent": 38 00:29:04.880 } 00:29:04.880 }, 00:29:04.880 "base_bdevs_list": [ 00:29:04.880 { 00:29:04.880 "name": "spare", 00:29:04.880 "uuid": "32d4c496-d7fe-5ac5-b8d7-aaa302cfbd5e", 00:29:04.880 "is_configured": true, 00:29:04.880 "data_offset": 2048, 00:29:04.880 "data_size": 63488 00:29:04.880 }, 00:29:04.880 { 00:29:04.880 "name": "BaseBdev2", 00:29:04.881 "uuid": "9fe0e716-1b32-5ee8-99ce-8a87a7df1048", 00:29:04.881 "is_configured": true, 00:29:04.881 "data_offset": 2048, 00:29:04.881 "data_size": 63488 00:29:04.881 } 00:29:04.881 ] 00:29:04.881 }' 00:29:04.881 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:04.881 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:04.881 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:04.881 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:04.881 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:29:04.881 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:29:04.881 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:29:04.881 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:29:04.881 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:29:04.881 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:29:04.881 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=804 00:29:04.881 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 
00:29:04.881 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:04.881 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:04.881 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:04.881 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:04.881 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:04.881 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:04.881 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:05.140 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:05.140 "name": "raid_bdev1", 00:29:05.140 "uuid": "54ee90e4-dd06-489b-870a-c74e298edb0c", 00:29:05.140 "strip_size_kb": 0, 00:29:05.140 "state": "online", 00:29:05.140 "raid_level": "raid1", 00:29:05.140 "superblock": true, 00:29:05.140 "num_base_bdevs": 2, 00:29:05.140 "num_base_bdevs_discovered": 2, 00:29:05.140 "num_base_bdevs_operational": 2, 00:29:05.140 "process": { 00:29:05.140 "type": "rebuild", 00:29:05.140 "target": "spare", 00:29:05.140 "progress": { 00:29:05.140 "blocks": 30720, 00:29:05.140 "percent": 48 00:29:05.140 } 00:29:05.140 }, 00:29:05.140 "base_bdevs_list": [ 00:29:05.140 { 00:29:05.140 "name": "spare", 00:29:05.140 "uuid": "32d4c496-d7fe-5ac5-b8d7-aaa302cfbd5e", 00:29:05.140 "is_configured": true, 00:29:05.140 "data_offset": 2048, 00:29:05.140 "data_size": 63488 00:29:05.140 }, 00:29:05.140 { 00:29:05.140 "name": "BaseBdev2", 00:29:05.140 "uuid": "9fe0e716-1b32-5ee8-99ce-8a87a7df1048", 00:29:05.140 "is_configured": true, 00:29:05.140 "data_offset": 2048, 00:29:05.140 "data_size": 63488 00:29:05.140 } 00:29:05.140 ] 00:29:05.140 }' 00:29:05.140 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:05.400 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:05.400 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:05.400 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:05.400 00:55:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:06.337 00:55:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:06.337 00:55:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:06.337 00:55:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:06.337 00:55:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:06.337 00:55:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:06.337 00:55:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:06.337 00:55:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:06.337 00:55:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:29:06.596 00:55:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:06.596 "name": "raid_bdev1", 00:29:06.596 "uuid": "54ee90e4-dd06-489b-870a-c74e298edb0c", 00:29:06.596 "strip_size_kb": 0, 00:29:06.596 "state": "online", 00:29:06.596 "raid_level": "raid1", 00:29:06.596 "superblock": true, 00:29:06.596 "num_base_bdevs": 2, 00:29:06.596 "num_base_bdevs_discovered": 2, 00:29:06.596 "num_base_bdevs_operational": 2, 00:29:06.596 "process": { 00:29:06.596 "type": "rebuild", 00:29:06.596 "target": "spare", 00:29:06.596 "progress": { 00:29:06.596 "blocks": 59392, 00:29:06.596 "percent": 93 00:29:06.596 } 00:29:06.596 }, 00:29:06.596 "base_bdevs_list": [ 00:29:06.596 { 00:29:06.596 "name": "spare", 00:29:06.596 "uuid": "32d4c496-d7fe-5ac5-b8d7-aaa302cfbd5e", 00:29:06.596 "is_configured": true, 00:29:06.596 "data_offset": 2048, 00:29:06.596 "data_size": 63488 00:29:06.596 }, 00:29:06.596 { 00:29:06.596 "name": "BaseBdev2", 00:29:06.596 "uuid": "9fe0e716-1b32-5ee8-99ce-8a87a7df1048", 00:29:06.596 "is_configured": true, 00:29:06.596 "data_offset": 2048, 00:29:06.596 "data_size": 63488 00:29:06.596 } 00:29:06.596 ] 00:29:06.596 }' 00:29:06.596 00:55:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:06.596 00:55:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:06.596 00:55:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:06.596 00:55:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:06.596 00:55:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:06.856 [2024-07-25 00:55:29.294629] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:06.856 [2024-07-25 00:55:29.294835] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:06.856 [2024-07-25 00:55:29.295042] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:07.792 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:07.792 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:07.792 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:07.792 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:07.792 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:07.792 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:07.792 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:07.792 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:08.051 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:08.051 "name": "raid_bdev1", 00:29:08.051 "uuid": "54ee90e4-dd06-489b-870a-c74e298edb0c", 00:29:08.051 "strip_size_kb": 0, 00:29:08.051 "state": "online", 00:29:08.051 "raid_level": "raid1", 00:29:08.051 "superblock": true, 00:29:08.051 "num_base_bdevs": 2, 00:29:08.051 "num_base_bdevs_discovered": 2, 00:29:08.051 "num_base_bdevs_operational": 2, 
00:29:08.051 "base_bdevs_list": [ 00:29:08.051 { 00:29:08.051 "name": "spare", 00:29:08.051 "uuid": "32d4c496-d7fe-5ac5-b8d7-aaa302cfbd5e", 00:29:08.051 "is_configured": true, 00:29:08.051 "data_offset": 2048, 00:29:08.051 "data_size": 63488 00:29:08.051 }, 00:29:08.051 { 00:29:08.051 "name": "BaseBdev2", 00:29:08.051 "uuid": "9fe0e716-1b32-5ee8-99ce-8a87a7df1048", 00:29:08.051 "is_configured": true, 00:29:08.051 "data_offset": 2048, 00:29:08.051 "data_size": 63488 00:29:08.051 } 00:29:08.051 ] 00:29:08.051 }' 00:29:08.051 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:08.051 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:08.051 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:08.051 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:29:08.051 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:29:08.051 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:08.051 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:08.051 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:08.051 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:08.051 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:08.051 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:08.051 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:08.311 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:08.311 "name": "raid_bdev1", 00:29:08.311 "uuid": "54ee90e4-dd06-489b-870a-c74e298edb0c", 00:29:08.311 "strip_size_kb": 0, 00:29:08.311 "state": "online", 00:29:08.311 "raid_level": "raid1", 00:29:08.311 "superblock": true, 00:29:08.311 "num_base_bdevs": 2, 00:29:08.311 "num_base_bdevs_discovered": 2, 00:29:08.311 "num_base_bdevs_operational": 2, 00:29:08.311 "base_bdevs_list": [ 00:29:08.311 { 00:29:08.311 "name": "spare", 00:29:08.311 "uuid": "32d4c496-d7fe-5ac5-b8d7-aaa302cfbd5e", 00:29:08.311 "is_configured": true, 00:29:08.311 "data_offset": 2048, 00:29:08.311 "data_size": 63488 00:29:08.311 }, 00:29:08.311 { 00:29:08.311 "name": "BaseBdev2", 00:29:08.311 "uuid": "9fe0e716-1b32-5ee8-99ce-8a87a7df1048", 00:29:08.311 "is_configured": true, 00:29:08.311 "data_offset": 2048, 00:29:08.311 "data_size": 63488 00:29:08.311 } 00:29:08.311 ] 00:29:08.311 }' 00:29:08.311 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:08.311 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:08.311 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:08.311 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:08.311 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:08.311 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # 
local raid_bdev_name=raid_bdev1 00:29:08.311 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:08.311 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:08.311 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:08.311 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:08.311 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:08.311 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:08.311 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:08.311 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:08.311 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:08.311 00:55:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:08.571 00:55:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:08.571 "name": "raid_bdev1", 00:29:08.571 "uuid": "54ee90e4-dd06-489b-870a-c74e298edb0c", 00:29:08.571 "strip_size_kb": 0, 00:29:08.571 "state": "online", 00:29:08.571 "raid_level": "raid1", 00:29:08.571 "superblock": true, 00:29:08.571 "num_base_bdevs": 2, 00:29:08.571 "num_base_bdevs_discovered": 2, 00:29:08.571 "num_base_bdevs_operational": 2, 00:29:08.571 "base_bdevs_list": [ 00:29:08.571 { 00:29:08.571 "name": "spare", 00:29:08.571 "uuid": "32d4c496-d7fe-5ac5-b8d7-aaa302cfbd5e", 00:29:08.571 "is_configured": true, 00:29:08.571 "data_offset": 2048, 00:29:08.571 "data_size": 63488 00:29:08.571 }, 00:29:08.571 { 00:29:08.571 "name": "BaseBdev2", 00:29:08.571 "uuid": "9fe0e716-1b32-5ee8-99ce-8a87a7df1048", 00:29:08.571 "is_configured": true, 00:29:08.571 "data_offset": 2048, 00:29:08.571 "data_size": 63488 00:29:08.571 } 00:29:08.571 ] 00:29:08.571 }' 00:29:08.571 00:55:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:08.571 00:55:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:09.138 00:55:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:09.396 [2024-07-25 00:55:31.889687] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:09.396 [2024-07-25 00:55:31.889918] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:09.396 [2024-07-25 00:55:31.890111] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:09.396 [2024-07-25 00:55:31.890284] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:09.396 [2024-07-25 00:55:31.890367] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:29:09.396 00:55:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:29:09.396 00:55:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:09.655 00:55:32 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:29:09.655 00:55:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:29:09.655 00:55:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:29:09.655 00:55:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:29:09.655 00:55:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:09.655 00:55:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:29:09.655 00:55:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:09.655 00:55:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:09.655 00:55:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:09.655 00:55:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:29:09.655 00:55:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:09.655 00:55:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:09.655 00:55:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:29:09.936 /dev/nbd0 00:29:09.936 00:55:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:09.936 00:55:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:09.936 00:55:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:09.936 00:55:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:29:09.936 00:55:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:09.936 00:55:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:09.936 00:55:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:09.936 00:55:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:29:09.936 00:55:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:09.936 00:55:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:09.936 00:55:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:09.936 1+0 records in 00:29:09.936 1+0 records out 00:29:09.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494796 s, 8.3 MB/s 00:29:09.936 00:55:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:09.936 00:55:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:29:09.936 00:55:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:09.936 00:55:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:09.936 00:55:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:29:09.936 00:55:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:09.936 00:55:32 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:09.936 00:55:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:29:10.202 /dev/nbd1 00:29:10.202 00:55:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:10.202 00:55:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:10.202 00:55:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:29:10.202 00:55:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:29:10.202 00:55:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:10.202 00:55:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:10.202 00:55:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:29:10.202 00:55:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:29:10.202 00:55:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:10.202 00:55:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:10.202 00:55:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:10.202 1+0 records in 00:29:10.202 1+0 records out 00:29:10.202 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000578734 s, 7.1 MB/s 00:29:10.202 00:55:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:10.202 00:55:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:29:10.202 00:55:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:10.202 00:55:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:10.202 00:55:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:29:10.202 00:55:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:10.202 00:55:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:10.202 00:55:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:29:10.461 00:55:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:29:10.461 00:55:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:10.461 00:55:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:10.461 00:55:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:10.461 00:55:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:29:10.461 00:55:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:10.461 00:55:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:10.720 00:55:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:10.720 00:55:33 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:10.720 00:55:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:10.720 00:55:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:10.720 00:55:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:10.720 00:55:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:10.720 00:55:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:29:10.720 00:55:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:29:10.720 00:55:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:10.720 00:55:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:10.720 00:55:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:10.978 00:55:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:10.978 00:55:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:10.978 00:55:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:10.978 00:55:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:10.978 00:55:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:10.978 00:55:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:29:10.978 00:55:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:29:10.978 00:55:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:29:10.978 00:55:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:11.237 00:55:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:11.237 [2024-07-25 00:55:33.885317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:11.237 [2024-07-25 00:55:33.885398] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:11.237 [2024-07-25 00:55:33.885468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:29:11.237 [2024-07-25 00:55:33.885488] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:11.237 [2024-07-25 00:55:33.887762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:11.237 [2024-07-25 00:55:33.887814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:11.237 [2024-07-25 00:55:33.887939] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:11.237 [2024-07-25 00:55:33.888027] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:11.237 [2024-07-25 00:55:33.888161] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:11.496 spare 00:29:11.496 00:55:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:11.496 00:55:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:11.496 00:55:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:11.496 00:55:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:11.496 00:55:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:11.496 00:55:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:11.496 00:55:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:11.496 00:55:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:11.496 00:55:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:11.496 00:55:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:11.496 00:55:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:11.496 00:55:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:11.496 [2024-07-25 00:55:33.988250] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:29:11.496 [2024-07-25 00:55:33.988277] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:11.496 [2024-07-25 00:55:33.988432] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:29:11.496 [2024-07-25 00:55:33.988772] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:29:11.496 [2024-07-25 00:55:33.988792] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:29:11.496 [2024-07-25 00:55:33.988927] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:11.756 00:55:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:11.756 "name": "raid_bdev1", 00:29:11.756 "uuid": "54ee90e4-dd06-489b-870a-c74e298edb0c", 00:29:11.756 "strip_size_kb": 0, 00:29:11.756 "state": "online", 00:29:11.756 "raid_level": "raid1", 00:29:11.756 "superblock": true, 00:29:11.756 "num_base_bdevs": 2, 00:29:11.756 "num_base_bdevs_discovered": 2, 00:29:11.756 "num_base_bdevs_operational": 2, 00:29:11.756 "base_bdevs_list": [ 00:29:11.756 { 00:29:11.756 "name": "spare", 00:29:11.756 "uuid": "32d4c496-d7fe-5ac5-b8d7-aaa302cfbd5e", 00:29:11.756 "is_configured": true, 00:29:11.756 "data_offset": 2048, 00:29:11.756 "data_size": 63488 00:29:11.756 }, 00:29:11.756 { 00:29:11.756 "name": "BaseBdev2", 00:29:11.756 "uuid": "9fe0e716-1b32-5ee8-99ce-8a87a7df1048", 00:29:11.756 "is_configured": true, 00:29:11.756 "data_offset": 2048, 00:29:11.756 "data_size": 63488 00:29:11.756 } 00:29:11.756 ] 00:29:11.756 }' 00:29:11.756 00:55:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:11.756 00:55:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:12.323 00:55:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:12.323 00:55:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:12.323 00:55:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:12.323 00:55:34 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:12.323 00:55:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:12.323 00:55:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:12.323 00:55:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:12.323 00:55:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:12.323 "name": "raid_bdev1", 00:29:12.323 "uuid": "54ee90e4-dd06-489b-870a-c74e298edb0c", 00:29:12.323 "strip_size_kb": 0, 00:29:12.323 "state": "online", 00:29:12.323 "raid_level": "raid1", 00:29:12.323 "superblock": true, 00:29:12.323 "num_base_bdevs": 2, 00:29:12.323 "num_base_bdevs_discovered": 2, 00:29:12.323 "num_base_bdevs_operational": 2, 00:29:12.323 "base_bdevs_list": [ 00:29:12.323 { 00:29:12.323 "name": "spare", 00:29:12.323 "uuid": "32d4c496-d7fe-5ac5-b8d7-aaa302cfbd5e", 00:29:12.323 "is_configured": true, 00:29:12.323 "data_offset": 2048, 00:29:12.323 "data_size": 63488 00:29:12.323 }, 00:29:12.323 { 00:29:12.323 "name": "BaseBdev2", 00:29:12.323 "uuid": "9fe0e716-1b32-5ee8-99ce-8a87a7df1048", 00:29:12.323 "is_configured": true, 00:29:12.323 "data_offset": 2048, 00:29:12.323 "data_size": 63488 00:29:12.323 } 00:29:12.323 ] 00:29:12.323 }' 00:29:12.323 00:55:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:12.323 00:55:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:12.323 00:55:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:12.582 00:55:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:12.582 00:55:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:12.582 00:55:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:29:12.841 00:55:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:29:12.841 00:55:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:13.099 [2024-07-25 00:55:35.508951] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:13.099 00:55:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:13.099 00:55:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:13.099 00:55:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:13.099 00:55:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:13.099 00:55:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:13.099 00:55:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:13.099 00:55:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:13.099 00:55:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:13.099 00:55:35 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:13.099 00:55:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:13.099 00:55:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:13.099 00:55:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:13.099 00:55:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:13.099 "name": "raid_bdev1", 00:29:13.099 "uuid": "54ee90e4-dd06-489b-870a-c74e298edb0c", 00:29:13.099 "strip_size_kb": 0, 00:29:13.099 "state": "online", 00:29:13.099 "raid_level": "raid1", 00:29:13.099 "superblock": true, 00:29:13.099 "num_base_bdevs": 2, 00:29:13.099 "num_base_bdevs_discovered": 1, 00:29:13.099 "num_base_bdevs_operational": 1, 00:29:13.099 "base_bdevs_list": [ 00:29:13.099 { 00:29:13.099 "name": null, 00:29:13.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:13.099 "is_configured": false, 00:29:13.099 "data_offset": 2048, 00:29:13.099 "data_size": 63488 00:29:13.099 }, 00:29:13.099 { 00:29:13.099 "name": "BaseBdev2", 00:29:13.099 "uuid": "9fe0e716-1b32-5ee8-99ce-8a87a7df1048", 00:29:13.099 "is_configured": true, 00:29:13.099 "data_offset": 2048, 00:29:13.099 "data_size": 63488 00:29:13.099 } 00:29:13.099 ] 00:29:13.099 }' 00:29:13.099 00:55:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:13.099 00:55:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:13.665 00:55:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:13.924 [2024-07-25 00:55:36.557172] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:13.924 [2024-07-25 00:55:36.557383] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:29:13.924 [2024-07-25 00:55:36.557396] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:29:13.924 [2024-07-25 00:55:36.557450] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:13.924 [2024-07-25 00:55:36.571740] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:29:13.924 [2024-07-25 00:55:36.573663] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:14.183 00:55:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:29:15.120 00:55:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:15.120 00:55:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:15.120 00:55:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:15.120 00:55:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:15.120 00:55:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:15.120 00:55:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:15.120 00:55:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:15.378 00:55:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:15.378 "name": "raid_bdev1", 00:29:15.378 "uuid": "54ee90e4-dd06-489b-870a-c74e298edb0c", 00:29:15.378 "strip_size_kb": 0, 00:29:15.378 "state": "online", 00:29:15.378 "raid_level": "raid1", 00:29:15.378 "superblock": true, 00:29:15.378 "num_base_bdevs": 2, 00:29:15.378 "num_base_bdevs_discovered": 2, 00:29:15.378 "num_base_bdevs_operational": 2, 00:29:15.378 "process": { 00:29:15.378 "type": "rebuild", 00:29:15.378 "target": "spare", 00:29:15.378 "progress": { 00:29:15.378 "blocks": 24576, 00:29:15.378 "percent": 38 00:29:15.378 } 00:29:15.378 }, 00:29:15.378 "base_bdevs_list": [ 00:29:15.378 { 00:29:15.378 "name": "spare", 00:29:15.378 "uuid": "32d4c496-d7fe-5ac5-b8d7-aaa302cfbd5e", 00:29:15.378 "is_configured": true, 00:29:15.378 "data_offset": 2048, 00:29:15.378 "data_size": 63488 00:29:15.378 }, 00:29:15.378 { 00:29:15.378 "name": "BaseBdev2", 00:29:15.378 "uuid": "9fe0e716-1b32-5ee8-99ce-8a87a7df1048", 00:29:15.378 "is_configured": true, 00:29:15.378 "data_offset": 2048, 00:29:15.378 "data_size": 63488 00:29:15.378 } 00:29:15.378 ] 00:29:15.378 }' 00:29:15.378 00:55:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:15.378 00:55:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:15.378 00:55:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:15.378 00:55:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:15.378 00:55:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:15.637 [2024-07-25 00:55:38.152097] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:15.637 [2024-07-25 00:55:38.182708] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:15.637 [2024-07-25 00:55:38.182793] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:15.637 
[2024-07-25 00:55:38.182808] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:15.637 [2024-07-25 00:55:38.182816] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:15.637 00:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:15.637 00:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:15.637 00:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:15.637 00:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:15.637 00:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:15.637 00:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:15.637 00:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:15.637 00:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:15.637 00:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:15.637 00:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:15.637 00:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:15.637 00:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:15.896 00:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:15.896 "name": "raid_bdev1", 00:29:15.896 "uuid": "54ee90e4-dd06-489b-870a-c74e298edb0c", 00:29:15.896 "strip_size_kb": 0, 00:29:15.896 "state": "online", 00:29:15.896 "raid_level": "raid1", 00:29:15.896 "superblock": true, 00:29:15.896 "num_base_bdevs": 2, 00:29:15.896 "num_base_bdevs_discovered": 1, 00:29:15.896 "num_base_bdevs_operational": 1, 00:29:15.896 "base_bdevs_list": [ 00:29:15.896 { 00:29:15.896 "name": null, 00:29:15.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:15.896 "is_configured": false, 00:29:15.896 "data_offset": 2048, 00:29:15.896 "data_size": 63488 00:29:15.896 }, 00:29:15.896 { 00:29:15.896 "name": "BaseBdev2", 00:29:15.896 "uuid": "9fe0e716-1b32-5ee8-99ce-8a87a7df1048", 00:29:15.896 "is_configured": true, 00:29:15.896 "data_offset": 2048, 00:29:15.896 "data_size": 63488 00:29:15.896 } 00:29:15.896 ] 00:29:15.896 }' 00:29:15.896 00:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:15.896 00:55:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:16.463 00:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:16.722 [2024-07-25 00:55:39.186305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:16.722 [2024-07-25 00:55:39.186413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:16.722 [2024-07-25 00:55:39.186465] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:29:16.722 [2024-07-25 00:55:39.186497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:16.722 [2024-07-25 00:55:39.186985] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:16.722 [2024-07-25 00:55:39.187029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:16.722 [2024-07-25 00:55:39.187149] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:16.722 [2024-07-25 00:55:39.187161] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:29:16.722 [2024-07-25 00:55:39.187169] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:29:16.722 [2024-07-25 00:55:39.187203] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:16.722 [2024-07-25 00:55:39.201304] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2090 00:29:16.722 spare 00:29:16.722 [2024-07-25 00:55:39.203185] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:16.722 00:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:29:17.659 00:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:17.659 00:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:17.659 00:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:17.659 00:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:17.659 00:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:17.659 00:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:17.659 00:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:17.918 00:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:17.918 "name": "raid_bdev1", 00:29:17.918 "uuid": "54ee90e4-dd06-489b-870a-c74e298edb0c", 00:29:17.918 "strip_size_kb": 0, 00:29:17.918 "state": "online", 00:29:17.918 "raid_level": "raid1", 00:29:17.918 "superblock": true, 00:29:17.918 "num_base_bdevs": 2, 00:29:17.918 "num_base_bdevs_discovered": 2, 00:29:17.918 "num_base_bdevs_operational": 2, 00:29:17.918 "process": { 00:29:17.918 "type": "rebuild", 00:29:17.918 "target": "spare", 00:29:17.918 "progress": { 00:29:17.918 "blocks": 24576, 00:29:17.918 "percent": 38 00:29:17.918 } 00:29:17.918 }, 00:29:17.918 "base_bdevs_list": [ 00:29:17.918 { 00:29:17.918 "name": "spare", 00:29:17.918 "uuid": "32d4c496-d7fe-5ac5-b8d7-aaa302cfbd5e", 00:29:17.918 "is_configured": true, 00:29:17.918 "data_offset": 2048, 00:29:17.918 "data_size": 63488 00:29:17.918 }, 00:29:17.918 { 00:29:17.918 "name": "BaseBdev2", 00:29:17.918 "uuid": "9fe0e716-1b32-5ee8-99ce-8a87a7df1048", 00:29:17.918 "is_configured": true, 00:29:17.918 "data_offset": 2048, 00:29:17.918 "data_size": 63488 00:29:17.918 } 00:29:17.918 ] 00:29:17.918 }' 00:29:17.918 00:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:17.918 00:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:17.918 00:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:17.918 
00:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:17.918 00:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:18.178 [2024-07-25 00:55:40.769107] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:18.178 [2024-07-25 00:55:40.812088] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:18.178 [2024-07-25 00:55:40.812171] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:18.178 [2024-07-25 00:55:40.812186] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:18.178 [2024-07-25 00:55:40.812194] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:18.438 00:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:18.438 00:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:18.438 00:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:18.438 00:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:18.438 00:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:18.438 00:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:18.438 00:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:18.438 00:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:18.438 00:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:18.438 00:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:18.438 00:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:18.438 00:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:18.697 00:55:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:18.697 "name": "raid_bdev1", 00:29:18.697 "uuid": "54ee90e4-dd06-489b-870a-c74e298edb0c", 00:29:18.697 "strip_size_kb": 0, 00:29:18.697 "state": "online", 00:29:18.697 "raid_level": "raid1", 00:29:18.697 "superblock": true, 00:29:18.697 "num_base_bdevs": 2, 00:29:18.697 "num_base_bdevs_discovered": 1, 00:29:18.697 "num_base_bdevs_operational": 1, 00:29:18.697 "base_bdevs_list": [ 00:29:18.697 { 00:29:18.697 "name": null, 00:29:18.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:18.697 "is_configured": false, 00:29:18.697 "data_offset": 2048, 00:29:18.697 "data_size": 63488 00:29:18.697 }, 00:29:18.697 { 00:29:18.697 "name": "BaseBdev2", 00:29:18.697 "uuid": "9fe0e716-1b32-5ee8-99ce-8a87a7df1048", 00:29:18.697 "is_configured": true, 00:29:18.697 "data_offset": 2048, 00:29:18.697 "data_size": 63488 00:29:18.697 } 00:29:18.697 ] 00:29:18.697 }' 00:29:18.697 00:55:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:18.697 00:55:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:19.284 00:55:41 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:19.284 00:55:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:19.284 00:55:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:19.284 00:55:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:19.284 00:55:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:19.284 00:55:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:19.284 00:55:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:19.284 00:55:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:19.284 "name": "raid_bdev1", 00:29:19.284 "uuid": "54ee90e4-dd06-489b-870a-c74e298edb0c", 00:29:19.284 "strip_size_kb": 0, 00:29:19.284 "state": "online", 00:29:19.284 "raid_level": "raid1", 00:29:19.284 "superblock": true, 00:29:19.284 "num_base_bdevs": 2, 00:29:19.284 "num_base_bdevs_discovered": 1, 00:29:19.284 "num_base_bdevs_operational": 1, 00:29:19.284 "base_bdevs_list": [ 00:29:19.284 { 00:29:19.284 "name": null, 00:29:19.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:19.284 "is_configured": false, 00:29:19.284 "data_offset": 2048, 00:29:19.284 "data_size": 63488 00:29:19.284 }, 00:29:19.284 { 00:29:19.284 "name": "BaseBdev2", 00:29:19.284 "uuid": "9fe0e716-1b32-5ee8-99ce-8a87a7df1048", 00:29:19.284 "is_configured": true, 00:29:19.284 "data_offset": 2048, 00:29:19.284 "data_size": 63488 00:29:19.284 } 00:29:19.284 ] 00:29:19.284 }' 00:29:19.284 00:55:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:19.284 00:55:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:19.284 00:55:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:19.284 00:55:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:19.284 00:55:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:29:19.852 00:55:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:19.852 [2024-07-25 00:55:42.468507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:19.852 [2024-07-25 00:55:42.468581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:19.852 [2024-07-25 00:55:42.468631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:29:19.852 [2024-07-25 00:55:42.468654] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:19.852 [2024-07-25 00:55:42.469077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:19.852 [2024-07-25 00:55:42.469114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:19.852 [2024-07-25 00:55:42.469248] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:29:19.852 [2024-07-25 00:55:42.469261] 
bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:19.852 [2024-07-25 00:55:42.469267] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:19.852 BaseBdev1 00:29:19.852 00:55:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:29:21.230 00:55:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:21.230 00:55:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:21.230 00:55:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:21.230 00:55:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:21.230 00:55:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:21.230 00:55:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:21.230 00:55:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:21.230 00:55:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:21.230 00:55:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:21.230 00:55:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:21.230 00:55:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:21.230 00:55:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:21.230 00:55:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:21.230 "name": "raid_bdev1", 00:29:21.230 "uuid": "54ee90e4-dd06-489b-870a-c74e298edb0c", 00:29:21.230 "strip_size_kb": 0, 00:29:21.230 "state": "online", 00:29:21.230 "raid_level": "raid1", 00:29:21.230 "superblock": true, 00:29:21.230 "num_base_bdevs": 2, 00:29:21.230 "num_base_bdevs_discovered": 1, 00:29:21.230 "num_base_bdevs_operational": 1, 00:29:21.230 "base_bdevs_list": [ 00:29:21.230 { 00:29:21.230 "name": null, 00:29:21.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:21.230 "is_configured": false, 00:29:21.230 "data_offset": 2048, 00:29:21.230 "data_size": 63488 00:29:21.230 }, 00:29:21.230 { 00:29:21.230 "name": "BaseBdev2", 00:29:21.230 "uuid": "9fe0e716-1b32-5ee8-99ce-8a87a7df1048", 00:29:21.230 "is_configured": true, 00:29:21.230 "data_offset": 2048, 00:29:21.230 "data_size": 63488 00:29:21.230 } 00:29:21.230 ] 00:29:21.230 }' 00:29:21.230 00:55:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:21.230 00:55:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:21.798 00:55:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:21.798 00:55:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:21.798 00:55:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:21.798 00:55:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:21.798 00:55:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:21.798 00:55:44 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:21.798 00:55:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:22.057 00:55:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:22.057 "name": "raid_bdev1", 00:29:22.057 "uuid": "54ee90e4-dd06-489b-870a-c74e298edb0c", 00:29:22.057 "strip_size_kb": 0, 00:29:22.057 "state": "online", 00:29:22.057 "raid_level": "raid1", 00:29:22.057 "superblock": true, 00:29:22.057 "num_base_bdevs": 2, 00:29:22.057 "num_base_bdevs_discovered": 1, 00:29:22.057 "num_base_bdevs_operational": 1, 00:29:22.057 "base_bdevs_list": [ 00:29:22.057 { 00:29:22.057 "name": null, 00:29:22.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:22.057 "is_configured": false, 00:29:22.057 "data_offset": 2048, 00:29:22.057 "data_size": 63488 00:29:22.057 }, 00:29:22.057 { 00:29:22.057 "name": "BaseBdev2", 00:29:22.057 "uuid": "9fe0e716-1b32-5ee8-99ce-8a87a7df1048", 00:29:22.057 "is_configured": true, 00:29:22.057 "data_offset": 2048, 00:29:22.057 "data_size": 63488 00:29:22.057 } 00:29:22.057 ] 00:29:22.057 }' 00:29:22.057 00:55:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:22.057 00:55:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:22.057 00:55:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:22.317 00:55:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:22.317 00:55:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:22.317 00:55:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:29:22.317 00:55:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:22.317 00:55:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:22.317 00:55:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:22.317 00:55:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:22.317 00:55:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:22.317 00:55:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:22.317 00:55:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:22.317 00:55:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:22.317 00:55:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:29:22.317 00:55:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:22.317 [2024-07-25 00:55:44.928644] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:22.317 [2024-07-25 00:55:44.928802] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:22.317 [2024-07-25 00:55:44.928813] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:22.317 request: 00:29:22.317 { 00:29:22.317 "base_bdev": "BaseBdev1", 00:29:22.317 "raid_bdev": "raid_bdev1", 00:29:22.317 "method": "bdev_raid_add_base_bdev", 00:29:22.317 "req_id": 1 00:29:22.317 } 00:29:22.317 Got JSON-RPC error response 00:29:22.317 response: 00:29:22.317 { 00:29:22.317 "code": -22, 00:29:22.317 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:29:22.317 } 00:29:22.317 00:55:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:29:22.317 00:55:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:22.317 00:55:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:22.317 00:55:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:22.317 00:55:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:29:23.693 00:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:23.693 00:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:23.693 00:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:23.693 00:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:23.693 00:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:23.693 00:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:23.694 00:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:23.694 00:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:23.694 00:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:23.694 00:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:23.694 00:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:23.694 00:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:23.694 00:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:23.694 "name": "raid_bdev1", 00:29:23.694 "uuid": "54ee90e4-dd06-489b-870a-c74e298edb0c", 00:29:23.694 "strip_size_kb": 0, 00:29:23.694 "state": "online", 00:29:23.694 "raid_level": "raid1", 00:29:23.694 "superblock": true, 00:29:23.694 "num_base_bdevs": 2, 00:29:23.694 "num_base_bdevs_discovered": 1, 00:29:23.694 "num_base_bdevs_operational": 1, 00:29:23.694 "base_bdevs_list": [ 00:29:23.694 { 00:29:23.694 "name": null, 00:29:23.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:23.694 "is_configured": false, 00:29:23.694 "data_offset": 2048, 00:29:23.694 "data_size": 63488 00:29:23.694 }, 00:29:23.694 { 00:29:23.694 "name": "BaseBdev2", 00:29:23.694 "uuid": "9fe0e716-1b32-5ee8-99ce-8a87a7df1048", 
00:29:23.694 "is_configured": true, 00:29:23.694 "data_offset": 2048, 00:29:23.694 "data_size": 63488 00:29:23.694 } 00:29:23.694 ] 00:29:23.694 }' 00:29:23.694 00:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:23.694 00:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:24.262 00:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:24.262 00:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:24.262 00:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:24.262 00:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:24.262 00:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:24.262 00:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:24.262 00:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:24.522 00:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:24.522 "name": "raid_bdev1", 00:29:24.522 "uuid": "54ee90e4-dd06-489b-870a-c74e298edb0c", 00:29:24.522 "strip_size_kb": 0, 00:29:24.522 "state": "online", 00:29:24.522 "raid_level": "raid1", 00:29:24.522 "superblock": true, 00:29:24.522 "num_base_bdevs": 2, 00:29:24.522 "num_base_bdevs_discovered": 1, 00:29:24.522 "num_base_bdevs_operational": 1, 00:29:24.522 "base_bdevs_list": [ 00:29:24.522 { 00:29:24.522 "name": null, 00:29:24.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:24.522 "is_configured": false, 00:29:24.522 "data_offset": 2048, 00:29:24.522 "data_size": 63488 00:29:24.522 }, 00:29:24.522 { 00:29:24.522 "name": "BaseBdev2", 00:29:24.522 "uuid": "9fe0e716-1b32-5ee8-99ce-8a87a7df1048", 00:29:24.522 "is_configured": true, 00:29:24.522 "data_offset": 2048, 00:29:24.522 "data_size": 63488 00:29:24.522 } 00:29:24.522 ] 00:29:24.522 }' 00:29:24.522 00:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:24.522 00:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:24.522 00:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:24.522 00:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:24.522 00:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 145156 00:29:24.522 00:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 145156 ']' 00:29:24.522 00:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 145156 00:29:24.522 00:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:29:24.522 00:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:24.522 00:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 145156 00:29:24.522 killing process with pid 145156 00:29:24.522 Received shutdown signal, test time was about 60.000000 seconds 00:29:24.522 00:29:24.522 Latency(us) 00:29:24.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.522 
=================================================================================================================== 00:29:24.522 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:24.523 00:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:24.523 00:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:24.523 00:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 145156' 00:29:24.523 00:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 145156 00:29:24.523 00:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 145156 00:29:24.523 [2024-07-25 00:55:47.098910] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:24.523 [2024-07-25 00:55:47.099019] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:24.523 [2024-07-25 00:55:47.099068] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:24.523 [2024-07-25 00:55:47.099082] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:29:24.781 [2024-07-25 00:55:47.371080] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:26.160 00:55:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:29:26.160 00:29:26.160 real 0m35.832s 00:29:26.160 user 0m52.242s 00:29:26.160 sys 0m5.842s 00:29:26.160 00:55:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:26.160 ************************************ 00:29:26.160 END TEST raid_rebuild_test_sb 00:29:26.160 ************************************ 00:29:26.160 00:55:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:26.160 00:55:48 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:29:26.160 00:55:48 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:29:26.160 00:55:48 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:26.160 00:55:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:26.160 ************************************ 00:29:26.160 START TEST raid_rebuild_test_io 00:29:26.160 ************************************ 00:29:26.160 00:55:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 false true true 00:29:26.160 00:55:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:29:26.160 00:55:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:29:26.160 00:55:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:29:26.160 00:55:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:29:26.160 00:55:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:29:26.160 00:55:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:29:26.160 00:55:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:26.160 00:55:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:29:26.160 00:55:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:26.160 00:55:48 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:26.160 00:55:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:29:26.160 00:55:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:26.160 00:55:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:26.160 00:55:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:29:26.160 00:55:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:29:26.160 00:55:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:29:26.160 00:55:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:29:26.161 00:55:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:29:26.161 00:55:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:29:26.161 00:55:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:29:26.161 00:55:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:29:26.161 00:55:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:29:26.161 00:55:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:29:26.161 00:55:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=146092 00:29:26.161 00:55:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 146092 /var/tmp/spdk-raid.sock 00:29:26.161 00:55:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:26.161 00:55:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@829 -- # '[' -z 146092 ']' 00:29:26.161 00:55:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:26.161 00:55:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:26.161 00:55:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:26.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:26.161 00:55:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:26.161 00:55:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:26.161 [2024-07-25 00:55:48.718057] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:29:26.161 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:26.161 Zero copy mechanism will not be used. 
00:29:26.161 [2024-07-25 00:55:48.718284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146092 ] 00:29:26.419 [2024-07-25 00:55:48.898459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.677 [2024-07-25 00:55:49.081174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.677 [2024-07-25 00:55:49.271950] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:27.245 00:55:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:27.245 00:55:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # return 0 00:29:27.245 00:55:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:27.245 00:55:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:27.245 BaseBdev1_malloc 00:29:27.245 00:55:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:27.504 [2024-07-25 00:55:50.105481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:27.504 [2024-07-25 00:55:50.105577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:27.504 [2024-07-25 00:55:50.105615] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:29:27.504 [2024-07-25 00:55:50.105635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:27.504 [2024-07-25 00:55:50.108007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:27.504 [2024-07-25 00:55:50.108069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:27.504 BaseBdev1 00:29:27.504 00:55:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:27.504 00:55:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:27.797 BaseBdev2_malloc 00:29:27.797 00:55:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:28.055 [2024-07-25 00:55:50.603817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:28.055 [2024-07-25 00:55:50.603919] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:28.055 [2024-07-25 00:55:50.603956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:29:28.055 [2024-07-25 00:55:50.603975] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:28.055 [2024-07-25 00:55:50.606181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:28.055 [2024-07-25 00:55:50.606245] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:28.055 BaseBdev2 00:29:28.055 00:55:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:28.314 spare_malloc 00:29:28.314 00:55:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:28.579 spare_delay 00:29:28.579 00:55:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:28.579 [2024-07-25 00:55:51.213814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:28.579 [2024-07-25 00:55:51.213908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:28.579 [2024-07-25 00:55:51.213944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:29:28.579 [2024-07-25 00:55:51.213973] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:28.579 [2024-07-25 00:55:51.216231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:28.579 [2024-07-25 00:55:51.216301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:28.579 spare 00:29:28.837 00:55:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:29:28.837 [2024-07-25 00:55:51.393896] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:28.837 [2024-07-25 00:55:51.395841] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:28.837 [2024-07-25 00:55:51.396072] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:29:28.837 [2024-07-25 00:55:51.396115] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:29:28.837 [2024-07-25 00:55:51.396324] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:29:28.837 [2024-07-25 00:55:51.396732] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:29:28.837 [2024-07-25 00:55:51.396842] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:29:28.837 [2024-07-25 00:55:51.397070] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:28.837 00:55:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:28.837 00:55:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:28.837 00:55:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:28.837 00:55:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:28.837 00:55:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:28.837 00:55:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:28.837 00:55:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:28.837 00:55:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:28.837 00:55:51 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:28.837 00:55:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:28.837 00:55:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:28.837 00:55:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:29.096 00:55:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:29.096 "name": "raid_bdev1", 00:29:29.096 "uuid": "66fd3693-cfe9-4a26-89ed-6494ba0b0c37", 00:29:29.096 "strip_size_kb": 0, 00:29:29.096 "state": "online", 00:29:29.096 "raid_level": "raid1", 00:29:29.096 "superblock": false, 00:29:29.096 "num_base_bdevs": 2, 00:29:29.096 "num_base_bdevs_discovered": 2, 00:29:29.096 "num_base_bdevs_operational": 2, 00:29:29.096 "base_bdevs_list": [ 00:29:29.096 { 00:29:29.096 "name": "BaseBdev1", 00:29:29.096 "uuid": "10b1c450-ef42-5640-a510-c1d7d04968a1", 00:29:29.096 "is_configured": true, 00:29:29.096 "data_offset": 0, 00:29:29.096 "data_size": 65536 00:29:29.096 }, 00:29:29.096 { 00:29:29.096 "name": "BaseBdev2", 00:29:29.096 "uuid": "3abd3d48-f5fe-5489-a21e-cada6c0468f3", 00:29:29.096 "is_configured": true, 00:29:29.096 "data_offset": 0, 00:29:29.096 "data_size": 65536 00:29:29.096 } 00:29:29.096 ] 00:29:29.096 }' 00:29:29.096 00:55:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:29.096 00:55:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:29.664 00:55:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:29.664 00:55:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:29:29.922 [2024-07-25 00:55:52.414283] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:29.922 00:55:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:29:29.922 00:55:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:29.922 00:55:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:30.181 00:55:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:29:30.181 00:55:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:29:30.181 00:55:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:30.181 00:55:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:29:30.181 [2024-07-25 00:55:52.782142] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:29:30.181 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:30.181 Zero copy mechanism will not be used. 00:29:30.181 Running I/O for 60 seconds... 
00:29:30.439 [2024-07-25 00:55:52.865067] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:30.439 [2024-07-25 00:55:52.865519] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 00:29:30.439 00:55:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:30.439 00:55:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:30.439 00:55:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:30.439 00:55:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:30.439 00:55:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:30.439 00:55:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:30.439 00:55:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:30.439 00:55:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:30.439 00:55:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:30.439 00:55:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:30.439 00:55:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:30.439 00:55:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:30.698 00:55:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:30.698 "name": "raid_bdev1", 00:29:30.698 "uuid": "66fd3693-cfe9-4a26-89ed-6494ba0b0c37", 00:29:30.698 "strip_size_kb": 0, 00:29:30.698 "state": "online", 00:29:30.698 "raid_level": "raid1", 00:29:30.698 "superblock": false, 00:29:30.698 "num_base_bdevs": 2, 00:29:30.698 "num_base_bdevs_discovered": 1, 00:29:30.698 "num_base_bdevs_operational": 1, 00:29:30.698 "base_bdevs_list": [ 00:29:30.698 { 00:29:30.698 "name": null, 00:29:30.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:30.698 "is_configured": false, 00:29:30.698 "data_offset": 0, 00:29:30.698 "data_size": 65536 00:29:30.698 }, 00:29:30.698 { 00:29:30.698 "name": "BaseBdev2", 00:29:30.698 "uuid": "3abd3d48-f5fe-5489-a21e-cada6c0468f3", 00:29:30.698 "is_configured": true, 00:29:30.698 "data_offset": 0, 00:29:30.698 "data_size": 65536 00:29:30.698 } 00:29:30.698 ] 00:29:30.698 }' 00:29:30.698 00:55:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:30.698 00:55:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:31.266 00:55:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:31.524 [2024-07-25 00:55:54.056507] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:31.524 00:55:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:29:31.524 [2024-07-25 00:55:54.117424] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:29:31.525 [2024-07-25 00:55:54.119505] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:31.783 [2024-07-25 00:55:54.233386] bdev_raid.c: 
851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:31.783 [2024-07-25 00:55:54.234009] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:32.041 [2024-07-25 00:55:54.455735] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:32.041 [2024-07-25 00:55:54.456212] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:32.300 [2024-07-25 00:55:54.785908] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:32.300 [2024-07-25 00:55:54.903620] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:32.558 00:55:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:32.558 00:55:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:32.558 00:55:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:32.558 00:55:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:32.558 00:55:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:32.558 00:55:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:32.558 00:55:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:32.558 [2024-07-25 00:55:55.164547] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:32.817 [2024-07-25 00:55:55.273849] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:32.817 00:55:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:32.817 "name": "raid_bdev1", 00:29:32.817 "uuid": "66fd3693-cfe9-4a26-89ed-6494ba0b0c37", 00:29:32.817 "strip_size_kb": 0, 00:29:32.817 "state": "online", 00:29:32.817 "raid_level": "raid1", 00:29:32.817 "superblock": false, 00:29:32.817 "num_base_bdevs": 2, 00:29:32.817 "num_base_bdevs_discovered": 2, 00:29:32.817 "num_base_bdevs_operational": 2, 00:29:32.817 "process": { 00:29:32.817 "type": "rebuild", 00:29:32.817 "target": "spare", 00:29:32.817 "progress": { 00:29:32.817 "blocks": 16384, 00:29:32.817 "percent": 25 00:29:32.817 } 00:29:32.817 }, 00:29:32.817 "base_bdevs_list": [ 00:29:32.817 { 00:29:32.817 "name": "spare", 00:29:32.817 "uuid": "8db12d34-7ffb-58e6-b82f-3cf4cac3be58", 00:29:32.817 "is_configured": true, 00:29:32.817 "data_offset": 0, 00:29:32.817 "data_size": 65536 00:29:32.817 }, 00:29:32.817 { 00:29:32.817 "name": "BaseBdev2", 00:29:32.817 "uuid": "3abd3d48-f5fe-5489-a21e-cada6c0468f3", 00:29:32.817 "is_configured": true, 00:29:32.817 "data_offset": 0, 00:29:32.817 "data_size": 65536 00:29:32.817 } 00:29:32.817 ] 00:29:32.817 }' 00:29:32.817 00:55:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:32.817 00:55:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:32.817 00:55:55 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:32.817 00:55:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:32.817 00:55:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:33.077 [2024-07-25 00:55:55.616697] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:29:33.077 [2024-07-25 00:55:55.648780] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:33.336 [2024-07-25 00:55:55.829888] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:33.336 [2024-07-25 00:55:55.832105] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:33.336 [2024-07-25 00:55:55.832268] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:33.336 [2024-07-25 00:55:55.832306] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:33.336 [2024-07-25 00:55:55.880641] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 00:29:33.336 00:55:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:33.336 00:55:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:33.336 00:55:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:33.336 00:55:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:33.336 00:55:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:33.336 00:55:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:33.336 00:55:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:33.336 00:55:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:33.336 00:55:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:33.336 00:55:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:33.336 00:55:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:33.336 00:55:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:33.595 00:55:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:33.595 "name": "raid_bdev1", 00:29:33.595 "uuid": "66fd3693-cfe9-4a26-89ed-6494ba0b0c37", 00:29:33.595 "strip_size_kb": 0, 00:29:33.595 "state": "online", 00:29:33.595 "raid_level": "raid1", 00:29:33.595 "superblock": false, 00:29:33.595 "num_base_bdevs": 2, 00:29:33.595 "num_base_bdevs_discovered": 1, 00:29:33.595 "num_base_bdevs_operational": 1, 00:29:33.595 "base_bdevs_list": [ 00:29:33.595 { 00:29:33.595 "name": null, 00:29:33.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:33.595 "is_configured": false, 00:29:33.595 "data_offset": 0, 00:29:33.595 "data_size": 65536 00:29:33.595 }, 00:29:33.595 { 00:29:33.595 "name": "BaseBdev2", 00:29:33.595 "uuid": "3abd3d48-f5fe-5489-a21e-cada6c0468f3", 00:29:33.595 "is_configured": true, 
00:29:33.595 "data_offset": 0, 00:29:33.595 "data_size": 65536 00:29:33.595 } 00:29:33.595 ] 00:29:33.595 }' 00:29:33.595 00:55:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:33.595 00:55:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:34.164 00:55:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:34.164 00:55:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:34.164 00:55:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:34.164 00:55:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:34.164 00:55:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:34.423 00:55:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:34.423 00:55:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:34.683 00:55:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:34.683 "name": "raid_bdev1", 00:29:34.683 "uuid": "66fd3693-cfe9-4a26-89ed-6494ba0b0c37", 00:29:34.683 "strip_size_kb": 0, 00:29:34.683 "state": "online", 00:29:34.683 "raid_level": "raid1", 00:29:34.683 "superblock": false, 00:29:34.683 "num_base_bdevs": 2, 00:29:34.683 "num_base_bdevs_discovered": 1, 00:29:34.683 "num_base_bdevs_operational": 1, 00:29:34.683 "base_bdevs_list": [ 00:29:34.683 { 00:29:34.683 "name": null, 00:29:34.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:34.683 "is_configured": false, 00:29:34.683 "data_offset": 0, 00:29:34.683 "data_size": 65536 00:29:34.683 }, 00:29:34.683 { 00:29:34.683 "name": "BaseBdev2", 00:29:34.683 "uuid": "3abd3d48-f5fe-5489-a21e-cada6c0468f3", 00:29:34.683 "is_configured": true, 00:29:34.683 "data_offset": 0, 00:29:34.683 "data_size": 65536 00:29:34.683 } 00:29:34.683 ] 00:29:34.683 }' 00:29:34.683 00:55:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:34.683 00:55:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:34.683 00:55:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:34.683 00:55:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:34.683 00:55:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:34.941 [2024-07-25 00:55:57.426685] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:34.941 00:55:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:34.942 [2024-07-25 00:55:57.482192] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:29:34.942 [2024-07-25 00:55:57.483981] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:34.942 [2024-07-25 00:55:57.592203] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:34.942 [2024-07-25 00:55:57.592817] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 
offset_end: 6144 00:29:35.200 [2024-07-25 00:55:57.800392] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:35.200 [2024-07-25 00:55:57.800798] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:35.768 [2024-07-25 00:55:58.148772] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:35.768 [2024-07-25 00:55:58.149488] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:35.768 [2024-07-25 00:55:58.376151] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:35.768 [2024-07-25 00:55:58.376575] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:36.027 00:55:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:36.027 00:55:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:36.027 00:55:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:36.027 00:55:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:36.027 00:55:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:36.027 00:55:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:36.027 00:55:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:36.286 00:55:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:36.286 "name": "raid_bdev1", 00:29:36.286 "uuid": "66fd3693-cfe9-4a26-89ed-6494ba0b0c37", 00:29:36.286 "strip_size_kb": 0, 00:29:36.286 "state": "online", 00:29:36.286 "raid_level": "raid1", 00:29:36.286 "superblock": false, 00:29:36.286 "num_base_bdevs": 2, 00:29:36.286 "num_base_bdevs_discovered": 2, 00:29:36.286 "num_base_bdevs_operational": 2, 00:29:36.286 "process": { 00:29:36.286 "type": "rebuild", 00:29:36.286 "target": "spare", 00:29:36.286 "progress": { 00:29:36.286 "blocks": 14336, 00:29:36.286 "percent": 21 00:29:36.286 } 00:29:36.286 }, 00:29:36.286 "base_bdevs_list": [ 00:29:36.286 { 00:29:36.286 "name": "spare", 00:29:36.286 "uuid": "8db12d34-7ffb-58e6-b82f-3cf4cac3be58", 00:29:36.286 "is_configured": true, 00:29:36.286 "data_offset": 0, 00:29:36.286 "data_size": 65536 00:29:36.286 }, 00:29:36.286 { 00:29:36.286 "name": "BaseBdev2", 00:29:36.286 "uuid": "3abd3d48-f5fe-5489-a21e-cada6c0468f3", 00:29:36.286 "is_configured": true, 00:29:36.286 "data_offset": 0, 00:29:36.286 "data_size": 65536 00:29:36.286 } 00:29:36.286 ] 00:29:36.286 }' 00:29:36.286 00:55:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:36.286 00:55:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:36.286 00:55:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:36.286 00:55:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:36.286 00:55:58 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:29:36.286 00:55:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:29:36.286 00:55:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:29:36.286 00:55:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:29:36.286 00:55:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=835 00:29:36.286 00:55:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:36.286 00:55:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:36.286 00:55:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:36.286 00:55:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:36.286 00:55:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:36.286 00:55:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:36.286 00:55:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:36.286 00:55:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:36.545 00:55:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:36.545 "name": "raid_bdev1", 00:29:36.545 "uuid": "66fd3693-cfe9-4a26-89ed-6494ba0b0c37", 00:29:36.545 "strip_size_kb": 0, 00:29:36.545 "state": "online", 00:29:36.545 "raid_level": "raid1", 00:29:36.545 "superblock": false, 00:29:36.545 "num_base_bdevs": 2, 00:29:36.545 "num_base_bdevs_discovered": 2, 00:29:36.545 "num_base_bdevs_operational": 2, 00:29:36.545 "process": { 00:29:36.545 "type": "rebuild", 00:29:36.545 "target": "spare", 00:29:36.545 "progress": { 00:29:36.545 "blocks": 20480, 00:29:36.545 "percent": 31 00:29:36.545 } 00:29:36.545 }, 00:29:36.545 "base_bdevs_list": [ 00:29:36.545 { 00:29:36.545 "name": "spare", 00:29:36.545 "uuid": "8db12d34-7ffb-58e6-b82f-3cf4cac3be58", 00:29:36.545 "is_configured": true, 00:29:36.545 "data_offset": 0, 00:29:36.545 "data_size": 65536 00:29:36.545 }, 00:29:36.545 { 00:29:36.545 "name": "BaseBdev2", 00:29:36.545 "uuid": "3abd3d48-f5fe-5489-a21e-cada6c0468f3", 00:29:36.545 "is_configured": true, 00:29:36.545 "data_offset": 0, 00:29:36.545 "data_size": 65536 00:29:36.545 } 00:29:36.545 ] 00:29:36.545 }' 00:29:36.545 00:55:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:36.545 00:55:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:36.545 00:55:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:36.545 00:55:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:36.545 00:55:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:37.113 [2024-07-25 00:55:59.600458] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:29:37.372 [2024-07-25 00:55:59.826868] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:29:37.630 00:56:00 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:37.630 00:56:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:37.630 00:56:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:37.630 00:56:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:37.630 00:56:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:37.630 00:56:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:37.630 00:56:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:37.630 00:56:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:37.889 00:56:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:37.889 "name": "raid_bdev1", 00:29:37.889 "uuid": "66fd3693-cfe9-4a26-89ed-6494ba0b0c37", 00:29:37.889 "strip_size_kb": 0, 00:29:37.889 "state": "online", 00:29:37.889 "raid_level": "raid1", 00:29:37.889 "superblock": false, 00:29:37.889 "num_base_bdevs": 2, 00:29:37.889 "num_base_bdevs_discovered": 2, 00:29:37.889 "num_base_bdevs_operational": 2, 00:29:37.889 "process": { 00:29:37.889 "type": "rebuild", 00:29:37.889 "target": "spare", 00:29:37.889 "progress": { 00:29:37.889 "blocks": 40960, 00:29:37.889 "percent": 62 00:29:37.889 } 00:29:37.889 }, 00:29:37.889 "base_bdevs_list": [ 00:29:37.889 { 00:29:37.889 "name": "spare", 00:29:37.889 "uuid": "8db12d34-7ffb-58e6-b82f-3cf4cac3be58", 00:29:37.889 "is_configured": true, 00:29:37.889 "data_offset": 0, 00:29:37.889 "data_size": 65536 00:29:37.889 }, 00:29:37.889 { 00:29:37.889 "name": "BaseBdev2", 00:29:37.889 "uuid": "3abd3d48-f5fe-5489-a21e-cada6c0468f3", 00:29:37.889 "is_configured": true, 00:29:37.889 "data_offset": 0, 00:29:37.889 "data_size": 65536 00:29:37.889 } 00:29:37.889 ] 00:29:37.889 }' 00:29:37.889 00:56:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:37.889 00:56:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:37.889 00:56:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:37.889 00:56:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:37.889 00:56:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:38.503 [2024-07-25 00:56:00.933845] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:29:39.070 00:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:39.070 00:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:39.070 00:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:39.070 00:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:39.070 00:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:39.070 00:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:39.070 00:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:39.070 00:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:39.070 [2024-07-25 00:56:01.591783] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:39.070 [2024-07-25 00:56:01.691847] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:39.070 [2024-07-25 00:56:01.700143] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:39.329 00:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:39.329 "name": "raid_bdev1", 00:29:39.329 "uuid": "66fd3693-cfe9-4a26-89ed-6494ba0b0c37", 00:29:39.329 "strip_size_kb": 0, 00:29:39.329 "state": "online", 00:29:39.329 "raid_level": "raid1", 00:29:39.329 "superblock": false, 00:29:39.329 "num_base_bdevs": 2, 00:29:39.329 "num_base_bdevs_discovered": 2, 00:29:39.329 "num_base_bdevs_operational": 2, 00:29:39.329 "base_bdevs_list": [ 00:29:39.329 { 00:29:39.329 "name": "spare", 00:29:39.329 "uuid": "8db12d34-7ffb-58e6-b82f-3cf4cac3be58", 00:29:39.329 "is_configured": true, 00:29:39.329 "data_offset": 0, 00:29:39.329 "data_size": 65536 00:29:39.329 }, 00:29:39.329 { 00:29:39.329 "name": "BaseBdev2", 00:29:39.329 "uuid": "3abd3d48-f5fe-5489-a21e-cada6c0468f3", 00:29:39.329 "is_configured": true, 00:29:39.329 "data_offset": 0, 00:29:39.329 "data_size": 65536 00:29:39.329 } 00:29:39.329 ] 00:29:39.329 }' 00:29:39.329 00:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:39.329 00:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:39.329 00:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:39.329 00:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:29:39.329 00:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:29:39.329 00:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:39.329 00:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:39.329 00:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:39.329 00:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:39.329 00:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:39.329 00:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:39.329 00:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:39.589 00:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:39.589 "name": "raid_bdev1", 00:29:39.589 "uuid": "66fd3693-cfe9-4a26-89ed-6494ba0b0c37", 00:29:39.589 "strip_size_kb": 0, 00:29:39.589 "state": "online", 00:29:39.589 "raid_level": "raid1", 00:29:39.589 "superblock": false, 00:29:39.589 "num_base_bdevs": 2, 00:29:39.589 "num_base_bdevs_discovered": 2, 00:29:39.589 "num_base_bdevs_operational": 2, 00:29:39.589 "base_bdevs_list": [ 00:29:39.589 { 00:29:39.589 "name": "spare", 00:29:39.589 "uuid": 
"8db12d34-7ffb-58e6-b82f-3cf4cac3be58", 00:29:39.589 "is_configured": true, 00:29:39.589 "data_offset": 0, 00:29:39.589 "data_size": 65536 00:29:39.589 }, 00:29:39.589 { 00:29:39.589 "name": "BaseBdev2", 00:29:39.589 "uuid": "3abd3d48-f5fe-5489-a21e-cada6c0468f3", 00:29:39.589 "is_configured": true, 00:29:39.589 "data_offset": 0, 00:29:39.589 "data_size": 65536 00:29:39.589 } 00:29:39.589 ] 00:29:39.589 }' 00:29:39.589 00:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:39.589 00:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:39.589 00:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:39.589 00:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:39.589 00:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:39.589 00:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:39.589 00:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:39.589 00:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:39.589 00:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:39.589 00:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:39.589 00:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:39.589 00:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:39.589 00:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:39.589 00:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:39.589 00:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:39.589 00:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:39.848 00:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:39.848 "name": "raid_bdev1", 00:29:39.848 "uuid": "66fd3693-cfe9-4a26-89ed-6494ba0b0c37", 00:29:39.848 "strip_size_kb": 0, 00:29:39.848 "state": "online", 00:29:39.848 "raid_level": "raid1", 00:29:39.848 "superblock": false, 00:29:39.848 "num_base_bdevs": 2, 00:29:39.848 "num_base_bdevs_discovered": 2, 00:29:39.848 "num_base_bdevs_operational": 2, 00:29:39.848 "base_bdevs_list": [ 00:29:39.848 { 00:29:39.848 "name": "spare", 00:29:39.848 "uuid": "8db12d34-7ffb-58e6-b82f-3cf4cac3be58", 00:29:39.848 "is_configured": true, 00:29:39.848 "data_offset": 0, 00:29:39.848 "data_size": 65536 00:29:39.848 }, 00:29:39.849 { 00:29:39.849 "name": "BaseBdev2", 00:29:39.849 "uuid": "3abd3d48-f5fe-5489-a21e-cada6c0468f3", 00:29:39.849 "is_configured": true, 00:29:39.849 "data_offset": 0, 00:29:39.849 "data_size": 65536 00:29:39.849 } 00:29:39.849 ] 00:29:39.849 }' 00:29:39.849 00:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:39.849 00:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:40.416 00:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:40.675 [2024-07-25 00:56:03.152907] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:40.675 [2024-07-25 00:56:03.153172] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:40.675 00:29:40.675 Latency(us) 00:29:40.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:40.675 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:29:40.675 raid_bdev1 : 10.42 129.51 388.52 0.00 0.00 10935.84 317.93 113845.39 00:29:40.675 =================================================================================================================== 00:29:40.675 Total : 129.51 388.52 0.00 0.00 10935.84 317.93 113845.39 00:29:40.675 [2024-07-25 00:56:03.228923] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:40.675 [2024-07-25 00:56:03.229073] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:40.675 [2024-07-25 00:56:03.229191] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:40.675 [2024-07-25 00:56:03.229392] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:29:40.675 0 00:29:40.675 00:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:29:40.675 00:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:40.935 00:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:29:40.935 00:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:29:40.935 00:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:29:40.935 00:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:29:40.935 00:56:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:40.935 00:56:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:29:40.935 00:56:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:40.935 00:56:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:40.935 00:56:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:40.935 00:56:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:29:40.935 00:56:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:40.935 00:56:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:40.935 00:56:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:29:41.193 /dev/nbd0 00:29:41.193 00:56:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:41.193 00:56:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:41.193 00:56:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:41.193 00:56:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:29:41.193 00:56:03 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:41.193 00:56:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:41.193 00:56:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:41.193 00:56:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:29:41.193 00:56:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:41.193 00:56:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:41.193 00:56:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:41.193 1+0 records in 00:29:41.193 1+0 records out 00:29:41.193 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519676 s, 7.9 MB/s 00:29:41.193 00:56:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:41.193 00:56:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:29:41.193 00:56:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:41.193 00:56:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:41.193 00:56:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:29:41.193 00:56:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:41.193 00:56:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:41.193 00:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:29:41.193 00:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:29:41.193 00:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:29:41.193 00:56:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:41.193 00:56:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:29:41.193 00:56:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:41.193 00:56:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:29:41.193 00:56:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:41.193 00:56:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:29:41.193 00:56:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:41.193 00:56:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:41.193 00:56:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:29:41.761 /dev/nbd1 00:29:41.761 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:41.761 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:41.761 00:56:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:29:41.761 00:56:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:29:41.761 00:56:04 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:41.761 00:56:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:41.761 00:56:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:29:41.761 00:56:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:29:41.761 00:56:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:41.761 00:56:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:41.761 00:56:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:41.761 1+0 records in 00:29:41.761 1+0 records out 00:29:41.761 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000552958 s, 7.4 MB/s 00:29:41.761 00:56:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:41.761 00:56:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:29:41.761 00:56:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:41.761 00:56:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:41.761 00:56:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:29:41.761 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:41.761 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:41.761 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:29:41.761 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:29:41.761 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:41.761 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:29:41.761 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:41.761 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:29:41.761 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:41.761 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- 
# nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 146092 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@948 -- # '[' -z 146092 ']' 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # kill -0 146092 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # uname 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 146092 00:29:42.329 killing process with pid 146092 00:29:42.329 Received shutdown signal, test time was about 12.138378 seconds 00:29:42.329 00:29:42.329 Latency(us) 00:29:42.329 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:42.329 =================================================================================================================== 00:29:42.329 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 146092' 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # kill 146092 00:29:42.329 00:56:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # wait 146092 00:29:42.329 [2024-07-25 00:56:04.923002] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:42.588 [2024-07-25 00:56:05.162840] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:44.495 00:56:06 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:29:44.495 00:29:44.495 real 0m17.991s 00:29:44.495 user 0m27.095s 00:29:44.495 sys 0m2.297s 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:44.495 ************************************ 00:29:44.495 END TEST raid_rebuild_test_io 00:29:44.495 ************************************ 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:44.495 00:56:06 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:29:44.495 00:56:06 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:29:44.495 00:56:06 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:44.495 00:56:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:44.495 ************************************ 00:29:44.495 START TEST raid_rebuild_test_sb_io 00:29:44.495 ************************************ 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true true true 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:29:44.495 00:56:06 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=146563 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 146563 /var/tmp/spdk-raid.sock 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@829 -- # '[' -z 146563 ']' 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:44.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:44.495 00:56:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:44.495 [2024-07-25 00:56:06.791906] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:29:44.495 [2024-07-25 00:56:06.792942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146563 ] 00:29:44.495 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:44.495 Zero copy mechanism will not be used. 
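The xtrace above captures the setup pattern for this test: bdevperf is launched idle (-z) with its JSON-RPC server bound to a private socket, the script waits for that socket, and every bdev is then configured through rpc.py against it. A minimal standalone sketch of the same flow, using only the paths visible in the log (the polling loop is illustrative; the test itself waits via its waitforlisten helper, and rpc_get_methods is assumed here only as a cheap liveness probe):

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-raid.sock

# Start bdevperf idle (-z) so bdevs can be created over RPC before I/O begins.
"$SPDK/build/examples/bdevperf" -r "$SOCK" -T raid_bdev1 -t 60 \
    -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &

# Illustrative wait: poll the RPC socket until the app answers.
until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done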
00:29:44.495 [2024-07-25 00:56:06.972572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.755 [2024-07-25 00:56:07.227929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.013 [2024-07-25 00:56:07.465451] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:45.272 00:56:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:45.272 00:56:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # return 0 00:29:45.272 00:56:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:45.272 00:56:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:45.272 BaseBdev1_malloc 00:29:45.272 00:56:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:45.531 [2024-07-25 00:56:08.136350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:45.531 [2024-07-25 00:56:08.136726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:45.531 [2024-07-25 00:56:08.136812] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:29:45.531 [2024-07-25 00:56:08.136942] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:45.531 [2024-07-25 00:56:08.139789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:45.531 [2024-07-25 00:56:08.139976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:45.531 BaseBdev1 00:29:45.531 00:56:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:45.531 00:56:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:45.790 BaseBdev2_malloc 00:29:45.790 00:56:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:46.049 [2024-07-25 00:56:08.554390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:46.049 [2024-07-25 00:56:08.554720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:46.049 [2024-07-25 00:56:08.554889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:29:46.049 [2024-07-25 00:56:08.554999] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:46.049 [2024-07-25 00:56:08.557754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:46.049 [2024-07-25 00:56:08.557915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:46.049 BaseBdev2 00:29:46.049 00:56:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:46.307 spare_malloc 00:29:46.307 00:56:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:46.567 spare_delay 00:29:46.567 00:56:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:46.567 [2024-07-25 00:56:09.130904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:46.567 [2024-07-25 00:56:09.131255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:46.567 [2024-07-25 00:56:09.131404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:29:46.567 [2024-07-25 00:56:09.131499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:46.567 [2024-07-25 00:56:09.134427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:46.567 [2024-07-25 00:56:09.134585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:46.567 spare 00:29:46.567 00:56:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:29:46.826 [2024-07-25 00:56:09.303144] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:46.826 [2024-07-25 00:56:09.305738] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:46.826 [2024-07-25 00:56:09.306081] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:29:46.826 [2024-07-25 00:56:09.306206] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:46.826 [2024-07-25 00:56:09.306473] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:29:46.826 [2024-07-25 00:56:09.306935] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:29:46.826 [2024-07-25 00:56:09.307039] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:29:46.826 [2024-07-25 00:56:09.307314] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:46.826 00:56:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:46.826 00:56:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:46.826 00:56:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:46.826 00:56:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:46.826 00:56:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:46.826 00:56:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:46.826 00:56:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:46.826 00:56:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:46.826 00:56:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:46.826 00:56:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:46.826 00:56:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:46.826 00:56:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:47.085 00:56:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:47.085 "name": "raid_bdev1", 00:29:47.085 "uuid": "36f18293-1a0d-472f-8156-d5c1487dae9f", 00:29:47.085 "strip_size_kb": 0, 00:29:47.085 "state": "online", 00:29:47.085 "raid_level": "raid1", 00:29:47.085 "superblock": true, 00:29:47.085 "num_base_bdevs": 2, 00:29:47.085 "num_base_bdevs_discovered": 2, 00:29:47.085 "num_base_bdevs_operational": 2, 00:29:47.085 "base_bdevs_list": [ 00:29:47.085 { 00:29:47.085 "name": "BaseBdev1", 00:29:47.085 "uuid": "b1fa571f-fbd2-51c9-aa7d-4fb9595c5fa9", 00:29:47.085 "is_configured": true, 00:29:47.085 "data_offset": 2048, 00:29:47.085 "data_size": 63488 00:29:47.085 }, 00:29:47.085 { 00:29:47.085 "name": "BaseBdev2", 00:29:47.085 "uuid": "cd388670-9d4b-55f3-8e3d-b08fc92a87d2", 00:29:47.085 "is_configured": true, 00:29:47.085 "data_offset": 2048, 00:29:47.085 "data_size": 63488 00:29:47.085 } 00:29:47.085 ] 00:29:47.085 }' 00:29:47.085 00:56:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:47.085 00:56:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:47.653 00:56:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:29:47.653 00:56:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:47.653 [2024-07-25 00:56:10.243719] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:47.653 00:56:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:29:47.653 00:56:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:47.653 00:56:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:47.912 00:56:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:29:47.912 00:56:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:29:47.912 00:56:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:47.912 00:56:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:29:47.912 [2024-07-25 00:56:10.550515] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:29:47.912 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:47.912 Zero copy mechanism will not be used. 00:29:47.912 Running I/O for 60 seconds... 
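Each verify step in the log follows one pattern: dump the array with bdev_raid_get_bdevs, narrow it to raid_bdev1 with jq, then compare individual fields against the expected state. A condensed sketch of that check against the same socket and names (the rpc wrapper and the specific assertions are illustrative, not the test's verify_raid_bdev_state helper itself):

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

# Fetch the raid bdev entry once and assert the fields checked above.
info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
[[ $(jq -r '.state' <<<"$info") == online ]]
[[ $(jq -r '.raid_level' <<<"$info") == raid1 ]]
[[ $(jq -r '.num_base_bdevs_discovered' <<<"$info") == 2 ]]

# Size and superblock data offset, matching the values dumped above:
rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[].num_blocks'                    # 63488
rpc bdev_raid_get_bdevs all | jq -r '.[].base_bdevs_list[0].data_offset'     # 2048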
00:29:48.171 [2024-07-25 00:56:10.600471] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:48.171 [2024-07-25 00:56:10.600909] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 00:29:48.171 00:56:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:48.171 00:56:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:48.171 00:56:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:48.171 00:56:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:48.171 00:56:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:48.171 00:56:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:48.171 00:56:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:48.171 00:56:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:48.171 00:56:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:48.171 00:56:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:48.171 00:56:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:48.171 00:56:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:48.430 00:56:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:48.430 "name": "raid_bdev1", 00:29:48.430 "uuid": "36f18293-1a0d-472f-8156-d5c1487dae9f", 00:29:48.430 "strip_size_kb": 0, 00:29:48.430 "state": "online", 00:29:48.430 "raid_level": "raid1", 00:29:48.430 "superblock": true, 00:29:48.430 "num_base_bdevs": 2, 00:29:48.430 "num_base_bdevs_discovered": 1, 00:29:48.430 "num_base_bdevs_operational": 1, 00:29:48.430 "base_bdevs_list": [ 00:29:48.430 { 00:29:48.430 "name": null, 00:29:48.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:48.430 "is_configured": false, 00:29:48.430 "data_offset": 2048, 00:29:48.430 "data_size": 63488 00:29:48.430 }, 00:29:48.430 { 00:29:48.430 "name": "BaseBdev2", 00:29:48.430 "uuid": "cd388670-9d4b-55f3-8e3d-b08fc92a87d2", 00:29:48.430 "is_configured": true, 00:29:48.430 "data_offset": 2048, 00:29:48.430 "data_size": 63488 00:29:48.430 } 00:29:48.430 ] 00:29:48.430 }' 00:29:48.430 00:56:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:48.430 00:56:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:48.996 00:56:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:49.254 [2024-07-25 00:56:11.691162] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:49.254 [2024-07-25 00:56:11.747980] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:29:49.254 [2024-07-25 00:56:11.750201] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:49.254 00:56:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:29:49.254 
[2024-07-25 00:56:11.857879] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:49.254 [2024-07-25 00:56:11.858495] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:49.512 [2024-07-25 00:56:12.072372] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:49.512 [2024-07-25 00:56:12.072729] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:49.774 [2024-07-25 00:56:12.396567] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:50.032 [2024-07-25 00:56:12.621009] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:50.292 00:56:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:50.293 00:56:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:50.293 00:56:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:50.293 00:56:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:50.293 00:56:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:50.293 00:56:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:50.293 00:56:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:50.552 00:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:50.552 "name": "raid_bdev1", 00:29:50.552 "uuid": "36f18293-1a0d-472f-8156-d5c1487dae9f", 00:29:50.552 "strip_size_kb": 0, 00:29:50.552 "state": "online", 00:29:50.552 "raid_level": "raid1", 00:29:50.552 "superblock": true, 00:29:50.552 "num_base_bdevs": 2, 00:29:50.552 "num_base_bdevs_discovered": 2, 00:29:50.552 "num_base_bdevs_operational": 2, 00:29:50.552 "process": { 00:29:50.552 "type": "rebuild", 00:29:50.552 "target": "spare", 00:29:50.552 "progress": { 00:29:50.552 "blocks": 16384, 00:29:50.552 "percent": 25 00:29:50.552 } 00:29:50.552 }, 00:29:50.552 "base_bdevs_list": [ 00:29:50.552 { 00:29:50.552 "name": "spare", 00:29:50.552 "uuid": "77a3396f-f53f-560d-99d8-1143477d2bab", 00:29:50.552 "is_configured": true, 00:29:50.552 "data_offset": 2048, 00:29:50.552 "data_size": 63488 00:29:50.552 }, 00:29:50.552 { 00:29:50.552 "name": "BaseBdev2", 00:29:50.552 "uuid": "cd388670-9d4b-55f3-8e3d-b08fc92a87d2", 00:29:50.552 "is_configured": true, 00:29:50.552 "data_offset": 2048, 00:29:50.552 "data_size": 63488 00:29:50.552 } 00:29:50.552 ] 00:29:50.552 }' 00:29:50.552 00:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:50.552 00:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:50.552 00:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:50.552 00:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:50.552 00:56:13 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:50.811 [2024-07-25 00:56:13.264449] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:50.811 [2024-07-25 00:56:13.296885] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:29:50.811 [2024-07-25 00:56:13.297326] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:29:50.811 [2024-07-25 00:56:13.399022] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:50.811 [2024-07-25 00:56:13.412028] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:50.811 [2024-07-25 00:56:13.412186] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:50.811 [2024-07-25 00:56:13.412230] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:50.811 [2024-07-25 00:56:13.443680] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 00:29:51.069 00:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:51.070 00:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:51.070 00:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:51.070 00:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:51.070 00:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:51.070 00:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:51.070 00:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:51.070 00:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:51.070 00:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:51.070 00:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:51.070 00:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:51.070 00:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:51.328 00:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:51.328 "name": "raid_bdev1", 00:29:51.328 "uuid": "36f18293-1a0d-472f-8156-d5c1487dae9f", 00:29:51.328 "strip_size_kb": 0, 00:29:51.328 "state": "online", 00:29:51.328 "raid_level": "raid1", 00:29:51.328 "superblock": true, 00:29:51.328 "num_base_bdevs": 2, 00:29:51.328 "num_base_bdevs_discovered": 1, 00:29:51.328 "num_base_bdevs_operational": 1, 00:29:51.329 "base_bdevs_list": [ 00:29:51.329 { 00:29:51.329 "name": null, 00:29:51.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:51.329 "is_configured": false, 00:29:51.329 "data_offset": 2048, 00:29:51.329 "data_size": 63488 00:29:51.329 }, 00:29:51.329 { 00:29:51.329 "name": "BaseBdev2", 00:29:51.329 "uuid": "cd388670-9d4b-55f3-8e3d-b08fc92a87d2", 00:29:51.329 "is_configured": true, 00:29:51.329 
"data_offset": 2048, 00:29:51.329 "data_size": 63488 00:29:51.329 } 00:29:51.329 ] 00:29:51.329 }' 00:29:51.329 00:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:51.329 00:56:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:51.896 00:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:51.896 00:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:51.896 00:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:51.896 00:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:51.896 00:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:51.896 00:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:51.896 00:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:51.896 00:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:51.896 "name": "raid_bdev1", 00:29:51.896 "uuid": "36f18293-1a0d-472f-8156-d5c1487dae9f", 00:29:51.896 "strip_size_kb": 0, 00:29:51.896 "state": "online", 00:29:51.896 "raid_level": "raid1", 00:29:51.896 "superblock": true, 00:29:51.896 "num_base_bdevs": 2, 00:29:51.896 "num_base_bdevs_discovered": 1, 00:29:51.896 "num_base_bdevs_operational": 1, 00:29:51.896 "base_bdevs_list": [ 00:29:51.896 { 00:29:51.896 "name": null, 00:29:51.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:51.896 "is_configured": false, 00:29:51.896 "data_offset": 2048, 00:29:51.896 "data_size": 63488 00:29:51.896 }, 00:29:51.896 { 00:29:51.896 "name": "BaseBdev2", 00:29:51.896 "uuid": "cd388670-9d4b-55f3-8e3d-b08fc92a87d2", 00:29:51.896 "is_configured": true, 00:29:51.896 "data_offset": 2048, 00:29:51.896 "data_size": 63488 00:29:51.896 } 00:29:51.896 ] 00:29:51.897 }' 00:29:51.897 00:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:51.897 00:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:51.897 00:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:51.897 00:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:51.897 00:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:52.156 [2024-07-25 00:56:14.774449] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:52.415 00:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:52.415 [2024-07-25 00:56:14.821632] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:29:52.415 [2024-07-25 00:56:14.823720] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:52.415 [2024-07-25 00:56:14.949799] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:52.415 [2024-07-25 00:56:14.950394] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:52.674 [2024-07-25 00:56:15.159412] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:52.674 [2024-07-25 00:56:15.159754] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:52.933 [2024-07-25 00:56:15.394588] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:52.933 [2024-07-25 00:56:15.395235] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:53.267 [2024-07-25 00:56:15.724425] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:53.267 00:56:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:53.267 00:56:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:53.267 00:56:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:53.267 00:56:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:53.267 00:56:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:53.267 00:56:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:53.267 00:56:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:53.525 [2024-07-25 00:56:15.933889] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:53.525 00:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:53.525 "name": "raid_bdev1", 00:29:53.525 "uuid": "36f18293-1a0d-472f-8156-d5c1487dae9f", 00:29:53.525 "strip_size_kb": 0, 00:29:53.525 "state": "online", 00:29:53.525 "raid_level": "raid1", 00:29:53.525 "superblock": true, 00:29:53.525 "num_base_bdevs": 2, 00:29:53.525 "num_base_bdevs_discovered": 2, 00:29:53.525 "num_base_bdevs_operational": 2, 00:29:53.525 "process": { 00:29:53.525 "type": "rebuild", 00:29:53.525 "target": "spare", 00:29:53.525 "progress": { 00:29:53.525 "blocks": 18432, 00:29:53.525 "percent": 29 00:29:53.525 } 00:29:53.525 }, 00:29:53.525 "base_bdevs_list": [ 00:29:53.525 { 00:29:53.525 "name": "spare", 00:29:53.525 "uuid": "77a3396f-f53f-560d-99d8-1143477d2bab", 00:29:53.525 "is_configured": true, 00:29:53.525 "data_offset": 2048, 00:29:53.525 "data_size": 63488 00:29:53.525 }, 00:29:53.525 { 00:29:53.525 "name": "BaseBdev2", 00:29:53.525 "uuid": "cd388670-9d4b-55f3-8e3d-b08fc92a87d2", 00:29:53.525 "is_configured": true, 00:29:53.525 "data_offset": 2048, 00:29:53.525 "data_size": 63488 00:29:53.525 } 00:29:53.525 ] 00:29:53.525 }' 00:29:53.525 00:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:53.525 00:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:53.525 00:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:53.525 00:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == 
\s\p\a\r\e ]] 00:29:53.525 00:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:29:53.525 00:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:29:53.525 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:29:53.525 00:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:29:53.525 00:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:29:53.525 00:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:29:53.525 00:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=853 00:29:53.525 00:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:53.526 00:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:53.526 00:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:53.526 00:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:53.526 00:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:53.526 00:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:53.785 00:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:53.785 00:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:53.785 00:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:53.785 "name": "raid_bdev1", 00:29:53.785 "uuid": "36f18293-1a0d-472f-8156-d5c1487dae9f", 00:29:53.785 "strip_size_kb": 0, 00:29:53.785 "state": "online", 00:29:53.785 "raid_level": "raid1", 00:29:53.785 "superblock": true, 00:29:53.785 "num_base_bdevs": 2, 00:29:53.785 "num_base_bdevs_discovered": 2, 00:29:53.785 "num_base_bdevs_operational": 2, 00:29:53.785 "process": { 00:29:53.785 "type": "rebuild", 00:29:53.785 "target": "spare", 00:29:53.785 "progress": { 00:29:53.785 "blocks": 24576, 00:29:53.785 "percent": 38 00:29:53.785 } 00:29:53.785 }, 00:29:53.785 "base_bdevs_list": [ 00:29:53.785 { 00:29:53.785 "name": "spare", 00:29:53.785 "uuid": "77a3396f-f53f-560d-99d8-1143477d2bab", 00:29:53.785 "is_configured": true, 00:29:53.785 "data_offset": 2048, 00:29:53.785 "data_size": 63488 00:29:53.785 }, 00:29:53.785 { 00:29:53.785 "name": "BaseBdev2", 00:29:53.785 "uuid": "cd388670-9d4b-55f3-8e3d-b08fc92a87d2", 00:29:53.785 "is_configured": true, 00:29:53.785 "data_offset": 2048, 00:29:53.785 "data_size": 63488 00:29:53.785 } 00:29:53.785 ] 00:29:53.785 }' 00:29:53.785 00:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:54.044 00:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:54.044 00:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:54.044 [2024-07-25 00:56:16.471274] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:29:54.044 00:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ 
spare == \s\p\a\r\e ]] 00:29:54.044 00:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:54.044 [2024-07-25 00:56:16.687212] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:29:54.612 [2024-07-25 00:56:17.016168] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:29:54.871 00:56:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:54.871 00:56:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:54.871 00:56:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:54.872 00:56:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:54.872 00:56:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:54.872 00:56:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:54.872 00:56:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:54.872 00:56:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:55.131 00:56:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:55.131 "name": "raid_bdev1", 00:29:55.131 "uuid": "36f18293-1a0d-472f-8156-d5c1487dae9f", 00:29:55.131 "strip_size_kb": 0, 00:29:55.131 "state": "online", 00:29:55.131 "raid_level": "raid1", 00:29:55.131 "superblock": true, 00:29:55.131 "num_base_bdevs": 2, 00:29:55.131 "num_base_bdevs_discovered": 2, 00:29:55.131 "num_base_bdevs_operational": 2, 00:29:55.131 "process": { 00:29:55.131 "type": "rebuild", 00:29:55.131 "target": "spare", 00:29:55.131 "progress": { 00:29:55.131 "blocks": 45056, 00:29:55.131 "percent": 70 00:29:55.131 } 00:29:55.131 }, 00:29:55.131 "base_bdevs_list": [ 00:29:55.131 { 00:29:55.131 "name": "spare", 00:29:55.131 "uuid": "77a3396f-f53f-560d-99d8-1143477d2bab", 00:29:55.131 "is_configured": true, 00:29:55.131 "data_offset": 2048, 00:29:55.131 "data_size": 63488 00:29:55.131 }, 00:29:55.131 { 00:29:55.131 "name": "BaseBdev2", 00:29:55.131 "uuid": "cd388670-9d4b-55f3-8e3d-b08fc92a87d2", 00:29:55.131 "is_configured": true, 00:29:55.131 "data_offset": 2048, 00:29:55.131 "data_size": 63488 00:29:55.131 } 00:29:55.131 ] 00:29:55.131 }' 00:29:55.131 00:56:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:55.390 00:56:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:55.390 00:56:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:55.390 00:56:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:55.390 00:56:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:55.390 [2024-07-25 00:56:18.031308] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:29:56.327 [2024-07-25 00:56:18.692395] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:56.327 [2024-07-25 00:56:18.797577] 
bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:56.327 [2024-07-25 00:56:18.799562] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:56.327 00:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:56.327 00:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:56.327 00:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:56.327 00:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:56.327 00:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:56.327 00:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:56.327 00:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:56.327 00:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:56.586 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:56.586 "name": "raid_bdev1", 00:29:56.586 "uuid": "36f18293-1a0d-472f-8156-d5c1487dae9f", 00:29:56.586 "strip_size_kb": 0, 00:29:56.586 "state": "online", 00:29:56.586 "raid_level": "raid1", 00:29:56.586 "superblock": true, 00:29:56.586 "num_base_bdevs": 2, 00:29:56.586 "num_base_bdevs_discovered": 2, 00:29:56.586 "num_base_bdevs_operational": 2, 00:29:56.586 "base_bdevs_list": [ 00:29:56.586 { 00:29:56.586 "name": "spare", 00:29:56.586 "uuid": "77a3396f-f53f-560d-99d8-1143477d2bab", 00:29:56.586 "is_configured": true, 00:29:56.586 "data_offset": 2048, 00:29:56.586 "data_size": 63488 00:29:56.586 }, 00:29:56.586 { 00:29:56.586 "name": "BaseBdev2", 00:29:56.586 "uuid": "cd388670-9d4b-55f3-8e3d-b08fc92a87d2", 00:29:56.586 "is_configured": true, 00:29:56.586 "data_offset": 2048, 00:29:56.586 "data_size": 63488 00:29:56.586 } 00:29:56.586 ] 00:29:56.586 }' 00:29:56.586 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:56.586 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:56.586 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:56.586 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:29:56.586 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:29:56.586 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:56.586 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:56.586 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:56.586 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:56.587 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:56.587 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:56.587 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:56.846 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:56.846 "name": "raid_bdev1", 00:29:56.846 "uuid": "36f18293-1a0d-472f-8156-d5c1487dae9f", 00:29:56.846 "strip_size_kb": 0, 00:29:56.846 "state": "online", 00:29:56.846 "raid_level": "raid1", 00:29:56.846 "superblock": true, 00:29:56.846 "num_base_bdevs": 2, 00:29:56.846 "num_base_bdevs_discovered": 2, 00:29:56.846 "num_base_bdevs_operational": 2, 00:29:56.846 "base_bdevs_list": [ 00:29:56.846 { 00:29:56.846 "name": "spare", 00:29:56.846 "uuid": "77a3396f-f53f-560d-99d8-1143477d2bab", 00:29:56.846 "is_configured": true, 00:29:56.846 "data_offset": 2048, 00:29:56.846 "data_size": 63488 00:29:56.846 }, 00:29:56.846 { 00:29:56.846 "name": "BaseBdev2", 00:29:56.846 "uuid": "cd388670-9d4b-55f3-8e3d-b08fc92a87d2", 00:29:56.846 "is_configured": true, 00:29:56.846 "data_offset": 2048, 00:29:56.846 "data_size": 63488 00:29:56.846 } 00:29:56.846 ] 00:29:56.846 }' 00:29:56.846 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:56.846 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:56.846 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:56.846 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:56.846 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:56.846 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:56.846 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:56.846 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:56.846 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:56.846 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:56.846 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:56.846 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:56.846 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:56.846 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:56.846 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:56.846 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:57.105 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:57.105 "name": "raid_bdev1", 00:29:57.105 "uuid": "36f18293-1a0d-472f-8156-d5c1487dae9f", 00:29:57.105 "strip_size_kb": 0, 00:29:57.105 "state": "online", 00:29:57.105 "raid_level": "raid1", 00:29:57.105 "superblock": true, 00:29:57.105 "num_base_bdevs": 2, 00:29:57.105 "num_base_bdevs_discovered": 2, 00:29:57.105 "num_base_bdevs_operational": 2, 00:29:57.105 "base_bdevs_list": [ 00:29:57.105 { 00:29:57.105 "name": "spare", 00:29:57.105 "uuid": 
"77a3396f-f53f-560d-99d8-1143477d2bab", 00:29:57.105 "is_configured": true, 00:29:57.105 "data_offset": 2048, 00:29:57.105 "data_size": 63488 00:29:57.105 }, 00:29:57.105 { 00:29:57.105 "name": "BaseBdev2", 00:29:57.105 "uuid": "cd388670-9d4b-55f3-8e3d-b08fc92a87d2", 00:29:57.105 "is_configured": true, 00:29:57.105 "data_offset": 2048, 00:29:57.105 "data_size": 63488 00:29:57.105 } 00:29:57.105 ] 00:29:57.105 }' 00:29:57.105 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:57.105 00:56:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:57.674 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:57.933 [2024-07-25 00:56:20.337130] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:57.933 [2024-07-25 00:56:20.337331] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:57.933 00:29:57.933 Latency(us) 00:29:57.933 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:57.933 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:29:57.933 raid_bdev1 : 9.86 116.63 349.90 0.00 0.00 12144.22 306.22 110350.14 00:29:57.933 =================================================================================================================== 00:29:57.933 Total : 116.63 349.90 0.00 0.00 12144.22 306.22 110350.14 00:29:57.933 [2024-07-25 00:56:20.434133] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:57.933 [2024-07-25 00:56:20.434376] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:57.933 [2024-07-25 00:56:20.434489] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:57.933 [2024-07-25 00:56:20.434752] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:29:57.933 0 00:29:57.933 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:57.933 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:29:58.193 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:29:58.193 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:29:58.193 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:29:58.193 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:29:58.193 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:58.193 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:29:58.193 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:58.193 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:58.193 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:58.193 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:29:58.193 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:58.193 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:58.193 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:29:58.453 /dev/nbd0 00:29:58.453 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:58.453 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:58.453 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:58.453 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:29:58.453 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:58.453 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:58.453 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:58.453 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:29:58.453 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:58.453 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:58.453 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:58.453 1+0 records in 00:29:58.453 1+0 records out 00:29:58.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048128 s, 8.5 MB/s 00:29:58.453 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:58.453 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:29:58.453 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:58.453 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:58.453 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:29:58.453 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:58.453 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:58.453 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:29:58.453 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:29:58.453 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:29:58.453 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:58.453 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:29:58.453 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:58.453 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:29:58.453 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:58.453 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@12 -- # local i 00:29:58.453 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:58.453 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:58.453 00:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:29:58.712 /dev/nbd1 00:29:58.712 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:58.712 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:58.713 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:29:58.713 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:29:58.713 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:58.713 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:58.713 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:29:58.713 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:29:58.713 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:58.713 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:58.713 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:58.713 1+0 records in 00:29:58.713 1+0 records out 00:29:58.713 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408703 s, 10.0 MB/s 00:29:58.713 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:58.713 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:29:58.713 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:58.713 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:58.713 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:29:58.713 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:58.713 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:58.713 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:29:58.713 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:29:58.713 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:58.713 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:29:58.713 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:58.713 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:29:58.713 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:58.713 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:58.972 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:58.972 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:58.972 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:58.972 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:58.972 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:58.972 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:58.972 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:29:58.972 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:58.972 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:58.972 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:58.972 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:58.972 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:58.972 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:29:58.972 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:58.972 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:59.231 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:59.231 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:59.231 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:59.231 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:59.231 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:59.232 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:59.232 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:29:59.232 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:59.232 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:29:59.232 00:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:59.490 00:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:59.750 [2024-07-25 00:56:22.379005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:59.750 [2024-07-25 00:56:22.379300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:59.750 [2024-07-25 00:56:22.379395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:29:59.750 [2024-07-25 00:56:22.379515] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:29:59.750 [2024-07-25 00:56:22.381912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:59.750 [2024-07-25 00:56:22.382084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:59.750 [2024-07-25 00:56:22.382306] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:59.750 [2024-07-25 00:56:22.382439] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:59.750 [2024-07-25 00:56:22.382633] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:59.750 spare 00:29:59.750 00:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:59.750 00:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:59.750 00:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:59.750 00:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:59.750 00:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:59.750 00:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:59.750 00:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:59.750 00:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:59.750 00:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:59.750 00:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:00.010 00:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:00.010 00:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:00.010 [2024-07-25 00:56:22.482834] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:30:00.010 [2024-07-25 00:56:22.483009] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:00.010 [2024-07-25 00:56:22.483195] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:30:00.010 [2024-07-25 00:56:22.483855] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:30:00.010 [2024-07-25 00:56:22.483975] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:30:00.010 [2024-07-25 00:56:22.484210] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:00.010 00:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:00.010 "name": "raid_bdev1", 00:30:00.010 "uuid": "36f18293-1a0d-472f-8156-d5c1487dae9f", 00:30:00.010 "strip_size_kb": 0, 00:30:00.010 "state": "online", 00:30:00.010 "raid_level": "raid1", 00:30:00.010 "superblock": true, 00:30:00.010 "num_base_bdevs": 2, 00:30:00.010 "num_base_bdevs_discovered": 2, 00:30:00.010 "num_base_bdevs_operational": 2, 00:30:00.010 "base_bdevs_list": [ 00:30:00.010 { 00:30:00.010 "name": "spare", 00:30:00.010 "uuid": "77a3396f-f53f-560d-99d8-1143477d2bab", 00:30:00.010 "is_configured": true, 00:30:00.010 "data_offset": 2048, 
00:30:00.010 "data_size": 63488 00:30:00.010 }, 00:30:00.010 { 00:30:00.010 "name": "BaseBdev2", 00:30:00.010 "uuid": "cd388670-9d4b-55f3-8e3d-b08fc92a87d2", 00:30:00.010 "is_configured": true, 00:30:00.010 "data_offset": 2048, 00:30:00.010 "data_size": 63488 00:30:00.010 } 00:30:00.010 ] 00:30:00.010 }' 00:30:00.010 00:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:00.010 00:56:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:00.579 00:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:00.579 00:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:00.579 00:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:00.579 00:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:00.579 00:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:00.579 00:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:00.579 00:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:00.837 00:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:00.837 "name": "raid_bdev1", 00:30:00.837 "uuid": "36f18293-1a0d-472f-8156-d5c1487dae9f", 00:30:00.837 "strip_size_kb": 0, 00:30:00.837 "state": "online", 00:30:00.837 "raid_level": "raid1", 00:30:00.837 "superblock": true, 00:30:00.837 "num_base_bdevs": 2, 00:30:00.837 "num_base_bdevs_discovered": 2, 00:30:00.837 "num_base_bdevs_operational": 2, 00:30:00.837 "base_bdevs_list": [ 00:30:00.837 { 00:30:00.837 "name": "spare", 00:30:00.837 "uuid": "77a3396f-f53f-560d-99d8-1143477d2bab", 00:30:00.837 "is_configured": true, 00:30:00.837 "data_offset": 2048, 00:30:00.837 "data_size": 63488 00:30:00.837 }, 00:30:00.837 { 00:30:00.837 "name": "BaseBdev2", 00:30:00.837 "uuid": "cd388670-9d4b-55f3-8e3d-b08fc92a87d2", 00:30:00.837 "is_configured": true, 00:30:00.837 "data_offset": 2048, 00:30:00.837 "data_size": 63488 00:30:00.837 } 00:30:00.837 ] 00:30:00.837 }' 00:30:00.837 00:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:00.837 00:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:00.837 00:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:00.837 00:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:01.095 00:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:30:01.095 00:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:01.353 00:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:30:01.354 00:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:01.612 [2024-07-25 00:56:24.008615] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:01.612 00:56:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:01.612 00:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:01.612 00:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:01.612 00:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:01.612 00:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:01.612 00:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:01.612 00:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:01.612 00:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:01.612 00:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:01.612 00:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:01.612 00:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:01.612 00:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:01.871 00:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:01.871 "name": "raid_bdev1", 00:30:01.871 "uuid": "36f18293-1a0d-472f-8156-d5c1487dae9f", 00:30:01.871 "strip_size_kb": 0, 00:30:01.871 "state": "online", 00:30:01.871 "raid_level": "raid1", 00:30:01.871 "superblock": true, 00:30:01.871 "num_base_bdevs": 2, 00:30:01.871 "num_base_bdevs_discovered": 1, 00:30:01.871 "num_base_bdevs_operational": 1, 00:30:01.871 "base_bdevs_list": [ 00:30:01.871 { 00:30:01.871 "name": null, 00:30:01.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:01.871 "is_configured": false, 00:30:01.871 "data_offset": 2048, 00:30:01.871 "data_size": 63488 00:30:01.871 }, 00:30:01.871 { 00:30:01.871 "name": "BaseBdev2", 00:30:01.872 "uuid": "cd388670-9d4b-55f3-8e3d-b08fc92a87d2", 00:30:01.872 "is_configured": true, 00:30:01.872 "data_offset": 2048, 00:30:01.872 "data_size": 63488 00:30:01.872 } 00:30:01.872 ] 00:30:01.872 }' 00:30:01.872 00:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:01.872 00:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:02.439 00:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:02.439 [2024-07-25 00:56:25.080935] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:02.439 [2024-07-25 00:56:25.081291] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:30:02.439 [2024-07-25 00:56:25.081397] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:30:02.439 [2024-07-25 00:56:25.081523] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:02.700 [2024-07-25 00:56:25.097107] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b340 00:30:02.700 [2024-07-25 00:56:25.099168] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:02.700 00:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:30:03.657 00:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:03.657 00:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:03.657 00:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:03.657 00:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:03.657 00:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:03.657 00:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:03.657 00:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:03.916 00:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:03.916 "name": "raid_bdev1", 00:30:03.916 "uuid": "36f18293-1a0d-472f-8156-d5c1487dae9f", 00:30:03.916 "strip_size_kb": 0, 00:30:03.916 "state": "online", 00:30:03.916 "raid_level": "raid1", 00:30:03.916 "superblock": true, 00:30:03.916 "num_base_bdevs": 2, 00:30:03.916 "num_base_bdevs_discovered": 2, 00:30:03.916 "num_base_bdevs_operational": 2, 00:30:03.916 "process": { 00:30:03.916 "type": "rebuild", 00:30:03.916 "target": "spare", 00:30:03.916 "progress": { 00:30:03.916 "blocks": 24576, 00:30:03.916 "percent": 38 00:30:03.916 } 00:30:03.916 }, 00:30:03.916 "base_bdevs_list": [ 00:30:03.916 { 00:30:03.916 "name": "spare", 00:30:03.916 "uuid": "77a3396f-f53f-560d-99d8-1143477d2bab", 00:30:03.916 "is_configured": true, 00:30:03.916 "data_offset": 2048, 00:30:03.916 "data_size": 63488 00:30:03.916 }, 00:30:03.916 { 00:30:03.916 "name": "BaseBdev2", 00:30:03.916 "uuid": "cd388670-9d4b-55f3-8e3d-b08fc92a87d2", 00:30:03.916 "is_configured": true, 00:30:03.916 "data_offset": 2048, 00:30:03.916 "data_size": 63488 00:30:03.916 } 00:30:03.916 ] 00:30:03.916 }' 00:30:03.916 00:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:03.916 00:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:03.916 00:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:03.916 00:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:03.916 00:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:04.176 [2024-07-25 00:56:26.685100] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:04.176 [2024-07-25 00:56:26.708583] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:04.176 [2024-07-25 00:56:26.708786] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:30:04.176 [2024-07-25 00:56:26.708838] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:04.176 [2024-07-25 00:56:26.708915] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:04.176 00:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:04.176 00:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:04.176 00:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:04.176 00:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:04.176 00:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:04.176 00:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:04.176 00:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:04.176 00:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:04.176 00:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:04.176 00:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:04.176 00:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:04.176 00:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:04.435 00:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:04.435 "name": "raid_bdev1", 00:30:04.435 "uuid": "36f18293-1a0d-472f-8156-d5c1487dae9f", 00:30:04.435 "strip_size_kb": 0, 00:30:04.435 "state": "online", 00:30:04.435 "raid_level": "raid1", 00:30:04.435 "superblock": true, 00:30:04.435 "num_base_bdevs": 2, 00:30:04.435 "num_base_bdevs_discovered": 1, 00:30:04.435 "num_base_bdevs_operational": 1, 00:30:04.435 "base_bdevs_list": [ 00:30:04.435 { 00:30:04.435 "name": null, 00:30:04.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:04.435 "is_configured": false, 00:30:04.435 "data_offset": 2048, 00:30:04.435 "data_size": 63488 00:30:04.435 }, 00:30:04.435 { 00:30:04.435 "name": "BaseBdev2", 00:30:04.435 "uuid": "cd388670-9d4b-55f3-8e3d-b08fc92a87d2", 00:30:04.435 "is_configured": true, 00:30:04.435 "data_offset": 2048, 00:30:04.435 "data_size": 63488 00:30:04.435 } 00:30:04.435 ] 00:30:04.435 }' 00:30:04.435 00:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:04.435 00:56:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:05.003 00:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:05.262 [2024-07-25 00:56:27.783017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:05.262 [2024-07-25 00:56:27.783294] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:05.262 [2024-07-25 00:56:27.783371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:30:05.262 [2024-07-25 00:56:27.783531] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:05.262 [2024-07-25 00:56:27.784077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:05.262 [2024-07-25 00:56:27.784240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:05.262 [2024-07-25 00:56:27.784450] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:05.262 [2024-07-25 00:56:27.784540] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:30:05.262 [2024-07-25 00:56:27.784642] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:30:05.262 [2024-07-25 00:56:27.784732] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:05.262 [2024-07-25 00:56:27.800094] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:30:05.262 spare 00:30:05.262 [2024-07-25 00:56:27.802157] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:05.262 00:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:30:06.200 00:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:06.200 00:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:06.200 00:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:06.200 00:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:06.200 00:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:06.200 00:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:06.200 00:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:06.459 00:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:06.459 "name": "raid_bdev1", 00:30:06.459 "uuid": "36f18293-1a0d-472f-8156-d5c1487dae9f", 00:30:06.459 "strip_size_kb": 0, 00:30:06.459 "state": "online", 00:30:06.459 "raid_level": "raid1", 00:30:06.459 "superblock": true, 00:30:06.459 "num_base_bdevs": 2, 00:30:06.459 "num_base_bdevs_discovered": 2, 00:30:06.459 "num_base_bdevs_operational": 2, 00:30:06.459 "process": { 00:30:06.459 "type": "rebuild", 00:30:06.459 "target": "spare", 00:30:06.459 "progress": { 00:30:06.459 "blocks": 24576, 00:30:06.459 "percent": 38 00:30:06.459 } 00:30:06.459 }, 00:30:06.459 "base_bdevs_list": [ 00:30:06.459 { 00:30:06.459 "name": "spare", 00:30:06.459 "uuid": "77a3396f-f53f-560d-99d8-1143477d2bab", 00:30:06.459 "is_configured": true, 00:30:06.459 "data_offset": 2048, 00:30:06.459 "data_size": 63488 00:30:06.459 }, 00:30:06.459 { 00:30:06.459 "name": "BaseBdev2", 00:30:06.459 "uuid": "cd388670-9d4b-55f3-8e3d-b08fc92a87d2", 00:30:06.459 "is_configured": true, 00:30:06.459 "data_offset": 2048, 00:30:06.459 "data_size": 63488 00:30:06.459 } 00:30:06.459 ] 00:30:06.459 }' 00:30:06.459 00:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:06.718 00:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:30:06.718 00:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:06.718 00:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:06.718 00:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:06.718 [2024-07-25 00:56:29.339960] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:06.977 [2024-07-25 00:56:29.411527] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:06.977 [2024-07-25 00:56:29.411746] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:06.977 [2024-07-25 00:56:29.411798] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:06.977 [2024-07-25 00:56:29.411876] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:06.977 00:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:06.977 00:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:06.977 00:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:06.977 00:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:06.977 00:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:06.977 00:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:06.977 00:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:06.977 00:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:06.977 00:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:06.977 00:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:06.977 00:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:06.977 00:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:07.236 00:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:07.236 "name": "raid_bdev1", 00:30:07.236 "uuid": "36f18293-1a0d-472f-8156-d5c1487dae9f", 00:30:07.236 "strip_size_kb": 0, 00:30:07.236 "state": "online", 00:30:07.236 "raid_level": "raid1", 00:30:07.236 "superblock": true, 00:30:07.236 "num_base_bdevs": 2, 00:30:07.236 "num_base_bdevs_discovered": 1, 00:30:07.236 "num_base_bdevs_operational": 1, 00:30:07.236 "base_bdevs_list": [ 00:30:07.236 { 00:30:07.236 "name": null, 00:30:07.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.236 "is_configured": false, 00:30:07.236 "data_offset": 2048, 00:30:07.236 "data_size": 63488 00:30:07.236 }, 00:30:07.236 { 00:30:07.236 "name": "BaseBdev2", 00:30:07.236 "uuid": "cd388670-9d4b-55f3-8e3d-b08fc92a87d2", 00:30:07.236 "is_configured": true, 00:30:07.236 "data_offset": 2048, 00:30:07.236 "data_size": 63488 00:30:07.236 } 00:30:07.236 ] 00:30:07.236 }' 00:30:07.236 00:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:30:07.236 00:56:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:07.804 00:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:07.804 00:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:07.804 00:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:07.804 00:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:07.804 00:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:07.804 00:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:07.804 00:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:08.062 00:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:08.062 "name": "raid_bdev1", 00:30:08.062 "uuid": "36f18293-1a0d-472f-8156-d5c1487dae9f", 00:30:08.062 "strip_size_kb": 0, 00:30:08.062 "state": "online", 00:30:08.062 "raid_level": "raid1", 00:30:08.062 "superblock": true, 00:30:08.062 "num_base_bdevs": 2, 00:30:08.062 "num_base_bdevs_discovered": 1, 00:30:08.062 "num_base_bdevs_operational": 1, 00:30:08.062 "base_bdevs_list": [ 00:30:08.062 { 00:30:08.062 "name": null, 00:30:08.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:08.062 "is_configured": false, 00:30:08.062 "data_offset": 2048, 00:30:08.062 "data_size": 63488 00:30:08.062 }, 00:30:08.062 { 00:30:08.062 "name": "BaseBdev2", 00:30:08.062 "uuid": "cd388670-9d4b-55f3-8e3d-b08fc92a87d2", 00:30:08.062 "is_configured": true, 00:30:08.062 "data_offset": 2048, 00:30:08.062 "data_size": 63488 00:30:08.062 } 00:30:08.062 ] 00:30:08.062 }' 00:30:08.062 00:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:08.062 00:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:08.063 00:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:08.063 00:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:08.063 00:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:30:08.322 00:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:08.322 [2024-07-25 00:56:30.918563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:08.322 [2024-07-25 00:56:30.918841] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:08.322 [2024-07-25 00:56:30.918922] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:30:08.322 [2024-07-25 00:56:30.919031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:08.322 [2024-07-25 00:56:30.919528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:08.322 [2024-07-25 00:56:30.919678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev1 00:30:08.322 [2024-07-25 00:56:30.919895] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:30:08.322 [2024-07-25 00:56:30.919998] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:30:08.322 [2024-07-25 00:56:30.920076] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:08.322 BaseBdev1 00:30:08.322 00:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:30:09.698 00:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:09.698 00:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:09.698 00:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:09.698 00:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:09.698 00:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:09.698 00:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:09.698 00:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:09.698 00:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:09.698 00:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:09.698 00:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:09.698 00:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:09.698 00:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:09.698 00:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:09.698 "name": "raid_bdev1", 00:30:09.698 "uuid": "36f18293-1a0d-472f-8156-d5c1487dae9f", 00:30:09.698 "strip_size_kb": 0, 00:30:09.698 "state": "online", 00:30:09.698 "raid_level": "raid1", 00:30:09.698 "superblock": true, 00:30:09.698 "num_base_bdevs": 2, 00:30:09.698 "num_base_bdevs_discovered": 1, 00:30:09.698 "num_base_bdevs_operational": 1, 00:30:09.698 "base_bdevs_list": [ 00:30:09.698 { 00:30:09.698 "name": null, 00:30:09.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.698 "is_configured": false, 00:30:09.698 "data_offset": 2048, 00:30:09.698 "data_size": 63488 00:30:09.698 }, 00:30:09.698 { 00:30:09.698 "name": "BaseBdev2", 00:30:09.698 "uuid": "cd388670-9d4b-55f3-8e3d-b08fc92a87d2", 00:30:09.698 "is_configured": true, 00:30:09.698 "data_offset": 2048, 00:30:09.698 "data_size": 63488 00:30:09.698 } 00:30:09.698 ] 00:30:09.698 }' 00:30:09.698 00:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:09.698 00:56:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:10.266 00:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:10.266 00:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:10.266 00:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:30:10.266 00:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:10.266 00:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:10.266 00:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:10.266 00:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:10.525 00:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:10.525 "name": "raid_bdev1", 00:30:10.525 "uuid": "36f18293-1a0d-472f-8156-d5c1487dae9f", 00:30:10.525 "strip_size_kb": 0, 00:30:10.525 "state": "online", 00:30:10.525 "raid_level": "raid1", 00:30:10.525 "superblock": true, 00:30:10.525 "num_base_bdevs": 2, 00:30:10.525 "num_base_bdevs_discovered": 1, 00:30:10.525 "num_base_bdevs_operational": 1, 00:30:10.525 "base_bdevs_list": [ 00:30:10.525 { 00:30:10.526 "name": null, 00:30:10.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:10.526 "is_configured": false, 00:30:10.526 "data_offset": 2048, 00:30:10.526 "data_size": 63488 00:30:10.526 }, 00:30:10.526 { 00:30:10.526 "name": "BaseBdev2", 00:30:10.526 "uuid": "cd388670-9d4b-55f3-8e3d-b08fc92a87d2", 00:30:10.526 "is_configured": true, 00:30:10.526 "data_offset": 2048, 00:30:10.526 "data_size": 63488 00:30:10.526 } 00:30:10.526 ] 00:30:10.526 }' 00:30:10.526 00:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:10.526 00:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:10.526 00:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:10.526 00:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:10.526 00:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:10.526 00:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:30:10.526 00:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:10.526 00:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:10.526 00:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:10.526 00:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:10.526 00:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:10.526 00:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:10.526 00:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:10.526 00:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:10.526 00:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:30:10.526 00:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:10.785 [2024-07-25 00:56:33.231395] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:10.785 [2024-07-25 00:56:33.231712] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:30:10.785 [2024-07-25 00:56:33.231845] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:10.785 request: 00:30:10.785 { 00:30:10.785 "base_bdev": "BaseBdev1", 00:30:10.785 "raid_bdev": "raid_bdev1", 00:30:10.785 "method": "bdev_raid_add_base_bdev", 00:30:10.785 "req_id": 1 00:30:10.785 } 00:30:10.785 Got JSON-RPC error response 00:30:10.785 response: 00:30:10.785 { 00:30:10.785 "code": -22, 00:30:10.785 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:30:10.785 } 00:30:10.785 00:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:30:10.785 00:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:10.785 00:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:10.785 00:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:10.785 00:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:30:11.720 00:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:11.720 00:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:11.720 00:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:11.720 00:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:11.720 00:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:11.720 00:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:11.720 00:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:11.720 00:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:11.720 00:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:11.720 00:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:11.720 00:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:11.720 00:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:11.979 00:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:11.979 "name": "raid_bdev1", 00:30:11.979 "uuid": "36f18293-1a0d-472f-8156-d5c1487dae9f", 00:30:11.979 "strip_size_kb": 0, 00:30:11.979 "state": "online", 00:30:11.979 "raid_level": "raid1", 00:30:11.979 "superblock": true, 00:30:11.979 "num_base_bdevs": 2, 00:30:11.979 "num_base_bdevs_discovered": 1, 00:30:11.979 "num_base_bdevs_operational": 1, 00:30:11.979 
"base_bdevs_list": [ 00:30:11.979 { 00:30:11.979 "name": null, 00:30:11.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:11.979 "is_configured": false, 00:30:11.979 "data_offset": 2048, 00:30:11.979 "data_size": 63488 00:30:11.979 }, 00:30:11.979 { 00:30:11.979 "name": "BaseBdev2", 00:30:11.979 "uuid": "cd388670-9d4b-55f3-8e3d-b08fc92a87d2", 00:30:11.979 "is_configured": true, 00:30:11.979 "data_offset": 2048, 00:30:11.979 "data_size": 63488 00:30:11.979 } 00:30:11.979 ] 00:30:11.979 }' 00:30:11.979 00:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:11.979 00:56:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:12.550 00:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:12.550 00:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:12.550 00:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:12.550 00:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:12.550 00:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:12.550 00:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:12.550 00:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:12.809 00:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:12.809 "name": "raid_bdev1", 00:30:12.809 "uuid": "36f18293-1a0d-472f-8156-d5c1487dae9f", 00:30:12.809 "strip_size_kb": 0, 00:30:12.809 "state": "online", 00:30:12.809 "raid_level": "raid1", 00:30:12.809 "superblock": true, 00:30:12.809 "num_base_bdevs": 2, 00:30:12.809 "num_base_bdevs_discovered": 1, 00:30:12.809 "num_base_bdevs_operational": 1, 00:30:12.809 "base_bdevs_list": [ 00:30:12.809 { 00:30:12.809 "name": null, 00:30:12.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:12.809 "is_configured": false, 00:30:12.809 "data_offset": 2048, 00:30:12.809 "data_size": 63488 00:30:12.809 }, 00:30:12.809 { 00:30:12.809 "name": "BaseBdev2", 00:30:12.809 "uuid": "cd388670-9d4b-55f3-8e3d-b08fc92a87d2", 00:30:12.809 "is_configured": true, 00:30:12.809 "data_offset": 2048, 00:30:12.809 "data_size": 63488 00:30:12.809 } 00:30:12.809 ] 00:30:12.809 }' 00:30:12.809 00:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:12.809 00:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:12.809 00:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:12.809 00:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:12.809 00:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 146563 00:30:12.809 00:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@948 -- # '[' -z 146563 ']' 00:30:12.809 00:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # kill -0 146563 00:30:12.809 00:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # uname 00:30:12.809 00:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:30:12.809 00:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 146563 00:30:12.809 killing process with pid 146563 00:30:12.809 Received shutdown signal, test time was about 24.821781 seconds 00:30:12.809 00:30:12.809 Latency(us) 00:30:12.809 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:12.809 =================================================================================================================== 00:30:12.809 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:12.809 00:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:12.809 00:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:12.809 00:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 146563' 00:30:12.809 00:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # kill 146563 00:30:12.809 00:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # wait 146563 00:30:12.809 [2024-07-25 00:56:35.374806] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:12.809 [2024-07-25 00:56:35.374929] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:12.809 [2024-07-25 00:56:35.375094] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:12.809 [2024-07-25 00:56:35.375194] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:30:13.069 [2024-07-25 00:56:35.587649] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:14.445 ************************************ 00:30:14.445 END TEST raid_rebuild_test_sb_io 00:30:14.445 ************************************ 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:30:14.445 00:30:14.445 real 0m30.147s 00:30:14.445 user 0m46.618s 00:30:14.445 sys 0m3.870s 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:14.445 00:56:36 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:30:14.445 00:56:36 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:30:14.445 00:56:36 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:30:14.445 00:56:36 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:14.445 00:56:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:14.445 ************************************ 00:30:14.445 START TEST raid_rebuild_test 00:30:14.445 ************************************ 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 false false true 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 
00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=147409 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 147409 /var/tmp/spdk-raid.sock 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 147409 ']' 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:14.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:14.445 00:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:14.445 [2024-07-25 00:56:37.022606] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:30:14.445 [2024-07-25 00:56:37.023023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147409 ] 00:30:14.445 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:14.445 Zero copy mechanism will not be used. 00:30:14.703 [2024-07-25 00:56:37.203121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:14.962 [2024-07-25 00:56:37.383934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:14.962 [2024-07-25 00:56:37.566542] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:15.528 00:56:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:15.528 00:56:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:30:15.528 00:56:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:15.528 00:56:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:15.528 BaseBdev1_malloc 00:30:15.529 00:56:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:15.787 [2024-07-25 00:56:38.391641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:15.787 [2024-07-25 00:56:38.391899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:15.787 [2024-07-25 00:56:38.391993] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:30:15.787 [2024-07-25 00:56:38.392181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:15.787 [2024-07-25 00:56:38.394570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:15.787 [2024-07-25 00:56:38.394754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:15.787 BaseBdev1 00:30:15.787 00:56:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:15.787 00:56:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:16.045 BaseBdev2_malloc 00:30:16.045 00:56:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:16.304 [2024-07-25 00:56:38.792103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:30:16.304 [2024-07-25 00:56:38.792396] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:16.304 [2024-07-25 00:56:38.792492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:30:16.304 [2024-07-25 00:56:38.792607] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:16.304 [2024-07-25 00:56:38.794880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:16.304 [2024-07-25 00:56:38.795076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:16.304 BaseBdev2 00:30:16.304 00:56:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:16.304 00:56:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:30:16.563 BaseBdev3_malloc 00:30:16.563 00:56:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:30:16.563 [2024-07-25 00:56:39.169136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:30:16.563 [2024-07-25 00:56:39.169418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:16.563 [2024-07-25 00:56:39.169499] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:30:16.563 [2024-07-25 00:56:39.169599] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:16.563 [2024-07-25 00:56:39.171970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:16.563 [2024-07-25 00:56:39.172148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:30:16.563 BaseBdev3 00:30:16.563 00:56:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:16.563 00:56:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:30:16.820 BaseBdev4_malloc 00:30:16.821 00:56:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:30:17.079 [2024-07-25 00:56:39.632984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:30:17.079 [2024-07-25 00:56:39.633276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:17.079 [2024-07-25 00:56:39.633423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:30:17.079 [2024-07-25 00:56:39.633544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:17.079 [2024-07-25 00:56:39.635924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:17.079 [2024-07-25 00:56:39.636091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:30:17.079 BaseBdev4 00:30:17.079 00:56:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:30:17.365 spare_malloc 00:30:17.365 00:56:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:17.365 spare_delay 00:30:17.625 00:56:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:17.625 [2024-07-25 00:56:40.184501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:17.625 [2024-07-25 00:56:40.184803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:17.625 [2024-07-25 00:56:40.184875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:30:17.625 [2024-07-25 00:56:40.185000] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:17.625 [2024-07-25 00:56:40.187391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:17.625 [2024-07-25 00:56:40.187569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:17.625 spare 00:30:17.625 00:56:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:30:17.883 [2024-07-25 00:56:40.356575] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:17.883 [2024-07-25 00:56:40.358628] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:17.883 [2024-07-25 00:56:40.358847] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:17.883 [2024-07-25 00:56:40.358939] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:17.883 [2024-07-25 00:56:40.359128] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:30:17.883 [2024-07-25 00:56:40.359172] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:30:17.883 [2024-07-25 00:56:40.359409] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:30:17.883 [2024-07-25 00:56:40.359874] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:30:17.883 [2024-07-25 00:56:40.359984] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:30:17.883 [2024-07-25 00:56:40.360229] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:17.883 00:56:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:30:17.883 00:56:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:17.883 00:56:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:17.883 00:56:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:17.883 00:56:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:17.883 00:56:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:17.883 00:56:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:17.883 00:56:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:17.883 00:56:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:17.883 00:56:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:17.883 00:56:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:30:17.883 00:56:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:18.141 00:56:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:18.141 "name": "raid_bdev1", 00:30:18.141 "uuid": "4aee73ce-0d8c-4c29-bc85-7d43392a4245", 00:30:18.141 "strip_size_kb": 0, 00:30:18.141 "state": "online", 00:30:18.141 "raid_level": "raid1", 00:30:18.141 "superblock": false, 00:30:18.142 "num_base_bdevs": 4, 00:30:18.142 "num_base_bdevs_discovered": 4, 00:30:18.142 "num_base_bdevs_operational": 4, 00:30:18.142 "base_bdevs_list": [ 00:30:18.142 { 00:30:18.142 "name": "BaseBdev1", 00:30:18.142 "uuid": "cd87b1ae-0d1f-5b8b-bac9-cd03ac4d07ad", 00:30:18.142 "is_configured": true, 00:30:18.142 "data_offset": 0, 00:30:18.142 "data_size": 65536 00:30:18.142 }, 00:30:18.142 { 00:30:18.142 "name": "BaseBdev2", 00:30:18.142 "uuid": "46d9045a-6945-583d-bd90-3623f59bae3f", 00:30:18.142 "is_configured": true, 00:30:18.142 "data_offset": 0, 00:30:18.142 "data_size": 65536 00:30:18.142 }, 00:30:18.142 { 00:30:18.142 "name": "BaseBdev3", 00:30:18.142 "uuid": "11d27add-4dd9-557e-afe0-de96410447b5", 00:30:18.142 "is_configured": true, 00:30:18.142 "data_offset": 0, 00:30:18.142 "data_size": 65536 00:30:18.142 }, 00:30:18.142 { 00:30:18.142 "name": "BaseBdev4", 00:30:18.142 "uuid": "e6657e4a-65bd-59ff-8b82-4938b9f9a2c9", 00:30:18.142 "is_configured": true, 00:30:18.142 "data_offset": 0, 00:30:18.142 "data_size": 65536 00:30:18.142 } 00:30:18.142 ] 00:30:18.142 }' 00:30:18.142 00:56:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:18.142 00:56:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:18.709 00:56:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:18.709 00:56:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:30:18.709 [2024-07-25 00:56:41.340935] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:18.709 00:56:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:30:18.968 00:56:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:18.968 00:56:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:18.968 00:56:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:30:18.968 00:56:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:30:18.968 00:56:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:30:18.968 00:56:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:30:18.968 00:56:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:30:18.968 00:56:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:18.968 00:56:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:30:18.968 00:56:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:18.968 00:56:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:30:18.968 00:56:41 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:18.968 00:56:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:30:18.968 00:56:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:18.968 00:56:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:18.968 00:56:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:30:19.227 [2024-07-25 00:56:41.832851] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:30:19.227 /dev/nbd0 00:30:19.485 00:56:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:19.485 00:56:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:19.485 00:56:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:30:19.485 00:56:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:30:19.485 00:56:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:19.485 00:56:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:19.485 00:56:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:30:19.485 00:56:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:30:19.485 00:56:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:19.485 00:56:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:19.485 00:56:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:19.485 1+0 records in 00:30:19.485 1+0 records out 00:30:19.485 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361568 s, 11.3 MB/s 00:30:19.485 00:56:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:19.485 00:56:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:30:19.485 00:56:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:19.485 00:56:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:19.485 00:56:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:30:19.485 00:56:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:19.485 00:56:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:19.485 00:56:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:30:19.485 00:56:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:30:19.485 00:56:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:30:26.091 65536+0 records in 00:30:26.091 65536+0 records out 00:30:26.091 33554432 bytes (34 MB, 32 MiB) copied, 5.69154 s, 5.9 MB/s 00:30:26.091 00:56:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:30:26.091 00:56:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:26.091 00:56:47 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:26.091 00:56:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:26.091 00:56:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:30:26.091 00:56:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:26.091 00:56:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:26.091 00:56:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:26.091 [2024-07-25 00:56:47.855491] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:26.091 00:56:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:26.091 00:56:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:26.091 00:56:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:26.091 00:56:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:26.091 00:56:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:26.091 00:56:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:30:26.091 00:56:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:30:26.091 00:56:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:30:26.091 [2024-07-25 00:56:48.083264] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:26.091 00:56:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:26.091 00:56:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:26.091 00:56:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:26.091 00:56:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:26.091 00:56:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:26.091 00:56:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:26.091 00:56:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:26.091 00:56:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:26.091 00:56:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:26.091 00:56:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:26.091 00:56:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:26.091 00:56:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:26.091 00:56:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:26.091 "name": "raid_bdev1", 00:30:26.091 "uuid": "4aee73ce-0d8c-4c29-bc85-7d43392a4245", 00:30:26.091 "strip_size_kb": 0, 00:30:26.091 "state": "online", 00:30:26.091 "raid_level": "raid1", 00:30:26.091 "superblock": false, 00:30:26.091 "num_base_bdevs": 4, 00:30:26.091 "num_base_bdevs_discovered": 3, 00:30:26.091 "num_base_bdevs_operational": 3, 
00:30:26.091 "base_bdevs_list": [ 00:30:26.091 { 00:30:26.091 "name": null, 00:30:26.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:26.091 "is_configured": false, 00:30:26.091 "data_offset": 0, 00:30:26.091 "data_size": 65536 00:30:26.091 }, 00:30:26.091 { 00:30:26.091 "name": "BaseBdev2", 00:30:26.091 "uuid": "46d9045a-6945-583d-bd90-3623f59bae3f", 00:30:26.091 "is_configured": true, 00:30:26.091 "data_offset": 0, 00:30:26.091 "data_size": 65536 00:30:26.091 }, 00:30:26.091 { 00:30:26.091 "name": "BaseBdev3", 00:30:26.091 "uuid": "11d27add-4dd9-557e-afe0-de96410447b5", 00:30:26.091 "is_configured": true, 00:30:26.091 "data_offset": 0, 00:30:26.091 "data_size": 65536 00:30:26.091 }, 00:30:26.091 { 00:30:26.091 "name": "BaseBdev4", 00:30:26.091 "uuid": "e6657e4a-65bd-59ff-8b82-4938b9f9a2c9", 00:30:26.091 "is_configured": true, 00:30:26.091 "data_offset": 0, 00:30:26.091 "data_size": 65536 00:30:26.091 } 00:30:26.091 ] 00:30:26.091 }' 00:30:26.091 00:56:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:26.091 00:56:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.349 00:56:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:26.608 [2024-07-25 00:56:49.095437] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:26.608 [2024-07-25 00:56:49.108520] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:30:26.608 [2024-07-25 00:56:49.110665] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:26.608 00:56:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:30:27.543 00:56:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:27.543 00:56:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:27.543 00:56:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:27.543 00:56:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:27.543 00:56:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:27.543 00:56:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:27.543 00:56:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:27.802 00:56:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:27.802 "name": "raid_bdev1", 00:30:27.802 "uuid": "4aee73ce-0d8c-4c29-bc85-7d43392a4245", 00:30:27.802 "strip_size_kb": 0, 00:30:27.802 "state": "online", 00:30:27.802 "raid_level": "raid1", 00:30:27.802 "superblock": false, 00:30:27.802 "num_base_bdevs": 4, 00:30:27.802 "num_base_bdevs_discovered": 4, 00:30:27.802 "num_base_bdevs_operational": 4, 00:30:27.802 "process": { 00:30:27.802 "type": "rebuild", 00:30:27.802 "target": "spare", 00:30:27.802 "progress": { 00:30:27.802 "blocks": 24576, 00:30:27.802 "percent": 37 00:30:27.802 } 00:30:27.802 }, 00:30:27.802 "base_bdevs_list": [ 00:30:27.802 { 00:30:27.802 "name": "spare", 00:30:27.802 "uuid": "52987a1e-fdeb-5994-8158-93cd55d4898e", 00:30:27.802 "is_configured": true, 00:30:27.802 "data_offset": 0, 00:30:27.802 "data_size": 
65536 00:30:27.802 }, 00:30:27.802 { 00:30:27.802 "name": "BaseBdev2", 00:30:27.802 "uuid": "46d9045a-6945-583d-bd90-3623f59bae3f", 00:30:27.802 "is_configured": true, 00:30:27.802 "data_offset": 0, 00:30:27.802 "data_size": 65536 00:30:27.802 }, 00:30:27.802 { 00:30:27.802 "name": "BaseBdev3", 00:30:27.802 "uuid": "11d27add-4dd9-557e-afe0-de96410447b5", 00:30:27.802 "is_configured": true, 00:30:27.802 "data_offset": 0, 00:30:27.802 "data_size": 65536 00:30:27.802 }, 00:30:27.802 { 00:30:27.802 "name": "BaseBdev4", 00:30:27.802 "uuid": "e6657e4a-65bd-59ff-8b82-4938b9f9a2c9", 00:30:27.802 "is_configured": true, 00:30:27.802 "data_offset": 0, 00:30:27.802 "data_size": 65536 00:30:27.802 } 00:30:27.802 ] 00:30:27.802 }' 00:30:27.802 00:56:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:27.802 00:56:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:27.802 00:56:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:27.802 00:56:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:27.802 00:56:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:28.061 [2024-07-25 00:56:50.664773] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:28.320 [2024-07-25 00:56:50.719860] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:28.320 [2024-07-25 00:56:50.720100] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:28.320 [2024-07-25 00:56:50.720156] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:28.320 [2024-07-25 00:56:50.720240] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:28.320 00:56:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:28.320 00:56:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:28.320 00:56:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:28.320 00:56:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:28.320 00:56:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:28.320 00:56:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:28.320 00:56:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:28.320 00:56:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:28.320 00:56:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:28.320 00:56:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:28.320 00:56:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:28.320 00:56:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:28.580 00:56:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:28.580 "name": "raid_bdev1", 00:30:28.580 "uuid": "4aee73ce-0d8c-4c29-bc85-7d43392a4245", 
00:30:28.580 "strip_size_kb": 0, 00:30:28.580 "state": "online", 00:30:28.580 "raid_level": "raid1", 00:30:28.580 "superblock": false, 00:30:28.580 "num_base_bdevs": 4, 00:30:28.580 "num_base_bdevs_discovered": 3, 00:30:28.580 "num_base_bdevs_operational": 3, 00:30:28.580 "base_bdevs_list": [ 00:30:28.580 { 00:30:28.580 "name": null, 00:30:28.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:28.580 "is_configured": false, 00:30:28.580 "data_offset": 0, 00:30:28.580 "data_size": 65536 00:30:28.580 }, 00:30:28.580 { 00:30:28.580 "name": "BaseBdev2", 00:30:28.580 "uuid": "46d9045a-6945-583d-bd90-3623f59bae3f", 00:30:28.580 "is_configured": true, 00:30:28.580 "data_offset": 0, 00:30:28.580 "data_size": 65536 00:30:28.580 }, 00:30:28.580 { 00:30:28.580 "name": "BaseBdev3", 00:30:28.580 "uuid": "11d27add-4dd9-557e-afe0-de96410447b5", 00:30:28.580 "is_configured": true, 00:30:28.580 "data_offset": 0, 00:30:28.580 "data_size": 65536 00:30:28.580 }, 00:30:28.580 { 00:30:28.580 "name": "BaseBdev4", 00:30:28.580 "uuid": "e6657e4a-65bd-59ff-8b82-4938b9f9a2c9", 00:30:28.580 "is_configured": true, 00:30:28.580 "data_offset": 0, 00:30:28.580 "data_size": 65536 00:30:28.580 } 00:30:28.580 ] 00:30:28.580 }' 00:30:28.580 00:56:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:28.580 00:56:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:29.148 00:56:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:29.148 00:56:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:29.148 00:56:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:29.148 00:56:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:29.148 00:56:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:29.148 00:56:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:29.148 00:56:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:29.407 00:56:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:29.407 "name": "raid_bdev1", 00:30:29.407 "uuid": "4aee73ce-0d8c-4c29-bc85-7d43392a4245", 00:30:29.407 "strip_size_kb": 0, 00:30:29.407 "state": "online", 00:30:29.407 "raid_level": "raid1", 00:30:29.407 "superblock": false, 00:30:29.407 "num_base_bdevs": 4, 00:30:29.407 "num_base_bdevs_discovered": 3, 00:30:29.407 "num_base_bdevs_operational": 3, 00:30:29.407 "base_bdevs_list": [ 00:30:29.407 { 00:30:29.407 "name": null, 00:30:29.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:29.407 "is_configured": false, 00:30:29.407 "data_offset": 0, 00:30:29.407 "data_size": 65536 00:30:29.407 }, 00:30:29.407 { 00:30:29.407 "name": "BaseBdev2", 00:30:29.407 "uuid": "46d9045a-6945-583d-bd90-3623f59bae3f", 00:30:29.407 "is_configured": true, 00:30:29.407 "data_offset": 0, 00:30:29.407 "data_size": 65536 00:30:29.407 }, 00:30:29.407 { 00:30:29.407 "name": "BaseBdev3", 00:30:29.407 "uuid": "11d27add-4dd9-557e-afe0-de96410447b5", 00:30:29.407 "is_configured": true, 00:30:29.407 "data_offset": 0, 00:30:29.407 "data_size": 65536 00:30:29.407 }, 00:30:29.407 { 00:30:29.407 "name": "BaseBdev4", 00:30:29.407 "uuid": "e6657e4a-65bd-59ff-8b82-4938b9f9a2c9", 00:30:29.407 "is_configured": true, 
00:30:29.407 "data_offset": 0, 00:30:29.407 "data_size": 65536 00:30:29.407 } 00:30:29.407 ] 00:30:29.407 }' 00:30:29.407 00:56:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:29.407 00:56:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:29.407 00:56:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:29.407 00:56:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:29.407 00:56:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:29.666 [2024-07-25 00:56:52.152642] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:29.666 [2024-07-25 00:56:52.166997] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:30:29.666 [2024-07-25 00:56:52.169020] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:29.666 00:56:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:30:30.601 00:56:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:30.601 00:56:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:30.601 00:56:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:30.601 00:56:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:30.601 00:56:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:30.601 00:56:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:30.601 00:56:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:30.860 00:56:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:30.860 "name": "raid_bdev1", 00:30:30.860 "uuid": "4aee73ce-0d8c-4c29-bc85-7d43392a4245", 00:30:30.860 "strip_size_kb": 0, 00:30:30.860 "state": "online", 00:30:30.860 "raid_level": "raid1", 00:30:30.860 "superblock": false, 00:30:30.860 "num_base_bdevs": 4, 00:30:30.860 "num_base_bdevs_discovered": 4, 00:30:30.860 "num_base_bdevs_operational": 4, 00:30:30.860 "process": { 00:30:30.860 "type": "rebuild", 00:30:30.860 "target": "spare", 00:30:30.860 "progress": { 00:30:30.860 "blocks": 24576, 00:30:30.860 "percent": 37 00:30:30.860 } 00:30:30.860 }, 00:30:30.860 "base_bdevs_list": [ 00:30:30.860 { 00:30:30.860 "name": "spare", 00:30:30.860 "uuid": "52987a1e-fdeb-5994-8158-93cd55d4898e", 00:30:30.860 "is_configured": true, 00:30:30.860 "data_offset": 0, 00:30:30.860 "data_size": 65536 00:30:30.860 }, 00:30:30.860 { 00:30:30.860 "name": "BaseBdev2", 00:30:30.860 "uuid": "46d9045a-6945-583d-bd90-3623f59bae3f", 00:30:30.860 "is_configured": true, 00:30:30.860 "data_offset": 0, 00:30:30.860 "data_size": 65536 00:30:30.860 }, 00:30:30.860 { 00:30:30.860 "name": "BaseBdev3", 00:30:30.860 "uuid": "11d27add-4dd9-557e-afe0-de96410447b5", 00:30:30.860 "is_configured": true, 00:30:30.860 "data_offset": 0, 00:30:30.860 "data_size": 65536 00:30:30.860 }, 00:30:30.860 { 00:30:30.860 "name": "BaseBdev4", 00:30:30.860 "uuid": "e6657e4a-65bd-59ff-8b82-4938b9f9a2c9", 00:30:30.860 
"is_configured": true, 00:30:30.860 "data_offset": 0, 00:30:30.860 "data_size": 65536 00:30:30.860 } 00:30:30.860 ] 00:30:30.860 }' 00:30:30.860 00:56:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:30.860 00:56:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:30.860 00:56:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:31.119 00:56:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:31.119 00:56:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:30:31.119 00:56:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:30:31.119 00:56:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:30:31.119 00:56:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:30:31.119 00:56:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:30:31.119 [2024-07-25 00:56:53.755217] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:31.378 [2024-07-25 00:56:53.778636] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:30:31.378 00:56:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:30:31.378 00:56:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:30:31.378 00:56:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:31.378 00:56:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:31.378 00:56:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:31.378 00:56:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:31.378 00:56:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:31.378 00:56:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:31.378 00:56:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:31.637 00:56:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:31.637 "name": "raid_bdev1", 00:30:31.637 "uuid": "4aee73ce-0d8c-4c29-bc85-7d43392a4245", 00:30:31.637 "strip_size_kb": 0, 00:30:31.637 "state": "online", 00:30:31.637 "raid_level": "raid1", 00:30:31.637 "superblock": false, 00:30:31.638 "num_base_bdevs": 4, 00:30:31.638 "num_base_bdevs_discovered": 3, 00:30:31.638 "num_base_bdevs_operational": 3, 00:30:31.638 "process": { 00:30:31.638 "type": "rebuild", 00:30:31.638 "target": "spare", 00:30:31.638 "progress": { 00:30:31.638 "blocks": 36864, 00:30:31.638 "percent": 56 00:30:31.638 } 00:30:31.638 }, 00:30:31.638 "base_bdevs_list": [ 00:30:31.638 { 00:30:31.638 "name": "spare", 00:30:31.638 "uuid": "52987a1e-fdeb-5994-8158-93cd55d4898e", 00:30:31.638 "is_configured": true, 00:30:31.638 "data_offset": 0, 00:30:31.638 "data_size": 65536 00:30:31.638 }, 00:30:31.638 { 00:30:31.638 "name": null, 00:30:31.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:31.638 "is_configured": false, 00:30:31.638 
"data_offset": 0, 00:30:31.638 "data_size": 65536 00:30:31.638 }, 00:30:31.638 { 00:30:31.638 "name": "BaseBdev3", 00:30:31.638 "uuid": "11d27add-4dd9-557e-afe0-de96410447b5", 00:30:31.638 "is_configured": true, 00:30:31.638 "data_offset": 0, 00:30:31.638 "data_size": 65536 00:30:31.638 }, 00:30:31.638 { 00:30:31.638 "name": "BaseBdev4", 00:30:31.638 "uuid": "e6657e4a-65bd-59ff-8b82-4938b9f9a2c9", 00:30:31.638 "is_configured": true, 00:30:31.638 "data_offset": 0, 00:30:31.638 "data_size": 65536 00:30:31.638 } 00:30:31.638 ] 00:30:31.638 }' 00:30:31.638 00:56:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:31.638 00:56:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:31.638 00:56:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:31.638 00:56:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:31.638 00:56:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=891 00:30:31.638 00:56:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:31.638 00:56:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:31.638 00:56:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:31.638 00:56:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:31.638 00:56:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:31.638 00:56:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:31.638 00:56:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:31.638 00:56:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:31.897 00:56:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:31.897 "name": "raid_bdev1", 00:30:31.897 "uuid": "4aee73ce-0d8c-4c29-bc85-7d43392a4245", 00:30:31.897 "strip_size_kb": 0, 00:30:31.897 "state": "online", 00:30:31.897 "raid_level": "raid1", 00:30:31.897 "superblock": false, 00:30:31.897 "num_base_bdevs": 4, 00:30:31.897 "num_base_bdevs_discovered": 3, 00:30:31.897 "num_base_bdevs_operational": 3, 00:30:31.897 "process": { 00:30:31.897 "type": "rebuild", 00:30:31.897 "target": "spare", 00:30:31.897 "progress": { 00:30:31.897 "blocks": 43008, 00:30:31.897 "percent": 65 00:30:31.897 } 00:30:31.897 }, 00:30:31.897 "base_bdevs_list": [ 00:30:31.897 { 00:30:31.897 "name": "spare", 00:30:31.897 "uuid": "52987a1e-fdeb-5994-8158-93cd55d4898e", 00:30:31.897 "is_configured": true, 00:30:31.897 "data_offset": 0, 00:30:31.897 "data_size": 65536 00:30:31.897 }, 00:30:31.897 { 00:30:31.897 "name": null, 00:30:31.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:31.897 "is_configured": false, 00:30:31.897 "data_offset": 0, 00:30:31.897 "data_size": 65536 00:30:31.897 }, 00:30:31.897 { 00:30:31.897 "name": "BaseBdev3", 00:30:31.897 "uuid": "11d27add-4dd9-557e-afe0-de96410447b5", 00:30:31.897 "is_configured": true, 00:30:31.897 "data_offset": 0, 00:30:31.897 "data_size": 65536 00:30:31.897 }, 00:30:31.897 { 00:30:31.897 "name": "BaseBdev4", 00:30:31.897 "uuid": "e6657e4a-65bd-59ff-8b82-4938b9f9a2c9", 00:30:31.897 "is_configured": true, 
00:30:31.897 "data_offset": 0, 00:30:31.897 "data_size": 65536 00:30:31.897 } 00:30:31.897 ] 00:30:31.897 }' 00:30:31.897 00:56:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:31.897 00:56:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:31.897 00:56:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:31.897 00:56:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:31.897 00:56:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:32.837 [2024-07-25 00:56:55.388461] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:32.837 [2024-07-25 00:56:55.388826] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:32.838 [2024-07-25 00:56:55.388996] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:32.838 00:56:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:32.838 00:56:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:32.838 00:56:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:32.838 00:56:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:32.838 00:56:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:32.838 00:56:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:32.838 00:56:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:32.838 00:56:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:33.097 00:56:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:33.097 "name": "raid_bdev1", 00:30:33.097 "uuid": "4aee73ce-0d8c-4c29-bc85-7d43392a4245", 00:30:33.097 "strip_size_kb": 0, 00:30:33.097 "state": "online", 00:30:33.097 "raid_level": "raid1", 00:30:33.097 "superblock": false, 00:30:33.097 "num_base_bdevs": 4, 00:30:33.097 "num_base_bdevs_discovered": 3, 00:30:33.097 "num_base_bdevs_operational": 3, 00:30:33.097 "base_bdevs_list": [ 00:30:33.097 { 00:30:33.097 "name": "spare", 00:30:33.097 "uuid": "52987a1e-fdeb-5994-8158-93cd55d4898e", 00:30:33.097 "is_configured": true, 00:30:33.097 "data_offset": 0, 00:30:33.097 "data_size": 65536 00:30:33.097 }, 00:30:33.097 { 00:30:33.097 "name": null, 00:30:33.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:33.097 "is_configured": false, 00:30:33.097 "data_offset": 0, 00:30:33.097 "data_size": 65536 00:30:33.097 }, 00:30:33.097 { 00:30:33.097 "name": "BaseBdev3", 00:30:33.097 "uuid": "11d27add-4dd9-557e-afe0-de96410447b5", 00:30:33.097 "is_configured": true, 00:30:33.097 "data_offset": 0, 00:30:33.097 "data_size": 65536 00:30:33.097 }, 00:30:33.097 { 00:30:33.097 "name": "BaseBdev4", 00:30:33.097 "uuid": "e6657e4a-65bd-59ff-8b82-4938b9f9a2c9", 00:30:33.097 "is_configured": true, 00:30:33.097 "data_offset": 0, 00:30:33.097 "data_size": 65536 00:30:33.097 } 00:30:33.097 ] 00:30:33.097 }' 00:30:33.097 00:56:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:33.356 00:56:55 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:33.356 00:56:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:33.356 00:56:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:30:33.356 00:56:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:30:33.356 00:56:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:33.356 00:56:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:33.356 00:56:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:33.356 00:56:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:33.356 00:56:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:33.356 00:56:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:33.356 00:56:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:33.616 00:56:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:33.616 "name": "raid_bdev1", 00:30:33.616 "uuid": "4aee73ce-0d8c-4c29-bc85-7d43392a4245", 00:30:33.616 "strip_size_kb": 0, 00:30:33.616 "state": "online", 00:30:33.616 "raid_level": "raid1", 00:30:33.616 "superblock": false, 00:30:33.616 "num_base_bdevs": 4, 00:30:33.616 "num_base_bdevs_discovered": 3, 00:30:33.616 "num_base_bdevs_operational": 3, 00:30:33.616 "base_bdevs_list": [ 00:30:33.616 { 00:30:33.616 "name": "spare", 00:30:33.616 "uuid": "52987a1e-fdeb-5994-8158-93cd55d4898e", 00:30:33.616 "is_configured": true, 00:30:33.616 "data_offset": 0, 00:30:33.616 "data_size": 65536 00:30:33.616 }, 00:30:33.616 { 00:30:33.616 "name": null, 00:30:33.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:33.616 "is_configured": false, 00:30:33.616 "data_offset": 0, 00:30:33.616 "data_size": 65536 00:30:33.616 }, 00:30:33.616 { 00:30:33.616 "name": "BaseBdev3", 00:30:33.616 "uuid": "11d27add-4dd9-557e-afe0-de96410447b5", 00:30:33.616 "is_configured": true, 00:30:33.616 "data_offset": 0, 00:30:33.616 "data_size": 65536 00:30:33.616 }, 00:30:33.616 { 00:30:33.616 "name": "BaseBdev4", 00:30:33.616 "uuid": "e6657e4a-65bd-59ff-8b82-4938b9f9a2c9", 00:30:33.616 "is_configured": true, 00:30:33.616 "data_offset": 0, 00:30:33.616 "data_size": 65536 00:30:33.616 } 00:30:33.616 ] 00:30:33.616 }' 00:30:33.616 00:56:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:33.616 00:56:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:33.616 00:56:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:33.616 00:56:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:33.616 00:56:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:33.616 00:56:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:33.616 00:56:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:33.616 00:56:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:33.616 00:56:56 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:33.616 00:56:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:33.616 00:56:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:33.616 00:56:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:33.616 00:56:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:33.616 00:56:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:33.616 00:56:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:33.616 00:56:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:33.874 00:56:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:33.874 "name": "raid_bdev1", 00:30:33.874 "uuid": "4aee73ce-0d8c-4c29-bc85-7d43392a4245", 00:30:33.874 "strip_size_kb": 0, 00:30:33.874 "state": "online", 00:30:33.874 "raid_level": "raid1", 00:30:33.874 "superblock": false, 00:30:33.874 "num_base_bdevs": 4, 00:30:33.874 "num_base_bdevs_discovered": 3, 00:30:33.874 "num_base_bdevs_operational": 3, 00:30:33.874 "base_bdevs_list": [ 00:30:33.874 { 00:30:33.874 "name": "spare", 00:30:33.874 "uuid": "52987a1e-fdeb-5994-8158-93cd55d4898e", 00:30:33.874 "is_configured": true, 00:30:33.874 "data_offset": 0, 00:30:33.874 "data_size": 65536 00:30:33.874 }, 00:30:33.874 { 00:30:33.874 "name": null, 00:30:33.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:33.874 "is_configured": false, 00:30:33.874 "data_offset": 0, 00:30:33.874 "data_size": 65536 00:30:33.874 }, 00:30:33.874 { 00:30:33.874 "name": "BaseBdev3", 00:30:33.874 "uuid": "11d27add-4dd9-557e-afe0-de96410447b5", 00:30:33.874 "is_configured": true, 00:30:33.874 "data_offset": 0, 00:30:33.874 "data_size": 65536 00:30:33.874 }, 00:30:33.874 { 00:30:33.874 "name": "BaseBdev4", 00:30:33.874 "uuid": "e6657e4a-65bd-59ff-8b82-4938b9f9a2c9", 00:30:33.874 "is_configured": true, 00:30:33.874 "data_offset": 0, 00:30:33.874 "data_size": 65536 00:30:33.874 } 00:30:33.874 ] 00:30:33.874 }' 00:30:33.874 00:56:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:33.874 00:56:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:34.442 00:56:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:34.442 [2024-07-25 00:56:57.046932] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:34.442 [2024-07-25 00:56:57.047210] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:34.442 [2024-07-25 00:56:57.047423] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:34.442 [2024-07-25 00:56:57.047563] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:34.442 [2024-07-25 00:56:57.047724] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:30:34.442 00:56:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:34.442 00:56:57 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@719 -- # jq length 00:30:34.701 00:56:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:30:34.701 00:56:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:30:34.701 00:56:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:30:34.701 00:56:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:30:34.701 00:56:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:34.701 00:56:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:30:34.701 00:56:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:34.701 00:56:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:34.701 00:56:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:34.701 00:56:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:30:34.701 00:56:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:34.701 00:56:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:34.701 00:56:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:30:34.960 /dev/nbd0 00:30:34.960 00:56:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:34.960 00:56:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:34.960 00:56:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:30:34.960 00:56:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:30:34.960 00:56:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:34.960 00:56:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:34.960 00:56:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:30:34.960 00:56:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:30:34.960 00:56:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:34.960 00:56:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:34.960 00:56:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:34.960 1+0 records in 00:30:34.960 1+0 records out 00:30:34.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000602149 s, 6.8 MB/s 00:30:34.960 00:56:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:34.960 00:56:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:30:34.960 00:56:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:34.960 00:56:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:34.960 00:56:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:30:34.960 00:56:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:34.960 00:56:57 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:34.960 00:56:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:30:35.220 /dev/nbd1 00:30:35.220 00:56:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:35.220 00:56:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:35.220 00:56:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:30:35.220 00:56:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:30:35.220 00:56:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:35.220 00:56:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:35.220 00:56:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:30:35.220 00:56:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:30:35.220 00:56:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:35.220 00:56:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:35.220 00:56:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:35.220 1+0 records in 00:30:35.220 1+0 records out 00:30:35.220 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000675779 s, 6.1 MB/s 00:30:35.220 00:56:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:35.220 00:56:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:30:35.220 00:56:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:35.220 00:56:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:35.220 00:56:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:30:35.220 00:56:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:35.220 00:56:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:35.220 00:56:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:30:35.479 00:56:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:30:35.479 00:56:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:35.479 00:56:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:35.479 00:56:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:35.479 00:56:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:30:35.479 00:56:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:35.479 00:56:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:35.738 00:56:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:35.738 00:56:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:35.738 00:56:58 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:35.738 00:56:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:35.738 00:56:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:35.738 00:56:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:35.738 00:56:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:30:35.738 00:56:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:30:35.738 00:56:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:35.738 00:56:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:35.997 00:56:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:35.997 00:56:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:35.997 00:56:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:35.997 00:56:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:35.997 00:56:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:35.997 00:56:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:35.997 00:56:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:30:35.997 00:56:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:30:35.997 00:56:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:30:35.997 00:56:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 147409 00:30:35.997 00:56:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 147409 ']' 00:30:35.997 00:56:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 147409 00:30:35.997 00:56:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:30:35.997 00:56:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:35.997 00:56:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 147409 00:30:35.997 00:56:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:35.997 00:56:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:35.997 00:56:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 147409' 00:30:35.997 killing process with pid 147409 00:30:35.997 00:56:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # kill 147409 00:30:35.997 Received shutdown signal, test time was about 60.000000 seconds 00:30:35.997 00:30:35.997 Latency(us) 00:30:35.997 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:35.997 =================================================================================================================== 00:30:35.997 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:35.997 00:56:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # wait 147409 00:30:35.997 [2024-07-25 00:56:58.611983] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:36.565 [2024-07-25 00:56:59.067889] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 
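The trace above is the data-integrity check at the heart of raid_rebuild_test: once the rebuild has finished, the script deletes the raid bdev, exports a surviving base bdev and the rebuilt spare over NBD, and compares them byte for byte with cmp. The condensed sketch below reuses the rpc.py path and RPC socket shown in the log, but collapses the waitfornbd/waitfornbd_exit polling helpers into plain calls, so it illustrates the idea rather than reproducing the exact test code.

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    $rpc bdev_raid_delete raid_bdev1                         # drop the raid bdev; its base bdevs remain
    [ "$($rpc bdev_raid_get_bdevs all | jq length)" -eq 0 ]  # no raid bdev should be left behind

    $rpc nbd_start_disk BaseBdev1 /dev/nbd0                  # a surviving base bdev of the raid1 set
    $rpc nbd_start_disk spare /dev/nbd1                      # the spare the data was rebuilt onto
    cmp -i 0 /dev/nbd0 /dev/nbd1                             # byte-for-byte match => rebuild copied the data
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1

Because the array is raid1, the spare must end up as an exact mirror of the surviving members, which is why a plain cmp over the NBD devices is sufficient as the pass/fail criterion.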
00:30:37.942 ************************************ 00:30:37.942 END TEST raid_rebuild_test 00:30:37.942 ************************************ 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:30:37.942 00:30:37.942 real 0m23.356s 00:30:37.942 user 0m31.356s 00:30:37.942 sys 0m4.095s 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:37.942 00:57:00 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:30:37.942 00:57:00 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:30:37.942 00:57:00 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:37.942 00:57:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:37.942 ************************************ 00:30:37.942 START TEST raid_rebuild_test_sb 00:30:37.942 ************************************ 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 true false true 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- 
# local strip_size 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=147963 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 147963 /var/tmp/spdk-raid.sock 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 147963 ']' 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:37.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:37.942 00:57:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.942 [2024-07-25 00:57:00.465590] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:30:37.942 [2024-07-25 00:57:00.466030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147963 ] 00:30:37.942 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:37.942 Zero copy mechanism will not be used. 
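The long command line above is how the superblock variant (raid_rebuild_test_sb) brings up its I/O generator: bdevperf is launched against the same RPC socket in wait-for-RPC mode (-z), so no I/O runs until the test has created its malloc, passthru, and raid bdevs over RPC, and the -o 3M I/O size is what triggers the zero-copy-threshold notice printed right after it. A rough sketch of that launch-and-wait pattern follows; the polling loop and the use of rpc_get_methods are stand-ins for the test's waitforlisten helper, not its real implementation, while the binary path, socket, and flags are copied from the log.

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Start bdevperf in the background; -z keeps it idle until the test drives it over RPC.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!

    # Stand-in for waitforlisten: block until the app answers on its RPC socket.
    until "$rpc_py" -s "$sock" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

The -t 60 runtime is also what produced the "test time was about 60.000000 seconds" shutdown message at the end of the previous test, since both variants start bdevperf with the same arguments.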
00:30:38.201 [2024-07-25 00:57:00.644249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.201 [2024-07-25 00:57:00.820620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:38.460 [2024-07-25 00:57:01.009999] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:38.717 00:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:38.717 00:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:30:38.717 00:57:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:38.717 00:57:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:38.975 BaseBdev1_malloc 00:30:38.975 00:57:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:39.232 [2024-07-25 00:57:01.842002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:39.232 [2024-07-25 00:57:01.842419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:39.232 [2024-07-25 00:57:01.842502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:30:39.232 [2024-07-25 00:57:01.842743] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:39.232 [2024-07-25 00:57:01.845187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:39.232 [2024-07-25 00:57:01.845370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:39.232 BaseBdev1 00:30:39.232 00:57:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:39.232 00:57:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:39.490 BaseBdev2_malloc 00:30:39.490 00:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:39.748 [2024-07-25 00:57:02.266944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:30:39.748 [2024-07-25 00:57:02.267347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:39.748 [2024-07-25 00:57:02.267428] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:30:39.748 [2024-07-25 00:57:02.267669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:39.748 [2024-07-25 00:57:02.269975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:39.748 [2024-07-25 00:57:02.270158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:39.748 BaseBdev2 00:30:39.748 00:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:39.748 00:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:30:40.007 BaseBdev3_malloc 00:30:40.007 00:57:02 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:30:40.265 [2024-07-25 00:57:02.768969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:30:40.265 [2024-07-25 00:57:02.769344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:40.265 [2024-07-25 00:57:02.769423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:30:40.265 [2024-07-25 00:57:02.769541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:40.265 [2024-07-25 00:57:02.771873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:40.265 [2024-07-25 00:57:02.772067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:30:40.265 BaseBdev3 00:30:40.265 00:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:40.265 00:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:30:40.524 BaseBdev4_malloc 00:30:40.524 00:57:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:30:40.782 [2024-07-25 00:57:03.243777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:30:40.782 [2024-07-25 00:57:03.244169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:40.782 [2024-07-25 00:57:03.244250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:30:40.782 [2024-07-25 00:57:03.244492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:40.782 [2024-07-25 00:57:03.246855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:40.782 [2024-07-25 00:57:03.247042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:30:40.782 BaseBdev4 00:30:40.782 00:57:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:30:41.040 spare_malloc 00:30:41.040 00:57:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:41.040 spare_delay 00:30:41.041 00:57:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:41.299 [2024-07-25 00:57:03.833854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:41.299 [2024-07-25 00:57:03.834175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:41.299 [2024-07-25 00:57:03.834264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:30:41.299 [2024-07-25 00:57:03.834527] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:41.299 [2024-07-25 00:57:03.836915] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:41.299 [2024-07-25 
00:57:03.837103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:41.299 spare 00:30:41.299 00:57:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:30:41.557 [2024-07-25 00:57:04.081991] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:41.557 [2024-07-25 00:57:04.084190] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:41.557 [2024-07-25 00:57:04.084425] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:41.557 [2024-07-25 00:57:04.084518] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:41.557 [2024-07-25 00:57:04.084877] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:30:41.557 [2024-07-25 00:57:04.084923] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:41.557 [2024-07-25 00:57:04.085152] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:30:41.557 [2024-07-25 00:57:04.085728] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:30:41.557 [2024-07-25 00:57:04.085870] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:30:41.557 [2024-07-25 00:57:04.086196] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:41.557 00:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:30:41.557 00:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:41.557 00:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:41.557 00:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:41.557 00:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:41.557 00:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:41.557 00:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:41.557 00:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:41.557 00:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:41.557 00:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:41.557 00:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:41.557 00:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:41.817 00:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:41.817 "name": "raid_bdev1", 00:30:41.817 "uuid": "e2535585-8da0-485e-992f-e8d8fdf03450", 00:30:41.817 "strip_size_kb": 0, 00:30:41.817 "state": "online", 00:30:41.817 "raid_level": "raid1", 00:30:41.817 "superblock": true, 00:30:41.817 "num_base_bdevs": 4, 00:30:41.817 "num_base_bdevs_discovered": 4, 00:30:41.817 "num_base_bdevs_operational": 4, 00:30:41.817 "base_bdevs_list": [ 00:30:41.817 { 
00:30:41.817 "name": "BaseBdev1", 00:30:41.817 "uuid": "732489a3-8c80-5126-bb1b-1343fb888924", 00:30:41.817 "is_configured": true, 00:30:41.817 "data_offset": 2048, 00:30:41.817 "data_size": 63488 00:30:41.817 }, 00:30:41.817 { 00:30:41.817 "name": "BaseBdev2", 00:30:41.817 "uuid": "8c86f283-c57a-505b-90fa-f7d80b709a9f", 00:30:41.817 "is_configured": true, 00:30:41.817 "data_offset": 2048, 00:30:41.817 "data_size": 63488 00:30:41.817 }, 00:30:41.817 { 00:30:41.817 "name": "BaseBdev3", 00:30:41.817 "uuid": "fd560381-eb75-5212-bda5-5d92763f5dbd", 00:30:41.817 "is_configured": true, 00:30:41.817 "data_offset": 2048, 00:30:41.817 "data_size": 63488 00:30:41.817 }, 00:30:41.817 { 00:30:41.817 "name": "BaseBdev4", 00:30:41.817 "uuid": "b4a7d347-1b99-507f-8926-8b71fab1b0eb", 00:30:41.817 "is_configured": true, 00:30:41.817 "data_offset": 2048, 00:30:41.817 "data_size": 63488 00:30:41.817 } 00:30:41.817 ] 00:30:41.817 }' 00:30:41.817 00:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:41.817 00:57:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:42.420 00:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:42.420 00:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:30:42.420 [2024-07-25 00:57:05.034574] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:42.420 00:57:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:30:42.420 00:57:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:42.420 00:57:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:42.680 00:57:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:30:42.680 00:57:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:30:42.680 00:57:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:30:42.680 00:57:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:30:42.680 00:57:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:30:42.680 00:57:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:42.680 00:57:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:30:42.680 00:57:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:42.680 00:57:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:30:42.680 00:57:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:42.680 00:57:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:30:42.680 00:57:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:42.680 00:57:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:42.680 00:57:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:30:42.939 [2024-07-25 00:57:05.390546] 
bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:30:42.939 /dev/nbd0 00:30:42.939 00:57:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:42.939 00:57:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:42.939 00:57:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:30:42.939 00:57:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:30:42.939 00:57:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:42.939 00:57:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:42.939 00:57:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:30:42.939 00:57:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:30:42.939 00:57:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:42.939 00:57:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:42.939 00:57:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:42.939 1+0 records in 00:30:42.939 1+0 records out 00:30:42.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341116 s, 12.0 MB/s 00:30:42.939 00:57:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:42.939 00:57:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:30:42.939 00:57:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:42.939 00:57:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:42.939 00:57:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:30:42.939 00:57:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:42.939 00:57:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:42.939 00:57:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:30:42.939 00:57:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:30:42.939 00:57:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:30:49.508 63488+0 records in 00:30:49.508 63488+0 records out 00:30:49.508 32505856 bytes (33 MB, 31 MiB) copied, 5.51599 s, 5.9 MB/s 00:30:49.508 00:57:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:30:49.508 00:57:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:49.508 00:57:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:49.508 00:57:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:49.508 00:57:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:30:49.508 00:57:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:49.508 00:57:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:49.508 [2024-07-25 00:57:11.155648] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:49.508 00:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:49.508 00:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:49.508 00:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:49.508 00:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:49.508 00:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:49.508 00:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:49.508 00:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:49.508 00:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:49.508 00:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:30:49.508 [2024-07-25 00:57:11.343371] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:49.508 00:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:49.508 00:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:49.508 00:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:49.508 00:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:49.508 00:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:49.508 00:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:49.508 00:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:49.508 00:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:49.508 00:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:49.508 00:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:49.508 00:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:49.508 00:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:49.508 00:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:49.508 "name": "raid_bdev1", 00:30:49.508 "uuid": "e2535585-8da0-485e-992f-e8d8fdf03450", 00:30:49.508 "strip_size_kb": 0, 00:30:49.508 "state": "online", 00:30:49.508 "raid_level": "raid1", 00:30:49.508 "superblock": true, 00:30:49.508 "num_base_bdevs": 4, 00:30:49.508 "num_base_bdevs_discovered": 3, 00:30:49.508 "num_base_bdevs_operational": 3, 00:30:49.508 "base_bdevs_list": [ 00:30:49.508 { 00:30:49.508 "name": null, 00:30:49.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:49.508 "is_configured": false, 00:30:49.508 "data_offset": 2048, 00:30:49.508 "data_size": 63488 00:30:49.508 }, 00:30:49.508 { 00:30:49.508 "name": "BaseBdev2", 00:30:49.508 "uuid": "8c86f283-c57a-505b-90fa-f7d80b709a9f", 00:30:49.508 "is_configured": true, 00:30:49.508 "data_offset": 2048, 
00:30:49.508 "data_size": 63488 00:30:49.508 }, 00:30:49.508 { 00:30:49.508 "name": "BaseBdev3", 00:30:49.508 "uuid": "fd560381-eb75-5212-bda5-5d92763f5dbd", 00:30:49.508 "is_configured": true, 00:30:49.508 "data_offset": 2048, 00:30:49.508 "data_size": 63488 00:30:49.508 }, 00:30:49.508 { 00:30:49.508 "name": "BaseBdev4", 00:30:49.508 "uuid": "b4a7d347-1b99-507f-8926-8b71fab1b0eb", 00:30:49.508 "is_configured": true, 00:30:49.508 "data_offset": 2048, 00:30:49.509 "data_size": 63488 00:30:49.509 } 00:30:49.509 ] 00:30:49.509 }' 00:30:49.509 00:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:49.509 00:57:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:49.791 00:57:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:50.053 [2024-07-25 00:57:12.479660] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:50.053 [2024-07-25 00:57:12.493320] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:30:50.053 [2024-07-25 00:57:12.495472] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:50.053 00:57:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:30:50.989 00:57:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:50.989 00:57:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:50.989 00:57:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:50.989 00:57:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:50.989 00:57:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:50.989 00:57:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:50.989 00:57:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:51.248 00:57:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:51.248 "name": "raid_bdev1", 00:30:51.248 "uuid": "e2535585-8da0-485e-992f-e8d8fdf03450", 00:30:51.248 "strip_size_kb": 0, 00:30:51.248 "state": "online", 00:30:51.248 "raid_level": "raid1", 00:30:51.248 "superblock": true, 00:30:51.248 "num_base_bdevs": 4, 00:30:51.248 "num_base_bdevs_discovered": 4, 00:30:51.248 "num_base_bdevs_operational": 4, 00:30:51.248 "process": { 00:30:51.248 "type": "rebuild", 00:30:51.248 "target": "spare", 00:30:51.248 "progress": { 00:30:51.248 "blocks": 22528, 00:30:51.248 "percent": 35 00:30:51.248 } 00:30:51.248 }, 00:30:51.248 "base_bdevs_list": [ 00:30:51.248 { 00:30:51.248 "name": "spare", 00:30:51.248 "uuid": "3fae248e-c0fc-5c23-b033-19eff0359b62", 00:30:51.248 "is_configured": true, 00:30:51.248 "data_offset": 2048, 00:30:51.248 "data_size": 63488 00:30:51.248 }, 00:30:51.248 { 00:30:51.248 "name": "BaseBdev2", 00:30:51.248 "uuid": "8c86f283-c57a-505b-90fa-f7d80b709a9f", 00:30:51.248 "is_configured": true, 00:30:51.248 "data_offset": 2048, 00:30:51.248 "data_size": 63488 00:30:51.248 }, 00:30:51.248 { 00:30:51.248 "name": "BaseBdev3", 00:30:51.248 "uuid": "fd560381-eb75-5212-bda5-5d92763f5dbd", 00:30:51.248 
"is_configured": true, 00:30:51.248 "data_offset": 2048, 00:30:51.248 "data_size": 63488 00:30:51.248 }, 00:30:51.248 { 00:30:51.248 "name": "BaseBdev4", 00:30:51.248 "uuid": "b4a7d347-1b99-507f-8926-8b71fab1b0eb", 00:30:51.248 "is_configured": true, 00:30:51.248 "data_offset": 2048, 00:30:51.248 "data_size": 63488 00:30:51.248 } 00:30:51.248 ] 00:30:51.248 }' 00:30:51.248 00:57:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:51.248 00:57:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:51.248 00:57:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:51.248 00:57:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:51.248 00:57:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:51.507 [2024-07-25 00:57:14.001148] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:51.507 [2024-07-25 00:57:14.004242] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:51.507 [2024-07-25 00:57:14.004460] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:51.507 [2024-07-25 00:57:14.004514] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:51.507 [2024-07-25 00:57:14.004604] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:51.507 00:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:51.507 00:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:51.507 00:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:51.507 00:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:51.507 00:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:51.507 00:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:51.507 00:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:51.507 00:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:51.507 00:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:51.507 00:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:51.507 00:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:51.508 00:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:51.766 00:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:51.766 "name": "raid_bdev1", 00:30:51.766 "uuid": "e2535585-8da0-485e-992f-e8d8fdf03450", 00:30:51.766 "strip_size_kb": 0, 00:30:51.766 "state": "online", 00:30:51.766 "raid_level": "raid1", 00:30:51.766 "superblock": true, 00:30:51.766 "num_base_bdevs": 4, 00:30:51.766 "num_base_bdevs_discovered": 3, 00:30:51.766 "num_base_bdevs_operational": 3, 00:30:51.766 "base_bdevs_list": [ 00:30:51.766 { 
00:30:51.766 "name": null, 00:30:51.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:51.766 "is_configured": false, 00:30:51.766 "data_offset": 2048, 00:30:51.766 "data_size": 63488 00:30:51.766 }, 00:30:51.766 { 00:30:51.766 "name": "BaseBdev2", 00:30:51.766 "uuid": "8c86f283-c57a-505b-90fa-f7d80b709a9f", 00:30:51.766 "is_configured": true, 00:30:51.766 "data_offset": 2048, 00:30:51.766 "data_size": 63488 00:30:51.766 }, 00:30:51.766 { 00:30:51.766 "name": "BaseBdev3", 00:30:51.766 "uuid": "fd560381-eb75-5212-bda5-5d92763f5dbd", 00:30:51.766 "is_configured": true, 00:30:51.766 "data_offset": 2048, 00:30:51.766 "data_size": 63488 00:30:51.766 }, 00:30:51.766 { 00:30:51.766 "name": "BaseBdev4", 00:30:51.766 "uuid": "b4a7d347-1b99-507f-8926-8b71fab1b0eb", 00:30:51.766 "is_configured": true, 00:30:51.766 "data_offset": 2048, 00:30:51.766 "data_size": 63488 00:30:51.767 } 00:30:51.767 ] 00:30:51.767 }' 00:30:51.767 00:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:51.767 00:57:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:52.334 00:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:52.334 00:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:52.334 00:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:52.334 00:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:52.334 00:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:52.334 00:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:52.334 00:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:52.594 00:57:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:52.594 "name": "raid_bdev1", 00:30:52.594 "uuid": "e2535585-8da0-485e-992f-e8d8fdf03450", 00:30:52.594 "strip_size_kb": 0, 00:30:52.594 "state": "online", 00:30:52.594 "raid_level": "raid1", 00:30:52.594 "superblock": true, 00:30:52.594 "num_base_bdevs": 4, 00:30:52.594 "num_base_bdevs_discovered": 3, 00:30:52.594 "num_base_bdevs_operational": 3, 00:30:52.594 "base_bdevs_list": [ 00:30:52.594 { 00:30:52.594 "name": null, 00:30:52.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:52.594 "is_configured": false, 00:30:52.594 "data_offset": 2048, 00:30:52.594 "data_size": 63488 00:30:52.594 }, 00:30:52.594 { 00:30:52.594 "name": "BaseBdev2", 00:30:52.594 "uuid": "8c86f283-c57a-505b-90fa-f7d80b709a9f", 00:30:52.594 "is_configured": true, 00:30:52.594 "data_offset": 2048, 00:30:52.594 "data_size": 63488 00:30:52.594 }, 00:30:52.594 { 00:30:52.594 "name": "BaseBdev3", 00:30:52.594 "uuid": "fd560381-eb75-5212-bda5-5d92763f5dbd", 00:30:52.594 "is_configured": true, 00:30:52.594 "data_offset": 2048, 00:30:52.594 "data_size": 63488 00:30:52.594 }, 00:30:52.594 { 00:30:52.594 "name": "BaseBdev4", 00:30:52.594 "uuid": "b4a7d347-1b99-507f-8926-8b71fab1b0eb", 00:30:52.594 "is_configured": true, 00:30:52.594 "data_offset": 2048, 00:30:52.594 "data_size": 63488 00:30:52.594 } 00:30:52.594 ] 00:30:52.594 }' 00:30:52.594 00:57:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:52.594 00:57:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:52.594 00:57:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:52.594 00:57:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:52.594 00:57:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:52.853 [2024-07-25 00:57:15.284845] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:52.853 [2024-07-25 00:57:15.299123] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:30:52.853 [2024-07-25 00:57:15.301190] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:52.853 00:57:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:30:53.790 00:57:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:53.790 00:57:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:53.790 00:57:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:53.790 00:57:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:53.790 00:57:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:53.790 00:57:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:53.790 00:57:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:54.049 00:57:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:54.049 "name": "raid_bdev1", 00:30:54.049 "uuid": "e2535585-8da0-485e-992f-e8d8fdf03450", 00:30:54.049 "strip_size_kb": 0, 00:30:54.049 "state": "online", 00:30:54.049 "raid_level": "raid1", 00:30:54.049 "superblock": true, 00:30:54.049 "num_base_bdevs": 4, 00:30:54.049 "num_base_bdevs_discovered": 4, 00:30:54.049 "num_base_bdevs_operational": 4, 00:30:54.049 "process": { 00:30:54.049 "type": "rebuild", 00:30:54.049 "target": "spare", 00:30:54.049 "progress": { 00:30:54.049 "blocks": 24576, 00:30:54.049 "percent": 38 00:30:54.049 } 00:30:54.049 }, 00:30:54.049 "base_bdevs_list": [ 00:30:54.049 { 00:30:54.049 "name": "spare", 00:30:54.049 "uuid": "3fae248e-c0fc-5c23-b033-19eff0359b62", 00:30:54.049 "is_configured": true, 00:30:54.049 "data_offset": 2048, 00:30:54.049 "data_size": 63488 00:30:54.049 }, 00:30:54.049 { 00:30:54.049 "name": "BaseBdev2", 00:30:54.049 "uuid": "8c86f283-c57a-505b-90fa-f7d80b709a9f", 00:30:54.049 "is_configured": true, 00:30:54.049 "data_offset": 2048, 00:30:54.049 "data_size": 63488 00:30:54.049 }, 00:30:54.049 { 00:30:54.049 "name": "BaseBdev3", 00:30:54.049 "uuid": "fd560381-eb75-5212-bda5-5d92763f5dbd", 00:30:54.049 "is_configured": true, 00:30:54.049 "data_offset": 2048, 00:30:54.049 "data_size": 63488 00:30:54.049 }, 00:30:54.049 { 00:30:54.049 "name": "BaseBdev4", 00:30:54.049 "uuid": "b4a7d347-1b99-507f-8926-8b71fab1b0eb", 00:30:54.049 "is_configured": true, 00:30:54.049 "data_offset": 2048, 00:30:54.049 "data_size": 63488 00:30:54.049 } 00:30:54.049 ] 00:30:54.049 }' 00:30:54.049 00:57:16 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:54.049 00:57:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:54.049 00:57:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:54.049 00:57:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:54.049 00:57:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:30:54.049 00:57:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:30:54.049 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:30:54.049 00:57:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:30:54.049 00:57:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:30:54.049 00:57:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:30:54.049 00:57:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:30:54.307 [2024-07-25 00:57:16.935420] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:54.565 [2024-07-25 00:57:17.111317] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:30:54.565 00:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:30:54.565 00:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:30:54.565 00:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:54.565 00:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:54.565 00:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:54.565 00:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:54.565 00:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:54.565 00:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:54.565 00:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:54.824 00:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:54.824 "name": "raid_bdev1", 00:30:54.824 "uuid": "e2535585-8da0-485e-992f-e8d8fdf03450", 00:30:54.824 "strip_size_kb": 0, 00:30:54.824 "state": "online", 00:30:54.824 "raid_level": "raid1", 00:30:54.824 "superblock": true, 00:30:54.824 "num_base_bdevs": 4, 00:30:54.824 "num_base_bdevs_discovered": 3, 00:30:54.824 "num_base_bdevs_operational": 3, 00:30:54.824 "process": { 00:30:54.824 "type": "rebuild", 00:30:54.824 "target": "spare", 00:30:54.824 "progress": { 00:30:54.824 "blocks": 38912, 00:30:54.824 "percent": 61 00:30:54.824 } 00:30:54.824 }, 00:30:54.824 "base_bdevs_list": [ 00:30:54.824 { 00:30:54.824 "name": "spare", 00:30:54.824 "uuid": "3fae248e-c0fc-5c23-b033-19eff0359b62", 00:30:54.824 "is_configured": true, 00:30:54.824 "data_offset": 2048, 00:30:54.824 "data_size": 63488 00:30:54.824 }, 00:30:54.824 { 00:30:54.824 "name": null, 00:30:54.824 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:30:54.824 "is_configured": false, 00:30:54.824 "data_offset": 2048, 00:30:54.824 "data_size": 63488 00:30:54.824 }, 00:30:54.824 { 00:30:54.824 "name": "BaseBdev3", 00:30:54.824 "uuid": "fd560381-eb75-5212-bda5-5d92763f5dbd", 00:30:54.824 "is_configured": true, 00:30:54.824 "data_offset": 2048, 00:30:54.824 "data_size": 63488 00:30:54.824 }, 00:30:54.824 { 00:30:54.824 "name": "BaseBdev4", 00:30:54.824 "uuid": "b4a7d347-1b99-507f-8926-8b71fab1b0eb", 00:30:54.824 "is_configured": true, 00:30:54.824 "data_offset": 2048, 00:30:54.824 "data_size": 63488 00:30:54.824 } 00:30:54.824 ] 00:30:54.824 }' 00:30:54.824 00:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:54.824 00:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:54.824 00:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:54.824 00:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:54.824 00:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=914 00:30:54.824 00:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:54.824 00:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:54.824 00:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:54.824 00:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:54.824 00:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:54.824 00:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:54.824 00:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:54.824 00:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:55.083 00:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:55.083 "name": "raid_bdev1", 00:30:55.083 "uuid": "e2535585-8da0-485e-992f-e8d8fdf03450", 00:30:55.083 "strip_size_kb": 0, 00:30:55.083 "state": "online", 00:30:55.083 "raid_level": "raid1", 00:30:55.083 "superblock": true, 00:30:55.083 "num_base_bdevs": 4, 00:30:55.083 "num_base_bdevs_discovered": 3, 00:30:55.083 "num_base_bdevs_operational": 3, 00:30:55.083 "process": { 00:30:55.083 "type": "rebuild", 00:30:55.083 "target": "spare", 00:30:55.083 "progress": { 00:30:55.083 "blocks": 45056, 00:30:55.083 "percent": 70 00:30:55.083 } 00:30:55.083 }, 00:30:55.083 "base_bdevs_list": [ 00:30:55.083 { 00:30:55.083 "name": "spare", 00:30:55.083 "uuid": "3fae248e-c0fc-5c23-b033-19eff0359b62", 00:30:55.083 "is_configured": true, 00:30:55.083 "data_offset": 2048, 00:30:55.083 "data_size": 63488 00:30:55.083 }, 00:30:55.083 { 00:30:55.083 "name": null, 00:30:55.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:55.083 "is_configured": false, 00:30:55.083 "data_offset": 2048, 00:30:55.083 "data_size": 63488 00:30:55.083 }, 00:30:55.083 { 00:30:55.083 "name": "BaseBdev3", 00:30:55.083 "uuid": "fd560381-eb75-5212-bda5-5d92763f5dbd", 00:30:55.083 "is_configured": true, 00:30:55.083 "data_offset": 2048, 00:30:55.083 "data_size": 63488 00:30:55.083 }, 
00:30:55.083 { 00:30:55.083 "name": "BaseBdev4", 00:30:55.083 "uuid": "b4a7d347-1b99-507f-8926-8b71fab1b0eb", 00:30:55.083 "is_configured": true, 00:30:55.083 "data_offset": 2048, 00:30:55.083 "data_size": 63488 00:30:55.083 } 00:30:55.083 ] 00:30:55.083 }' 00:30:55.083 00:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:55.083 00:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:55.083 00:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:55.341 00:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:55.341 00:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:55.953 [2024-07-25 00:57:18.519823] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:55.953 [2024-07-25 00:57:18.520039] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:55.953 [2024-07-25 00:57:18.520280] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:56.212 00:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:56.212 00:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:56.212 00:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:56.212 00:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:56.212 00:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:56.212 00:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:56.212 00:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:56.212 00:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:56.471 00:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:56.471 "name": "raid_bdev1", 00:30:56.471 "uuid": "e2535585-8da0-485e-992f-e8d8fdf03450", 00:30:56.471 "strip_size_kb": 0, 00:30:56.471 "state": "online", 00:30:56.471 "raid_level": "raid1", 00:30:56.471 "superblock": true, 00:30:56.471 "num_base_bdevs": 4, 00:30:56.471 "num_base_bdevs_discovered": 3, 00:30:56.471 "num_base_bdevs_operational": 3, 00:30:56.471 "base_bdevs_list": [ 00:30:56.471 { 00:30:56.471 "name": "spare", 00:30:56.471 "uuid": "3fae248e-c0fc-5c23-b033-19eff0359b62", 00:30:56.471 "is_configured": true, 00:30:56.471 "data_offset": 2048, 00:30:56.471 "data_size": 63488 00:30:56.471 }, 00:30:56.471 { 00:30:56.471 "name": null, 00:30:56.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:56.471 "is_configured": false, 00:30:56.471 "data_offset": 2048, 00:30:56.471 "data_size": 63488 00:30:56.471 }, 00:30:56.471 { 00:30:56.471 "name": "BaseBdev3", 00:30:56.471 "uuid": "fd560381-eb75-5212-bda5-5d92763f5dbd", 00:30:56.471 "is_configured": true, 00:30:56.471 "data_offset": 2048, 00:30:56.471 "data_size": 63488 00:30:56.471 }, 00:30:56.471 { 00:30:56.471 "name": "BaseBdev4", 00:30:56.471 "uuid": "b4a7d347-1b99-507f-8926-8b71fab1b0eb", 00:30:56.471 "is_configured": true, 00:30:56.471 "data_offset": 2048, 00:30:56.471 "data_size": 63488 00:30:56.471 } 
00:30:56.471 ] 00:30:56.471 }' 00:30:56.471 00:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:56.471 00:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:56.471 00:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:56.471 00:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:30:56.471 00:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:30:56.471 00:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:56.471 00:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:56.471 00:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:56.471 00:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:56.471 00:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:56.471 00:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:56.471 00:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:56.731 00:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:56.731 "name": "raid_bdev1", 00:30:56.731 "uuid": "e2535585-8da0-485e-992f-e8d8fdf03450", 00:30:56.731 "strip_size_kb": 0, 00:30:56.731 "state": "online", 00:30:56.731 "raid_level": "raid1", 00:30:56.731 "superblock": true, 00:30:56.731 "num_base_bdevs": 4, 00:30:56.731 "num_base_bdevs_discovered": 3, 00:30:56.731 "num_base_bdevs_operational": 3, 00:30:56.731 "base_bdevs_list": [ 00:30:56.731 { 00:30:56.731 "name": "spare", 00:30:56.731 "uuid": "3fae248e-c0fc-5c23-b033-19eff0359b62", 00:30:56.731 "is_configured": true, 00:30:56.731 "data_offset": 2048, 00:30:56.731 "data_size": 63488 00:30:56.731 }, 00:30:56.731 { 00:30:56.731 "name": null, 00:30:56.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:56.731 "is_configured": false, 00:30:56.731 "data_offset": 2048, 00:30:56.731 "data_size": 63488 00:30:56.731 }, 00:30:56.731 { 00:30:56.731 "name": "BaseBdev3", 00:30:56.731 "uuid": "fd560381-eb75-5212-bda5-5d92763f5dbd", 00:30:56.731 "is_configured": true, 00:30:56.731 "data_offset": 2048, 00:30:56.731 "data_size": 63488 00:30:56.731 }, 00:30:56.731 { 00:30:56.731 "name": "BaseBdev4", 00:30:56.731 "uuid": "b4a7d347-1b99-507f-8926-8b71fab1b0eb", 00:30:56.731 "is_configured": true, 00:30:56.731 "data_offset": 2048, 00:30:56.731 "data_size": 63488 00:30:56.731 } 00:30:56.731 ] 00:30:56.731 }' 00:30:56.731 00:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:56.731 00:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:56.731 00:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:56.990 00:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:56.990 00:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:56.990 00:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
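One genuine script problem is recorded a few steps above: bdev_raid.sh line 665 reaches the single-bracket builtin as '[' = false ']' and bash reports "[: =: unary operator expected", meaning the variable on the left of the comparison expanded to an empty string. A minimal sketch of the usual defensive pattern, with a hypothetical variable name since the real one is not visible in this trace:

    # hypothetical name; quoting the expansion (or using [[ ]]) keeps the test
    # binary even when the variable is empty, avoiding "unary operator expected"
    if [[ "${process_found:-}" = false ]]; then
        :   # handle the "no rebuild process reported" case
    fi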
00:30:56.990 00:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:56.990 00:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:56.990 00:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:56.990 00:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:56.990 00:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:56.991 00:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:56.991 00:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:56.991 00:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:56.991 00:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:56.991 00:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:57.250 00:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:57.250 "name": "raid_bdev1", 00:30:57.250 "uuid": "e2535585-8da0-485e-992f-e8d8fdf03450", 00:30:57.250 "strip_size_kb": 0, 00:30:57.250 "state": "online", 00:30:57.250 "raid_level": "raid1", 00:30:57.250 "superblock": true, 00:30:57.250 "num_base_bdevs": 4, 00:30:57.250 "num_base_bdevs_discovered": 3, 00:30:57.250 "num_base_bdevs_operational": 3, 00:30:57.250 "base_bdevs_list": [ 00:30:57.250 { 00:30:57.250 "name": "spare", 00:30:57.250 "uuid": "3fae248e-c0fc-5c23-b033-19eff0359b62", 00:30:57.250 "is_configured": true, 00:30:57.250 "data_offset": 2048, 00:30:57.250 "data_size": 63488 00:30:57.250 }, 00:30:57.250 { 00:30:57.250 "name": null, 00:30:57.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:57.250 "is_configured": false, 00:30:57.250 "data_offset": 2048, 00:30:57.250 "data_size": 63488 00:30:57.250 }, 00:30:57.250 { 00:30:57.250 "name": "BaseBdev3", 00:30:57.250 "uuid": "fd560381-eb75-5212-bda5-5d92763f5dbd", 00:30:57.250 "is_configured": true, 00:30:57.250 "data_offset": 2048, 00:30:57.250 "data_size": 63488 00:30:57.250 }, 00:30:57.250 { 00:30:57.250 "name": "BaseBdev4", 00:30:57.250 "uuid": "b4a7d347-1b99-507f-8926-8b71fab1b0eb", 00:30:57.250 "is_configured": true, 00:30:57.250 "data_offset": 2048, 00:30:57.250 "data_size": 63488 00:30:57.250 } 00:30:57.250 ] 00:30:57.250 }' 00:30:57.250 00:57:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:57.250 00:57:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:57.818 00:57:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:57.818 [2024-07-25 00:57:20.443983] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:57.818 [2024-07-25 00:57:20.444177] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:57.818 [2024-07-25 00:57:20.444406] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:57.818 [2024-07-25 00:57:20.444602] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:57.818 [2024-07-25 00:57:20.444701] bdev_raid.c: 
378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:30:57.818 00:57:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:57.818 00:57:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:30:58.386 00:57:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:30:58.386 00:57:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:30:58.386 00:57:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:30:58.386 00:57:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:30:58.386 00:57:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:58.386 00:57:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:30:58.386 00:57:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:58.386 00:57:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:58.386 00:57:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:58.386 00:57:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:30:58.386 00:57:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:58.386 00:57:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:58.386 00:57:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:30:58.386 /dev/nbd0 00:30:58.386 00:57:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:58.386 00:57:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:58.386 00:57:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:30:58.386 00:57:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:30:58.386 00:57:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:58.386 00:57:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:58.386 00:57:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:30:58.386 00:57:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:30:58.386 00:57:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:58.386 00:57:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:58.386 00:57:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:58.386 1+0 records in 00:30:58.386 1+0 records out 00:30:58.386 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361089 s, 11.3 MB/s 00:30:58.386 00:57:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:58.386 00:57:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:30:58.386 00:57:20 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:58.386 00:57:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:58.386 00:57:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:30:58.386 00:57:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:58.386 00:57:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:58.387 00:57:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:30:58.645 /dev/nbd1 00:30:58.645 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:58.645 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:58.645 00:57:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:30:58.645 00:57:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:30:58.645 00:57:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:58.645 00:57:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:58.645 00:57:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:30:58.645 00:57:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:30:58.645 00:57:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:58.645 00:57:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:58.646 00:57:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:58.646 1+0 records in 00:30:58.646 1+0 records out 00:30:58.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248551 s, 16.5 MB/s 00:30:58.646 00:57:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:58.646 00:57:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:30:58.646 00:57:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:58.646 00:57:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:58.646 00:57:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:30:58.646 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:58.646 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:58.646 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:30:58.904 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:30:58.904 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:58.904 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:58.905 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:58.905 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 
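The comparison traced just above is the actual data-integrity check of the rebuild: BaseBdev1 and the rebuilt spare are both exported as NBD block devices and compared with cmp -i 1048576, which skips the first 1 MiB of each device. That skip matches the data_offset of 2048 blocks of 512 bytes reported in the raid JSON, so the per-member superblocks are excluded and only the rebuilt data area has to match. A condensed sketch of the same check, assuming an SPDK target is already serving /var/tmp/spdk-raid.sock and that both bdevs exist:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    "$rpc" -s "$sock" nbd_start_disk BaseBdev1 /dev/nbd0   # original member's data
    "$rpc" -s "$sock" nbd_start_disk spare     /dev/nbd1   # data rebuilt onto the spare
    # skip the 1 MiB superblock region (2048 blocks * 512 B) on both devices;
    # cmp exits non-zero on the first differing byte
    cmp -i 1048576 /dev/nbd0 /dev/nbd1
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1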
00:30:58.905 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:58.905 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:59.163 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:59.163 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:59.163 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:59.163 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:59.163 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:59.163 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:59.163 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:59.163 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:59.163 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:59.163 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:59.422 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:59.422 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:59.422 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:59.422 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:59.422 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:59.422 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:59.422 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:59.422 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:59.422 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:30:59.422 00:57:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:59.681 00:57:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:59.940 [2024-07-25 00:57:22.421381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:59.940 [2024-07-25 00:57:22.421481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:59.940 [2024-07-25 00:57:22.421533] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:30:59.940 [2024-07-25 00:57:22.421565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:59.940 [2024-07-25 00:57:22.423959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:59.940 [2024-07-25 00:57:22.424022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:59.941 [2024-07-25 00:57:22.424142] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:59.941 [2024-07-25 00:57:22.424193] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:59.941 [2024-07-25 00:57:22.424349] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:59.941 [2024-07-25 00:57:22.424432] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:59.941 spare 00:30:59.941 00:57:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:59.941 00:57:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:59.941 00:57:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:59.941 00:57:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:59.941 00:57:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:59.941 00:57:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:59.941 00:57:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:59.941 00:57:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:59.941 00:57:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:59.941 00:57:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:59.941 00:57:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:59.941 00:57:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:59.941 [2024-07-25 00:57:22.524506] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:30:59.941 [2024-07-25 00:57:22.524535] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:59.941 [2024-07-25 00:57:22.524706] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ef0 00:30:59.941 [2024-07-25 00:57:22.525093] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:30:59.941 [2024-07-25 00:57:22.525116] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:30:59.941 [2024-07-25 00:57:22.525270] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:00.200 00:57:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:00.200 "name": "raid_bdev1", 00:31:00.200 "uuid": "e2535585-8da0-485e-992f-e8d8fdf03450", 00:31:00.200 "strip_size_kb": 0, 00:31:00.200 "state": "online", 00:31:00.200 "raid_level": "raid1", 00:31:00.200 "superblock": true, 00:31:00.200 "num_base_bdevs": 4, 00:31:00.200 "num_base_bdevs_discovered": 3, 00:31:00.200 "num_base_bdevs_operational": 3, 00:31:00.200 "base_bdevs_list": [ 00:31:00.200 { 00:31:00.200 "name": "spare", 00:31:00.200 "uuid": "3fae248e-c0fc-5c23-b033-19eff0359b62", 00:31:00.200 "is_configured": true, 00:31:00.200 "data_offset": 2048, 00:31:00.200 "data_size": 63488 00:31:00.200 }, 00:31:00.200 { 00:31:00.200 "name": null, 00:31:00.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:00.200 "is_configured": false, 00:31:00.200 "data_offset": 2048, 00:31:00.200 "data_size": 63488 00:31:00.200 }, 00:31:00.200 { 00:31:00.200 "name": "BaseBdev3", 00:31:00.200 "uuid": 
"fd560381-eb75-5212-bda5-5d92763f5dbd", 00:31:00.200 "is_configured": true, 00:31:00.200 "data_offset": 2048, 00:31:00.200 "data_size": 63488 00:31:00.200 }, 00:31:00.200 { 00:31:00.200 "name": "BaseBdev4", 00:31:00.200 "uuid": "b4a7d347-1b99-507f-8926-8b71fab1b0eb", 00:31:00.200 "is_configured": true, 00:31:00.200 "data_offset": 2048, 00:31:00.200 "data_size": 63488 00:31:00.200 } 00:31:00.200 ] 00:31:00.200 }' 00:31:00.200 00:57:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:00.200 00:57:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:00.768 00:57:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:00.768 00:57:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:00.768 00:57:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:00.768 00:57:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:00.768 00:57:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:00.768 00:57:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:00.768 00:57:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:00.768 00:57:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:00.768 "name": "raid_bdev1", 00:31:00.768 "uuid": "e2535585-8da0-485e-992f-e8d8fdf03450", 00:31:00.768 "strip_size_kb": 0, 00:31:00.768 "state": "online", 00:31:00.768 "raid_level": "raid1", 00:31:00.768 "superblock": true, 00:31:00.768 "num_base_bdevs": 4, 00:31:00.768 "num_base_bdevs_discovered": 3, 00:31:00.768 "num_base_bdevs_operational": 3, 00:31:00.768 "base_bdevs_list": [ 00:31:00.768 { 00:31:00.768 "name": "spare", 00:31:00.768 "uuid": "3fae248e-c0fc-5c23-b033-19eff0359b62", 00:31:00.768 "is_configured": true, 00:31:00.768 "data_offset": 2048, 00:31:00.768 "data_size": 63488 00:31:00.768 }, 00:31:00.768 { 00:31:00.768 "name": null, 00:31:00.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:00.768 "is_configured": false, 00:31:00.768 "data_offset": 2048, 00:31:00.768 "data_size": 63488 00:31:00.768 }, 00:31:00.768 { 00:31:00.768 "name": "BaseBdev3", 00:31:00.768 "uuid": "fd560381-eb75-5212-bda5-5d92763f5dbd", 00:31:00.768 "is_configured": true, 00:31:00.768 "data_offset": 2048, 00:31:00.768 "data_size": 63488 00:31:00.768 }, 00:31:00.768 { 00:31:00.768 "name": "BaseBdev4", 00:31:00.768 "uuid": "b4a7d347-1b99-507f-8926-8b71fab1b0eb", 00:31:00.768 "is_configured": true, 00:31:00.768 "data_offset": 2048, 00:31:00.768 "data_size": 63488 00:31:00.768 } 00:31:00.768 ] 00:31:00.768 }' 00:31:00.768 00:57:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:00.768 00:57:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:00.768 00:57:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:00.768 00:57:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:00.768 00:57:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:31:00.768 00:57:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:01.026 00:57:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:31:01.026 00:57:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:01.284 [2024-07-25 00:57:23.821700] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:01.284 00:57:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:01.284 00:57:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:01.285 00:57:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:01.285 00:57:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:01.285 00:57:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:01.285 00:57:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:01.285 00:57:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:01.285 00:57:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:01.285 00:57:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:01.285 00:57:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:01.285 00:57:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:01.285 00:57:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:01.544 00:57:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:01.544 "name": "raid_bdev1", 00:31:01.544 "uuid": "e2535585-8da0-485e-992f-e8d8fdf03450", 00:31:01.544 "strip_size_kb": 0, 00:31:01.544 "state": "online", 00:31:01.544 "raid_level": "raid1", 00:31:01.544 "superblock": true, 00:31:01.544 "num_base_bdevs": 4, 00:31:01.544 "num_base_bdevs_discovered": 2, 00:31:01.544 "num_base_bdevs_operational": 2, 00:31:01.544 "base_bdevs_list": [ 00:31:01.544 { 00:31:01.544 "name": null, 00:31:01.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:01.544 "is_configured": false, 00:31:01.544 "data_offset": 2048, 00:31:01.544 "data_size": 63488 00:31:01.544 }, 00:31:01.544 { 00:31:01.544 "name": null, 00:31:01.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:01.544 "is_configured": false, 00:31:01.544 "data_offset": 2048, 00:31:01.544 "data_size": 63488 00:31:01.544 }, 00:31:01.544 { 00:31:01.544 "name": "BaseBdev3", 00:31:01.544 "uuid": "fd560381-eb75-5212-bda5-5d92763f5dbd", 00:31:01.544 "is_configured": true, 00:31:01.544 "data_offset": 2048, 00:31:01.544 "data_size": 63488 00:31:01.544 }, 00:31:01.544 { 00:31:01.544 "name": "BaseBdev4", 00:31:01.544 "uuid": "b4a7d347-1b99-507f-8926-8b71fab1b0eb", 00:31:01.544 "is_configured": true, 00:31:01.544 "data_offset": 2048, 00:31:01.544 "data_size": 63488 00:31:01.544 } 00:31:01.544 ] 00:31:01.544 }' 00:31:01.544 00:57:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:01.544 00:57:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:02.125 00:57:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:02.394 [2024-07-25 00:57:24.769883] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:02.394 [2024-07-25 00:57:24.770083] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:31:02.394 [2024-07-25 00:57:24.770098] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:31:02.394 [2024-07-25 00:57:24.770156] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:02.394 [2024-07-25 00:57:24.782987] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2090 00:31:02.394 [2024-07-25 00:57:24.784940] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:02.394 00:57:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:31:03.328 00:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:03.328 00:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:03.328 00:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:03.328 00:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:03.328 00:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:03.328 00:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:03.328 00:57:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:03.586 00:57:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:03.586 "name": "raid_bdev1", 00:31:03.586 "uuid": "e2535585-8da0-485e-992f-e8d8fdf03450", 00:31:03.586 "strip_size_kb": 0, 00:31:03.586 "state": "online", 00:31:03.586 "raid_level": "raid1", 00:31:03.586 "superblock": true, 00:31:03.586 "num_base_bdevs": 4, 00:31:03.586 "num_base_bdevs_discovered": 3, 00:31:03.586 "num_base_bdevs_operational": 3, 00:31:03.586 "process": { 00:31:03.586 "type": "rebuild", 00:31:03.586 "target": "spare", 00:31:03.586 "progress": { 00:31:03.586 "blocks": 24576, 00:31:03.586 "percent": 38 00:31:03.586 } 00:31:03.586 }, 00:31:03.586 "base_bdevs_list": [ 00:31:03.586 { 00:31:03.586 "name": "spare", 00:31:03.586 "uuid": "3fae248e-c0fc-5c23-b033-19eff0359b62", 00:31:03.586 "is_configured": true, 00:31:03.586 "data_offset": 2048, 00:31:03.586 "data_size": 63488 00:31:03.586 }, 00:31:03.586 { 00:31:03.586 "name": null, 00:31:03.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:03.586 "is_configured": false, 00:31:03.586 "data_offset": 2048, 00:31:03.586 "data_size": 63488 00:31:03.586 }, 00:31:03.586 { 00:31:03.586 "name": "BaseBdev3", 00:31:03.586 "uuid": "fd560381-eb75-5212-bda5-5d92763f5dbd", 00:31:03.586 "is_configured": true, 00:31:03.586 "data_offset": 2048, 00:31:03.586 "data_size": 63488 00:31:03.586 }, 00:31:03.586 { 00:31:03.586 "name": "BaseBdev4", 00:31:03.586 "uuid": "b4a7d347-1b99-507f-8926-8b71fab1b0eb", 00:31:03.586 "is_configured": true, 00:31:03.586 "data_offset": 2048, 00:31:03.586 "data_size": 63488 00:31:03.586 } 
00:31:03.586 ] 00:31:03.586 }' 00:31:03.586 00:57:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:03.586 00:57:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:03.586 00:57:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:03.586 00:57:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:03.586 00:57:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:03.844 [2024-07-25 00:57:26.395457] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:03.844 [2024-07-25 00:57:26.395984] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:03.844 [2024-07-25 00:57:26.396066] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:03.844 [2024-07-25 00:57:26.396083] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:03.844 [2024-07-25 00:57:26.396092] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:03.844 00:57:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:03.844 00:57:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:03.844 00:57:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:03.844 00:57:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:03.844 00:57:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:03.844 00:57:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:03.844 00:57:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:03.844 00:57:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:03.844 00:57:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:03.844 00:57:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:03.844 00:57:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:03.844 00:57:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:04.102 00:57:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:04.102 "name": "raid_bdev1", 00:31:04.102 "uuid": "e2535585-8da0-485e-992f-e8d8fdf03450", 00:31:04.102 "strip_size_kb": 0, 00:31:04.102 "state": "online", 00:31:04.102 "raid_level": "raid1", 00:31:04.102 "superblock": true, 00:31:04.102 "num_base_bdevs": 4, 00:31:04.102 "num_base_bdevs_discovered": 2, 00:31:04.102 "num_base_bdevs_operational": 2, 00:31:04.102 "base_bdevs_list": [ 00:31:04.102 { 00:31:04.102 "name": null, 00:31:04.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:04.102 "is_configured": false, 00:31:04.102 "data_offset": 2048, 00:31:04.102 "data_size": 63488 00:31:04.102 }, 00:31:04.102 { 00:31:04.102 "name": null, 00:31:04.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:04.102 "is_configured": 
false, 00:31:04.102 "data_offset": 2048, 00:31:04.102 "data_size": 63488 00:31:04.102 }, 00:31:04.102 { 00:31:04.102 "name": "BaseBdev3", 00:31:04.102 "uuid": "fd560381-eb75-5212-bda5-5d92763f5dbd", 00:31:04.102 "is_configured": true, 00:31:04.103 "data_offset": 2048, 00:31:04.103 "data_size": 63488 00:31:04.103 }, 00:31:04.103 { 00:31:04.103 "name": "BaseBdev4", 00:31:04.103 "uuid": "b4a7d347-1b99-507f-8926-8b71fab1b0eb", 00:31:04.103 "is_configured": true, 00:31:04.103 "data_offset": 2048, 00:31:04.103 "data_size": 63488 00:31:04.103 } 00:31:04.103 ] 00:31:04.103 }' 00:31:04.103 00:57:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:04.103 00:57:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:04.669 00:57:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:04.928 [2024-07-25 00:57:27.461866] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:04.928 [2024-07-25 00:57:27.461981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:04.928 [2024-07-25 00:57:27.462030] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:31:04.928 [2024-07-25 00:57:27.462059] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:04.928 [2024-07-25 00:57:27.462702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:04.928 [2024-07-25 00:57:27.462749] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:04.928 [2024-07-25 00:57:27.462887] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:04.928 [2024-07-25 00:57:27.462902] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:31:04.928 [2024-07-25 00:57:27.462911] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
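The NOTICE lines above show why a fresh rebuild starts as soon as the spare passthru is re-created: examine finds the old raid superblock on the device, and because its sequence number (5) is behind the live raid bdev's (6) the member is treated as stale and re-added as a rebuild target rather than as a clean member. A small polling sketch built only from RPCs that appear in this run (the one-second interval is an arbitrary choice, not part of the test script):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    while :; do
        # ".process" disappears from the JSON once the rebuild has finished
        pct=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
              jq -r '.[] | select(.name == "raid_bdev1") | .process.progress.percent // "done"')
        echo "rebuild progress: $pct"
        [[ $pct == done ]] && break
        sleep 1
    done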
00:31:04.928 [2024-07-25 00:57:27.462945] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:04.928 [2024-07-25 00:57:27.477519] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc23d0 00:31:04.928 spare 00:31:04.928 [2024-07-25 00:57:27.479884] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:04.928 00:57:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:31:05.864 00:57:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:05.864 00:57:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:05.864 00:57:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:05.864 00:57:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:05.864 00:57:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:05.864 00:57:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:05.864 00:57:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:06.122 00:57:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:06.122 "name": "raid_bdev1", 00:31:06.122 "uuid": "e2535585-8da0-485e-992f-e8d8fdf03450", 00:31:06.122 "strip_size_kb": 0, 00:31:06.122 "state": "online", 00:31:06.122 "raid_level": "raid1", 00:31:06.122 "superblock": true, 00:31:06.122 "num_base_bdevs": 4, 00:31:06.122 "num_base_bdevs_discovered": 3, 00:31:06.122 "num_base_bdevs_operational": 3, 00:31:06.122 "process": { 00:31:06.122 "type": "rebuild", 00:31:06.122 "target": "spare", 00:31:06.122 "progress": { 00:31:06.122 "blocks": 24576, 00:31:06.122 "percent": 38 00:31:06.122 } 00:31:06.122 }, 00:31:06.122 "base_bdevs_list": [ 00:31:06.122 { 00:31:06.122 "name": "spare", 00:31:06.122 "uuid": "3fae248e-c0fc-5c23-b033-19eff0359b62", 00:31:06.122 "is_configured": true, 00:31:06.122 "data_offset": 2048, 00:31:06.122 "data_size": 63488 00:31:06.122 }, 00:31:06.122 { 00:31:06.122 "name": null, 00:31:06.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:06.122 "is_configured": false, 00:31:06.122 "data_offset": 2048, 00:31:06.122 "data_size": 63488 00:31:06.122 }, 00:31:06.122 { 00:31:06.122 "name": "BaseBdev3", 00:31:06.122 "uuid": "fd560381-eb75-5212-bda5-5d92763f5dbd", 00:31:06.122 "is_configured": true, 00:31:06.122 "data_offset": 2048, 00:31:06.122 "data_size": 63488 00:31:06.122 }, 00:31:06.122 { 00:31:06.122 "name": "BaseBdev4", 00:31:06.122 "uuid": "b4a7d347-1b99-507f-8926-8b71fab1b0eb", 00:31:06.122 "is_configured": true, 00:31:06.122 "data_offset": 2048, 00:31:06.122 "data_size": 63488 00:31:06.122 } 00:31:06.122 ] 00:31:06.122 }' 00:31:06.122 00:57:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:06.381 00:57:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:06.381 00:57:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:06.381 00:57:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:06.381 00:57:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:06.640 [2024-07-25 00:57:29.078591] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:06.640 [2024-07-25 00:57:29.093219] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:06.640 [2024-07-25 00:57:29.093298] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:06.640 [2024-07-25 00:57:29.093315] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:06.640 [2024-07-25 00:57:29.093323] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:06.640 00:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:06.640 00:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:06.640 00:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:06.640 00:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:06.640 00:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:06.640 00:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:06.640 00:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:06.640 00:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:06.640 00:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:06.640 00:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:06.640 00:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:06.640 00:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:06.899 00:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:06.899 "name": "raid_bdev1", 00:31:06.899 "uuid": "e2535585-8da0-485e-992f-e8d8fdf03450", 00:31:06.899 "strip_size_kb": 0, 00:31:06.899 "state": "online", 00:31:06.899 "raid_level": "raid1", 00:31:06.899 "superblock": true, 00:31:06.899 "num_base_bdevs": 4, 00:31:06.899 "num_base_bdevs_discovered": 2, 00:31:06.899 "num_base_bdevs_operational": 2, 00:31:06.899 "base_bdevs_list": [ 00:31:06.899 { 00:31:06.899 "name": null, 00:31:06.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:06.899 "is_configured": false, 00:31:06.899 "data_offset": 2048, 00:31:06.899 "data_size": 63488 00:31:06.899 }, 00:31:06.899 { 00:31:06.899 "name": null, 00:31:06.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:06.899 "is_configured": false, 00:31:06.899 "data_offset": 2048, 00:31:06.899 "data_size": 63488 00:31:06.899 }, 00:31:06.899 { 00:31:06.899 "name": "BaseBdev3", 00:31:06.899 "uuid": "fd560381-eb75-5212-bda5-5d92763f5dbd", 00:31:06.899 "is_configured": true, 00:31:06.899 "data_offset": 2048, 00:31:06.899 "data_size": 63488 00:31:06.899 }, 00:31:06.899 { 00:31:06.899 "name": "BaseBdev4", 00:31:06.899 "uuid": "b4a7d347-1b99-507f-8926-8b71fab1b0eb", 00:31:06.899 "is_configured": true, 00:31:06.899 "data_offset": 2048, 00:31:06.899 "data_size": 63488 00:31:06.899 } 00:31:06.899 ] 00:31:06.899 }' 
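The dump just above confirms the array survives losing the spare mid-rebuild: the state stays online while only two of the four base bdevs remain configured, which is what the verify_raid_bdev_state raid_bdev1 online raid1 0 2 call traced here checks field by field. A compact illustration over the same fields (not the test's own helper):

    # prints e.g. "online raid1 2/4 operational=2" using only fields shown in the JSON above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r \
        '.[] | select(.name == "raid_bdev1")
         | "\(.state) \(.raid_level) \(.num_base_bdevs_discovered)/\(.num_base_bdevs) operational=\(.num_base_bdevs_operational)"'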
00:31:06.899 00:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:06.899 00:57:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:07.466 00:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:07.466 00:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:07.466 00:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:07.466 00:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:07.466 00:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:07.466 00:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:07.466 00:57:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:07.726 00:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:07.726 "name": "raid_bdev1", 00:31:07.726 "uuid": "e2535585-8da0-485e-992f-e8d8fdf03450", 00:31:07.726 "strip_size_kb": 0, 00:31:07.726 "state": "online", 00:31:07.726 "raid_level": "raid1", 00:31:07.726 "superblock": true, 00:31:07.726 "num_base_bdevs": 4, 00:31:07.726 "num_base_bdevs_discovered": 2, 00:31:07.726 "num_base_bdevs_operational": 2, 00:31:07.726 "base_bdevs_list": [ 00:31:07.726 { 00:31:07.726 "name": null, 00:31:07.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:07.726 "is_configured": false, 00:31:07.726 "data_offset": 2048, 00:31:07.726 "data_size": 63488 00:31:07.726 }, 00:31:07.726 { 00:31:07.726 "name": null, 00:31:07.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:07.726 "is_configured": false, 00:31:07.726 "data_offset": 2048, 00:31:07.726 "data_size": 63488 00:31:07.726 }, 00:31:07.726 { 00:31:07.726 "name": "BaseBdev3", 00:31:07.726 "uuid": "fd560381-eb75-5212-bda5-5d92763f5dbd", 00:31:07.726 "is_configured": true, 00:31:07.726 "data_offset": 2048, 00:31:07.726 "data_size": 63488 00:31:07.726 }, 00:31:07.726 { 00:31:07.726 "name": "BaseBdev4", 00:31:07.726 "uuid": "b4a7d347-1b99-507f-8926-8b71fab1b0eb", 00:31:07.726 "is_configured": true, 00:31:07.726 "data_offset": 2048, 00:31:07.726 "data_size": 63488 00:31:07.726 } 00:31:07.726 ] 00:31:07.726 }' 00:31:07.726 00:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:07.726 00:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:07.726 00:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:07.726 00:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:07.726 00:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:31:07.985 00:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:08.244 [2024-07-25 00:57:30.691090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:08.244 [2024-07-25 00:57:30.691189] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:31:08.244 [2024-07-25 00:57:30.691238] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:31:08.244 [2024-07-25 00:57:30.691262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:08.244 [2024-07-25 00:57:30.691778] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:08.244 [2024-07-25 00:57:30.691816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:08.244 [2024-07-25 00:57:30.691957] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:31:08.244 [2024-07-25 00:57:30.691970] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:31:08.244 [2024-07-25 00:57:30.691978] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:08.244 BaseBdev1 00:31:08.244 00:57:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:31:09.182 00:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:09.182 00:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:09.182 00:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:09.182 00:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:09.182 00:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:09.182 00:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:09.182 00:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:09.182 00:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:09.182 00:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:09.182 00:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:09.182 00:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:09.182 00:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:09.441 00:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:09.441 "name": "raid_bdev1", 00:31:09.441 "uuid": "e2535585-8da0-485e-992f-e8d8fdf03450", 00:31:09.441 "strip_size_kb": 0, 00:31:09.441 "state": "online", 00:31:09.441 "raid_level": "raid1", 00:31:09.441 "superblock": true, 00:31:09.441 "num_base_bdevs": 4, 00:31:09.441 "num_base_bdevs_discovered": 2, 00:31:09.441 "num_base_bdevs_operational": 2, 00:31:09.441 "base_bdevs_list": [ 00:31:09.441 { 00:31:09.441 "name": null, 00:31:09.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:09.441 "is_configured": false, 00:31:09.441 "data_offset": 2048, 00:31:09.441 "data_size": 63488 00:31:09.441 }, 00:31:09.441 { 00:31:09.441 "name": null, 00:31:09.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:09.441 "is_configured": false, 00:31:09.441 "data_offset": 2048, 00:31:09.441 "data_size": 63488 00:31:09.441 }, 00:31:09.441 { 00:31:09.441 "name": "BaseBdev3", 00:31:09.441 "uuid": "fd560381-eb75-5212-bda5-5d92763f5dbd", 00:31:09.441 "is_configured": 
true, 00:31:09.441 "data_offset": 2048, 00:31:09.441 "data_size": 63488 00:31:09.441 }, 00:31:09.441 { 00:31:09.441 "name": "BaseBdev4", 00:31:09.441 "uuid": "b4a7d347-1b99-507f-8926-8b71fab1b0eb", 00:31:09.441 "is_configured": true, 00:31:09.441 "data_offset": 2048, 00:31:09.441 "data_size": 63488 00:31:09.441 } 00:31:09.441 ] 00:31:09.441 }' 00:31:09.441 00:57:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:09.441 00:57:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:10.009 00:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:10.009 00:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:10.009 00:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:10.009 00:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:10.009 00:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:10.009 00:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:10.009 00:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:10.269 00:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:10.269 "name": "raid_bdev1", 00:31:10.269 "uuid": "e2535585-8da0-485e-992f-e8d8fdf03450", 00:31:10.269 "strip_size_kb": 0, 00:31:10.269 "state": "online", 00:31:10.269 "raid_level": "raid1", 00:31:10.269 "superblock": true, 00:31:10.269 "num_base_bdevs": 4, 00:31:10.269 "num_base_bdevs_discovered": 2, 00:31:10.269 "num_base_bdevs_operational": 2, 00:31:10.269 "base_bdevs_list": [ 00:31:10.269 { 00:31:10.269 "name": null, 00:31:10.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:10.269 "is_configured": false, 00:31:10.269 "data_offset": 2048, 00:31:10.269 "data_size": 63488 00:31:10.269 }, 00:31:10.269 { 00:31:10.269 "name": null, 00:31:10.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:10.269 "is_configured": false, 00:31:10.269 "data_offset": 2048, 00:31:10.269 "data_size": 63488 00:31:10.269 }, 00:31:10.269 { 00:31:10.269 "name": "BaseBdev3", 00:31:10.269 "uuid": "fd560381-eb75-5212-bda5-5d92763f5dbd", 00:31:10.269 "is_configured": true, 00:31:10.269 "data_offset": 2048, 00:31:10.269 "data_size": 63488 00:31:10.269 }, 00:31:10.269 { 00:31:10.269 "name": "BaseBdev4", 00:31:10.269 "uuid": "b4a7d347-1b99-507f-8926-8b71fab1b0eb", 00:31:10.269 "is_configured": true, 00:31:10.269 "data_offset": 2048, 00:31:10.269 "data_size": 63488 00:31:10.269 } 00:31:10.269 ] 00:31:10.269 }' 00:31:10.269 00:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:10.269 00:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:10.269 00:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:10.269 00:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:10.269 00:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:10.269 00:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 
-- # local es=0 00:31:10.269 00:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:10.269 00:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:10.269 00:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:10.269 00:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:10.269 00:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:10.269 00:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:10.269 00:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:10.269 00:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:10.269 00:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:31:10.269 00:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:10.541 [2024-07-25 00:57:32.979560] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:10.541 [2024-07-25 00:57:32.979794] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:31:10.541 [2024-07-25 00:57:32.979806] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:10.541 request: 00:31:10.541 { 00:31:10.541 "base_bdev": "BaseBdev1", 00:31:10.541 "raid_bdev": "raid_bdev1", 00:31:10.541 "method": "bdev_raid_add_base_bdev", 00:31:10.541 "req_id": 1 00:31:10.541 } 00:31:10.541 Got JSON-RPC error response 00:31:10.541 response: 00:31:10.541 { 00:31:10.541 "code": -22, 00:31:10.541 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:31:10.541 } 00:31:10.541 00:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:31:10.541 00:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:10.541 00:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:10.541 00:57:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:10.541 00:57:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:31:11.503 00:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:11.503 00:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:11.503 00:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:11.503 00:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:11.503 00:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:11.503 00:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:31:11.503 00:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:11.503 00:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:11.504 00:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:11.504 00:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:11.504 00:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:11.504 00:57:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:11.762 00:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:11.762 "name": "raid_bdev1", 00:31:11.762 "uuid": "e2535585-8da0-485e-992f-e8d8fdf03450", 00:31:11.762 "strip_size_kb": 0, 00:31:11.762 "state": "online", 00:31:11.762 "raid_level": "raid1", 00:31:11.762 "superblock": true, 00:31:11.762 "num_base_bdevs": 4, 00:31:11.762 "num_base_bdevs_discovered": 2, 00:31:11.762 "num_base_bdevs_operational": 2, 00:31:11.762 "base_bdevs_list": [ 00:31:11.762 { 00:31:11.762 "name": null, 00:31:11.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:11.762 "is_configured": false, 00:31:11.762 "data_offset": 2048, 00:31:11.762 "data_size": 63488 00:31:11.762 }, 00:31:11.762 { 00:31:11.762 "name": null, 00:31:11.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:11.762 "is_configured": false, 00:31:11.762 "data_offset": 2048, 00:31:11.762 "data_size": 63488 00:31:11.762 }, 00:31:11.762 { 00:31:11.762 "name": "BaseBdev3", 00:31:11.762 "uuid": "fd560381-eb75-5212-bda5-5d92763f5dbd", 00:31:11.762 "is_configured": true, 00:31:11.763 "data_offset": 2048, 00:31:11.763 "data_size": 63488 00:31:11.763 }, 00:31:11.763 { 00:31:11.763 "name": "BaseBdev4", 00:31:11.763 "uuid": "b4a7d347-1b99-507f-8926-8b71fab1b0eb", 00:31:11.763 "is_configured": true, 00:31:11.763 "data_offset": 2048, 00:31:11.763 "data_size": 63488 00:31:11.763 } 00:31:11.763 ] 00:31:11.763 }' 00:31:11.763 00:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:11.763 00:57:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:12.331 00:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:12.331 00:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:12.331 00:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:12.331 00:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:12.331 00:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:12.331 00:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:12.331 00:57:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:12.590 00:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:12.590 "name": "raid_bdev1", 00:31:12.590 "uuid": "e2535585-8da0-485e-992f-e8d8fdf03450", 00:31:12.590 "strip_size_kb": 0, 00:31:12.590 "state": "online", 00:31:12.590 "raid_level": "raid1", 00:31:12.590 "superblock": 
true, 00:31:12.590 "num_base_bdevs": 4, 00:31:12.590 "num_base_bdevs_discovered": 2, 00:31:12.590 "num_base_bdevs_operational": 2, 00:31:12.590 "base_bdevs_list": [ 00:31:12.590 { 00:31:12.590 "name": null, 00:31:12.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:12.590 "is_configured": false, 00:31:12.590 "data_offset": 2048, 00:31:12.590 "data_size": 63488 00:31:12.590 }, 00:31:12.590 { 00:31:12.590 "name": null, 00:31:12.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:12.590 "is_configured": false, 00:31:12.590 "data_offset": 2048, 00:31:12.590 "data_size": 63488 00:31:12.590 }, 00:31:12.590 { 00:31:12.590 "name": "BaseBdev3", 00:31:12.590 "uuid": "fd560381-eb75-5212-bda5-5d92763f5dbd", 00:31:12.590 "is_configured": true, 00:31:12.590 "data_offset": 2048, 00:31:12.590 "data_size": 63488 00:31:12.590 }, 00:31:12.590 { 00:31:12.590 "name": "BaseBdev4", 00:31:12.590 "uuid": "b4a7d347-1b99-507f-8926-8b71fab1b0eb", 00:31:12.590 "is_configured": true, 00:31:12.590 "data_offset": 2048, 00:31:12.590 "data_size": 63488 00:31:12.590 } 00:31:12.590 ] 00:31:12.590 }' 00:31:12.590 00:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:12.590 00:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:12.590 00:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:12.590 00:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:12.590 00:57:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 147963 00:31:12.590 00:57:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 147963 ']' 00:31:12.590 00:57:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 147963 00:31:12.590 00:57:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:31:12.590 00:57:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:12.590 00:57:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 147963 00:31:12.590 00:57:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:12.590 00:57:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:12.591 00:57:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 147963' 00:31:12.591 killing process with pid 147963 00:31:12.591 00:57:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 147963 00:31:12.591 Received shutdown signal, test time was about 60.000000 seconds 00:31:12.591 00:31:12.591 Latency(us) 00:31:12.591 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:12.591 =================================================================================================================== 00:31:12.591 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:12.591 [2024-07-25 00:57:35.151524] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:12.591 00:57:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 147963 00:31:12.591 [2024-07-25 00:57:35.151667] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:12.591 [2024-07-25 00:57:35.151746] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:31:12.591 [2024-07-25 00:57:35.151756] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:31:13.159 [2024-07-25 00:57:35.668315] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:31:14.537 00:31:14.537 real 0m36.521s 00:31:14.537 user 0m53.197s 00:31:14.537 sys 0m5.447s 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:14.537 ************************************ 00:31:14.537 END TEST raid_rebuild_test_sb 00:31:14.537 ************************************ 00:31:14.537 00:57:36 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:31:14.537 00:57:36 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:31:14.537 00:57:36 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:14.537 00:57:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:14.537 ************************************ 00:31:14.537 START TEST raid_rebuild_test_io 00:31:14.537 ************************************ 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 false true true 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 
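The xtrace above (bdev_raid.sh@573) shows raid_rebuild_test building its list of base bdev names in a counting loop before anything is created over RPC. A minimal sketch of that loop, with names mirroring the trace (num_base_bdevs=4 for this run; the real helper lives in the SPDK bdev_raid.sh test script):

# Sketch of the traced loop: one "BaseBdevN" name per base device.
num_base_bdevs=4
base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo "BaseBdev$i"; done))
echo "${base_bdevs[@]}"   # BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4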
00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=148980 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 148980 /var/tmp/spdk-raid.sock 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@829 -- # '[' -z 148980 ']' 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:14.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:14.537 00:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:14.537 [2024-07-25 00:57:37.040850] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:31:14.537 [2024-07-25 00:57:37.041038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148980 ] 00:31:14.537 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:14.537 Zero copy mechanism will not be used. 
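Right above, the test starts bdevperf as the application hosting the RAID bdev and waits for its RPC socket. A hedged condensation of that launch follows; the binary path, socket and flags are copied from the trace, but the flag reading (60-second randrw run at a 50% read mix, 3 MiB I/Os, queue depth 2, -z to idle until a perform_tests RPC arrives) is my interpretation and worth checking against bdevperf's own help output:

# Sketch of the traced bdevperf launch; waitforlisten is the autotest_common.sh helper seen in the log.
rpc_sock=/var/tmp/spdk-raid.sock
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r "$rpc_sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!
waitforlisten "$raid_pid" "$rpc_sock"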
00:31:14.796 [2024-07-25 00:57:37.197131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.796 [2024-07-25 00:57:37.386022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.055 [2024-07-25 00:57:37.570416] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:15.314 00:57:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:15.314 00:57:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # return 0 00:31:15.314 00:57:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:15.314 00:57:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:15.578 BaseBdev1_malloc 00:31:15.838 00:57:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:15.838 [2024-07-25 00:57:38.391220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:15.838 [2024-07-25 00:57:38.391324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:15.838 [2024-07-25 00:57:38.391375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:31:15.838 [2024-07-25 00:57:38.391394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:15.838 [2024-07-25 00:57:38.393716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:15.838 [2024-07-25 00:57:38.393768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:15.838 BaseBdev1 00:31:15.838 00:57:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:15.838 00:57:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:16.097 BaseBdev2_malloc 00:31:16.097 00:57:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:16.356 [2024-07-25 00:57:38.864105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:16.356 [2024-07-25 00:57:38.864203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:16.356 [2024-07-25 00:57:38.864254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:31:16.356 [2024-07-25 00:57:38.864273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:16.356 [2024-07-25 00:57:38.866512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:16.356 [2024-07-25 00:57:38.866557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:16.356 BaseBdev2 00:31:16.356 00:57:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:16.356 00:57:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:31:16.616 BaseBdev3_malloc 00:31:16.616 00:57:39 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:31:16.616 [2024-07-25 00:57:39.246084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:31:16.616 [2024-07-25 00:57:39.246184] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:16.616 [2024-07-25 00:57:39.246216] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:31:16.616 [2024-07-25 00:57:39.246240] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:16.616 [2024-07-25 00:57:39.248416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:16.616 [2024-07-25 00:57:39.248465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:31:16.616 BaseBdev3 00:31:16.616 00:57:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:16.616 00:57:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:31:16.875 BaseBdev4_malloc 00:31:16.875 00:57:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:31:17.134 [2024-07-25 00:57:39.685573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:31:17.134 [2024-07-25 00:57:39.685683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:17.134 [2024-07-25 00:57:39.685718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:31:17.134 [2024-07-25 00:57:39.685742] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:17.134 [2024-07-25 00:57:39.687975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:17.134 [2024-07-25 00:57:39.688023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:31:17.134 BaseBdev4 00:31:17.134 00:57:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:31:17.393 spare_malloc 00:31:17.393 00:57:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:17.652 spare_delay 00:31:17.652 00:57:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:17.652 [2024-07-25 00:57:40.239137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:17.652 [2024-07-25 00:57:40.239222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:17.652 [2024-07-25 00:57:40.239252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:31:17.652 [2024-07-25 00:57:40.239281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:17.652 [2024-07-25 00:57:40.241552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:17.652 [2024-07-25 
00:57:40.241604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:17.652 spare 00:31:17.652 00:57:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:31:17.911 [2024-07-25 00:57:40.403203] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:17.911 [2024-07-25 00:57:40.404969] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:17.911 [2024-07-25 00:57:40.405038] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:17.911 [2024-07-25 00:57:40.405081] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:31:17.911 [2024-07-25 00:57:40.405167] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:31:17.911 [2024-07-25 00:57:40.405175] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:31:17.911 [2024-07-25 00:57:40.405295] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:31:17.911 [2024-07-25 00:57:40.405594] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:31:17.911 [2024-07-25 00:57:40.405604] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:31:17.911 [2024-07-25 00:57:40.405739] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:17.911 00:57:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:31:17.911 00:57:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:17.911 00:57:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:17.911 00:57:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:17.911 00:57:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:17.911 00:57:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:31:17.911 00:57:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:17.911 00:57:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:17.911 00:57:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:17.911 00:57:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:17.911 00:57:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:17.911 00:57:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:18.171 00:57:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:18.171 "name": "raid_bdev1", 00:31:18.171 "uuid": "996495c5-1651-4573-8999-8738209c97ef", 00:31:18.171 "strip_size_kb": 0, 00:31:18.171 "state": "online", 00:31:18.171 "raid_level": "raid1", 00:31:18.171 "superblock": false, 00:31:18.171 "num_base_bdevs": 4, 00:31:18.171 "num_base_bdevs_discovered": 4, 00:31:18.171 "num_base_bdevs_operational": 4, 00:31:18.171 "base_bdevs_list": [ 00:31:18.171 { 
00:31:18.171 "name": "BaseBdev1", 00:31:18.171 "uuid": "2c8221d3-ccce-5d0a-b8e8-08ea0f417687", 00:31:18.171 "is_configured": true, 00:31:18.171 "data_offset": 0, 00:31:18.171 "data_size": 65536 00:31:18.171 }, 00:31:18.171 { 00:31:18.171 "name": "BaseBdev2", 00:31:18.171 "uuid": "ad35bcb9-65bb-514a-931e-fc521736da1c", 00:31:18.171 "is_configured": true, 00:31:18.171 "data_offset": 0, 00:31:18.171 "data_size": 65536 00:31:18.171 }, 00:31:18.171 { 00:31:18.171 "name": "BaseBdev3", 00:31:18.171 "uuid": "160d3595-bf15-5e3d-98b1-21f5eeaf37cb", 00:31:18.171 "is_configured": true, 00:31:18.171 "data_offset": 0, 00:31:18.171 "data_size": 65536 00:31:18.171 }, 00:31:18.171 { 00:31:18.171 "name": "BaseBdev4", 00:31:18.171 "uuid": "9518cd93-9ee0-567b-b285-0374e4608eef", 00:31:18.171 "is_configured": true, 00:31:18.171 "data_offset": 0, 00:31:18.171 "data_size": 65536 00:31:18.171 } 00:31:18.171 ] 00:31:18.171 }' 00:31:18.171 00:57:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:18.171 00:57:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:18.739 00:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:18.739 00:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:31:18.739 [2024-07-25 00:57:41.379595] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:18.998 00:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:31:18.998 00:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:18.998 00:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:19.257 00:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:31:19.257 00:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:31:19.257 00:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:31:19.257 00:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:31:19.257 [2024-07-25 00:57:41.786548] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:31:19.257 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:19.257 Zero copy mechanism will not be used. 00:31:19.257 Running I/O for 60 seconds... 
00:31:19.531 [2024-07-25 00:57:41.912821] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:19.531 [2024-07-25 00:57:41.918519] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006150 00:31:19.531 00:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:19.531 00:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:19.531 00:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:19.531 00:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:19.531 00:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:19.531 00:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:19.531 00:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:19.531 00:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:19.531 00:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:19.531 00:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:19.531 00:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:19.531 00:57:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:19.817 00:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:19.817 "name": "raid_bdev1", 00:31:19.817 "uuid": "996495c5-1651-4573-8999-8738209c97ef", 00:31:19.817 "strip_size_kb": 0, 00:31:19.817 "state": "online", 00:31:19.817 "raid_level": "raid1", 00:31:19.817 "superblock": false, 00:31:19.817 "num_base_bdevs": 4, 00:31:19.817 "num_base_bdevs_discovered": 3, 00:31:19.817 "num_base_bdevs_operational": 3, 00:31:19.817 "base_bdevs_list": [ 00:31:19.817 { 00:31:19.817 "name": null, 00:31:19.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:19.817 "is_configured": false, 00:31:19.817 "data_offset": 0, 00:31:19.817 "data_size": 65536 00:31:19.817 }, 00:31:19.817 { 00:31:19.817 "name": "BaseBdev2", 00:31:19.817 "uuid": "ad35bcb9-65bb-514a-931e-fc521736da1c", 00:31:19.817 "is_configured": true, 00:31:19.817 "data_offset": 0, 00:31:19.817 "data_size": 65536 00:31:19.817 }, 00:31:19.817 { 00:31:19.817 "name": "BaseBdev3", 00:31:19.817 "uuid": "160d3595-bf15-5e3d-98b1-21f5eeaf37cb", 00:31:19.817 "is_configured": true, 00:31:19.817 "data_offset": 0, 00:31:19.817 "data_size": 65536 00:31:19.817 }, 00:31:19.817 { 00:31:19.817 "name": "BaseBdev4", 00:31:19.817 "uuid": "9518cd93-9ee0-567b-b285-0374e4608eef", 00:31:19.817 "is_configured": true, 00:31:19.817 "data_offset": 0, 00:31:19.817 "data_size": 65536 00:31:19.817 } 00:31:19.817 ] 00:31:19.817 }' 00:31:19.817 00:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:19.817 00:57:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:20.386 00:57:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:20.386 [2024-07-25 00:57:43.009404] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev spare is claimed 00:31:20.645 [2024-07-25 00:57:43.064040] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:31:20.645 [2024-07-25 00:57:43.066261] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:20.645 00:57:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:31:20.645 [2024-07-25 00:57:43.176016] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:20.645 [2024-07-25 00:57:43.176733] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:20.908 [2024-07-25 00:57:43.379683] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:20.908 [2024-07-25 00:57:43.380039] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:21.166 [2024-07-25 00:57:43.611239] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:31:21.425 [2024-07-25 00:57:43.828955] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:21.425 [2024-07-25 00:57:43.829780] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:21.425 00:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:21.425 00:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:21.425 00:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:21.425 00:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:21.425 00:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:21.684 00:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:21.684 00:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:21.684 [2024-07-25 00:57:44.166716] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:31:21.684 00:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:21.684 "name": "raid_bdev1", 00:31:21.684 "uuid": "996495c5-1651-4573-8999-8738209c97ef", 00:31:21.684 "strip_size_kb": 0, 00:31:21.684 "state": "online", 00:31:21.684 "raid_level": "raid1", 00:31:21.684 "superblock": false, 00:31:21.684 "num_base_bdevs": 4, 00:31:21.684 "num_base_bdevs_discovered": 4, 00:31:21.684 "num_base_bdevs_operational": 4, 00:31:21.684 "process": { 00:31:21.684 "type": "rebuild", 00:31:21.684 "target": "spare", 00:31:21.684 "progress": { 00:31:21.684 "blocks": 14336, 00:31:21.684 "percent": 21 00:31:21.684 } 00:31:21.684 }, 00:31:21.684 "base_bdevs_list": [ 00:31:21.684 { 00:31:21.684 "name": "spare", 00:31:21.684 "uuid": "09b10cfe-4c3e-59b3-8544-61a31b0aee32", 00:31:21.684 "is_configured": true, 00:31:21.684 "data_offset": 0, 00:31:21.684 "data_size": 65536 00:31:21.684 }, 00:31:21.685 { 00:31:21.685 "name": "BaseBdev2", 00:31:21.685 "uuid": 
"ad35bcb9-65bb-514a-931e-fc521736da1c", 00:31:21.685 "is_configured": true, 00:31:21.685 "data_offset": 0, 00:31:21.685 "data_size": 65536 00:31:21.685 }, 00:31:21.685 { 00:31:21.685 "name": "BaseBdev3", 00:31:21.685 "uuid": "160d3595-bf15-5e3d-98b1-21f5eeaf37cb", 00:31:21.685 "is_configured": true, 00:31:21.685 "data_offset": 0, 00:31:21.685 "data_size": 65536 00:31:21.685 }, 00:31:21.685 { 00:31:21.685 "name": "BaseBdev4", 00:31:21.685 "uuid": "9518cd93-9ee0-567b-b285-0374e4608eef", 00:31:21.685 "is_configured": true, 00:31:21.685 "data_offset": 0, 00:31:21.685 "data_size": 65536 00:31:21.685 } 00:31:21.685 ] 00:31:21.685 }' 00:31:21.685 00:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:21.943 00:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:21.943 00:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:21.943 [2024-07-25 00:57:44.390077] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:31:21.943 00:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:21.943 00:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:22.202 [2024-07-25 00:57:44.627568] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:22.202 [2024-07-25 00:57:44.734068] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:31:22.202 [2024-07-25 00:57:44.741124] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:22.202 [2024-07-25 00:57:44.757963] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:22.202 [2024-07-25 00:57:44.758148] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:22.202 [2024-07-25 00:57:44.758191] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:22.202 [2024-07-25 00:57:44.785410] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006150 00:31:22.202 00:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:22.202 00:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:22.202 00:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:22.202 00:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:22.202 00:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:22.202 00:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:22.202 00:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:22.202 00:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:22.202 00:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:22.202 00:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:22.202 00:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:22.202 00:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:22.461 00:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:22.461 "name": "raid_bdev1", 00:31:22.461 "uuid": "996495c5-1651-4573-8999-8738209c97ef", 00:31:22.461 "strip_size_kb": 0, 00:31:22.461 "state": "online", 00:31:22.461 "raid_level": "raid1", 00:31:22.461 "superblock": false, 00:31:22.461 "num_base_bdevs": 4, 00:31:22.461 "num_base_bdevs_discovered": 3, 00:31:22.461 "num_base_bdevs_operational": 3, 00:31:22.461 "base_bdevs_list": [ 00:31:22.461 { 00:31:22.461 "name": null, 00:31:22.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:22.461 "is_configured": false, 00:31:22.461 "data_offset": 0, 00:31:22.461 "data_size": 65536 00:31:22.461 }, 00:31:22.461 { 00:31:22.461 "name": "BaseBdev2", 00:31:22.461 "uuid": "ad35bcb9-65bb-514a-931e-fc521736da1c", 00:31:22.461 "is_configured": true, 00:31:22.461 "data_offset": 0, 00:31:22.461 "data_size": 65536 00:31:22.461 }, 00:31:22.461 { 00:31:22.461 "name": "BaseBdev3", 00:31:22.461 "uuid": "160d3595-bf15-5e3d-98b1-21f5eeaf37cb", 00:31:22.461 "is_configured": true, 00:31:22.461 "data_offset": 0, 00:31:22.461 "data_size": 65536 00:31:22.461 }, 00:31:22.461 { 00:31:22.461 "name": "BaseBdev4", 00:31:22.461 "uuid": "9518cd93-9ee0-567b-b285-0374e4608eef", 00:31:22.461 "is_configured": true, 00:31:22.461 "data_offset": 0, 00:31:22.461 "data_size": 65536 00:31:22.461 } 00:31:22.461 ] 00:31:22.461 }' 00:31:22.461 00:57:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:22.461 00:57:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:23.030 00:57:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:23.030 00:57:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:23.030 00:57:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:23.030 00:57:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:23.030 00:57:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:23.030 00:57:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:23.030 00:57:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:23.290 00:57:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:23.290 "name": "raid_bdev1", 00:31:23.290 "uuid": "996495c5-1651-4573-8999-8738209c97ef", 00:31:23.290 "strip_size_kb": 0, 00:31:23.290 "state": "online", 00:31:23.290 "raid_level": "raid1", 00:31:23.290 "superblock": false, 00:31:23.290 "num_base_bdevs": 4, 00:31:23.290 "num_base_bdevs_discovered": 3, 00:31:23.290 "num_base_bdevs_operational": 3, 00:31:23.290 "base_bdevs_list": [ 00:31:23.290 { 00:31:23.290 "name": null, 00:31:23.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:23.290 "is_configured": false, 00:31:23.290 "data_offset": 0, 00:31:23.290 "data_size": 65536 00:31:23.290 }, 00:31:23.290 { 00:31:23.290 "name": "BaseBdev2", 00:31:23.290 "uuid": "ad35bcb9-65bb-514a-931e-fc521736da1c", 00:31:23.290 "is_configured": true, 
00:31:23.290 "data_offset": 0, 00:31:23.290 "data_size": 65536 00:31:23.290 }, 00:31:23.290 { 00:31:23.290 "name": "BaseBdev3", 00:31:23.290 "uuid": "160d3595-bf15-5e3d-98b1-21f5eeaf37cb", 00:31:23.290 "is_configured": true, 00:31:23.290 "data_offset": 0, 00:31:23.290 "data_size": 65536 00:31:23.290 }, 00:31:23.290 { 00:31:23.290 "name": "BaseBdev4", 00:31:23.290 "uuid": "9518cd93-9ee0-567b-b285-0374e4608eef", 00:31:23.290 "is_configured": true, 00:31:23.290 "data_offset": 0, 00:31:23.290 "data_size": 65536 00:31:23.290 } 00:31:23.290 ] 00:31:23.290 }' 00:31:23.290 00:57:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:23.290 00:57:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:23.290 00:57:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:23.550 00:57:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:23.550 00:57:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:23.550 [2024-07-25 00:57:46.121957] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:23.550 00:57:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:31:23.550 [2024-07-25 00:57:46.181849] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:31:23.550 [2024-07-25 00:57:46.183899] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:23.810 [2024-07-25 00:57:46.320782] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:23.810 [2024-07-25 00:57:46.321431] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:24.069 [2024-07-25 00:57:46.540017] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:24.069 [2024-07-25 00:57:46.540844] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:24.329 [2024-07-25 00:57:46.873898] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:31:24.329 [2024-07-25 00:57:46.874520] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:31:24.589 [2024-07-25 00:57:47.085705] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:24.589 00:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:24.589 00:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:24.589 00:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:24.589 00:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:24.589 00:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:24.589 00:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:24.589 
00:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:24.848 [2024-07-25 00:57:47.320449] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:31:24.848 [2024-07-25 00:57:47.321156] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:31:24.848 00:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:24.848 "name": "raid_bdev1", 00:31:24.848 "uuid": "996495c5-1651-4573-8999-8738209c97ef", 00:31:24.848 "strip_size_kb": 0, 00:31:24.848 "state": "online", 00:31:24.848 "raid_level": "raid1", 00:31:24.848 "superblock": false, 00:31:24.848 "num_base_bdevs": 4, 00:31:24.848 "num_base_bdevs_discovered": 4, 00:31:24.848 "num_base_bdevs_operational": 4, 00:31:24.848 "process": { 00:31:24.848 "type": "rebuild", 00:31:24.848 "target": "spare", 00:31:24.848 "progress": { 00:31:24.848 "blocks": 14336, 00:31:24.848 "percent": 21 00:31:24.848 } 00:31:24.848 }, 00:31:24.848 "base_bdevs_list": [ 00:31:24.848 { 00:31:24.848 "name": "spare", 00:31:24.848 "uuid": "09b10cfe-4c3e-59b3-8544-61a31b0aee32", 00:31:24.848 "is_configured": true, 00:31:24.848 "data_offset": 0, 00:31:24.848 "data_size": 65536 00:31:24.848 }, 00:31:24.848 { 00:31:24.848 "name": "BaseBdev2", 00:31:24.848 "uuid": "ad35bcb9-65bb-514a-931e-fc521736da1c", 00:31:24.848 "is_configured": true, 00:31:24.848 "data_offset": 0, 00:31:24.848 "data_size": 65536 00:31:24.848 }, 00:31:24.848 { 00:31:24.848 "name": "BaseBdev3", 00:31:24.848 "uuid": "160d3595-bf15-5e3d-98b1-21f5eeaf37cb", 00:31:24.848 "is_configured": true, 00:31:24.848 "data_offset": 0, 00:31:24.848 "data_size": 65536 00:31:24.848 }, 00:31:24.848 { 00:31:24.848 "name": "BaseBdev4", 00:31:24.848 "uuid": "9518cd93-9ee0-567b-b285-0374e4608eef", 00:31:24.848 "is_configured": true, 00:31:24.848 "data_offset": 0, 00:31:24.848 "data_size": 65536 00:31:24.848 } 00:31:24.848 ] 00:31:24.848 }' 00:31:24.848 00:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:24.848 [2024-07-25 00:57:47.424563] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:31:24.848 00:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:24.848 00:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:25.108 00:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:25.108 00:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:31:25.108 00:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:31:25.108 00:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:31:25.108 00:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:31:25.108 00:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:31:25.108 [2024-07-25 00:57:47.649104] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:31:25.367 [2024-07-25 00:57:47.764999] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: 
*DEBUG*: BaseBdev2 00:31:25.367 [2024-07-25 00:57:47.869416] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:31:25.367 [2024-07-25 00:57:47.882695] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006150 00:31:25.367 [2024-07-25 00:57:47.882819] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:31:25.367 00:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:31:25.367 00:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:31:25.367 00:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:25.367 00:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:25.367 00:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:25.367 00:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:25.367 00:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:25.367 00:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:25.367 00:57:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:25.625 [2024-07-25 00:57:48.134075] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:31:25.625 [2024-07-25 00:57:48.135151] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:31:25.625 00:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:25.625 "name": "raid_bdev1", 00:31:25.625 "uuid": "996495c5-1651-4573-8999-8738209c97ef", 00:31:25.625 "strip_size_kb": 0, 00:31:25.625 "state": "online", 00:31:25.625 "raid_level": "raid1", 00:31:25.625 "superblock": false, 00:31:25.625 "num_base_bdevs": 4, 00:31:25.625 "num_base_bdevs_discovered": 3, 00:31:25.625 "num_base_bdevs_operational": 3, 00:31:25.625 "process": { 00:31:25.625 "type": "rebuild", 00:31:25.625 "target": "spare", 00:31:25.625 "progress": { 00:31:25.625 "blocks": 26624, 00:31:25.625 "percent": 40 00:31:25.625 } 00:31:25.625 }, 00:31:25.625 "base_bdevs_list": [ 00:31:25.625 { 00:31:25.625 "name": "spare", 00:31:25.625 "uuid": "09b10cfe-4c3e-59b3-8544-61a31b0aee32", 00:31:25.625 "is_configured": true, 00:31:25.625 "data_offset": 0, 00:31:25.625 "data_size": 65536 00:31:25.625 }, 00:31:25.625 { 00:31:25.626 "name": null, 00:31:25.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:25.626 "is_configured": false, 00:31:25.626 "data_offset": 0, 00:31:25.626 "data_size": 65536 00:31:25.626 }, 00:31:25.626 { 00:31:25.626 "name": "BaseBdev3", 00:31:25.626 "uuid": "160d3595-bf15-5e3d-98b1-21f5eeaf37cb", 00:31:25.626 "is_configured": true, 00:31:25.626 "data_offset": 0, 00:31:25.626 "data_size": 65536 00:31:25.626 }, 00:31:25.626 { 00:31:25.626 "name": "BaseBdev4", 00:31:25.626 "uuid": "9518cd93-9ee0-567b-b285-0374e4608eef", 00:31:25.626 "is_configured": true, 00:31:25.626 "data_offset": 0, 00:31:25.626 "data_size": 65536 00:31:25.626 } 00:31:25.626 ] 00:31:25.626 }' 00:31:25.626 00:57:48 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:25.626 00:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:25.626 00:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:25.626 00:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:25.626 00:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=945 00:31:25.626 00:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:25.626 00:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:25.626 00:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:25.626 00:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:25.626 00:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:25.626 00:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:25.626 00:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:25.626 00:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:25.884 00:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:25.884 "name": "raid_bdev1", 00:31:25.884 "uuid": "996495c5-1651-4573-8999-8738209c97ef", 00:31:25.884 "strip_size_kb": 0, 00:31:25.884 "state": "online", 00:31:25.884 "raid_level": "raid1", 00:31:25.884 "superblock": false, 00:31:25.884 "num_base_bdevs": 4, 00:31:25.884 "num_base_bdevs_discovered": 3, 00:31:25.884 "num_base_bdevs_operational": 3, 00:31:25.884 "process": { 00:31:25.884 "type": "rebuild", 00:31:25.884 "target": "spare", 00:31:25.884 "progress": { 00:31:25.884 "blocks": 28672, 00:31:25.884 "percent": 43 00:31:25.884 } 00:31:25.884 }, 00:31:25.884 "base_bdevs_list": [ 00:31:25.884 { 00:31:25.884 "name": "spare", 00:31:25.884 "uuid": "09b10cfe-4c3e-59b3-8544-61a31b0aee32", 00:31:25.884 "is_configured": true, 00:31:25.884 "data_offset": 0, 00:31:25.884 "data_size": 65536 00:31:25.884 }, 00:31:25.884 { 00:31:25.884 "name": null, 00:31:25.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:25.884 "is_configured": false, 00:31:25.884 "data_offset": 0, 00:31:25.884 "data_size": 65536 00:31:25.884 }, 00:31:25.884 { 00:31:25.884 "name": "BaseBdev3", 00:31:25.884 "uuid": "160d3595-bf15-5e3d-98b1-21f5eeaf37cb", 00:31:25.884 "is_configured": true, 00:31:25.884 "data_offset": 0, 00:31:25.884 "data_size": 65536 00:31:25.884 }, 00:31:25.884 { 00:31:25.884 "name": "BaseBdev4", 00:31:25.884 "uuid": "9518cd93-9ee0-567b-b285-0374e4608eef", 00:31:25.884 "is_configured": true, 00:31:25.884 "data_offset": 0, 00:31:25.884 "data_size": 65536 00:31:25.884 } 00:31:25.884 ] 00:31:25.884 }' 00:31:25.884 00:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:25.884 00:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:25.884 00:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:25.884 00:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 
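From here the test sits in the wait loop traced at bdev_raid.sh@705-710: as long as bash's SECONDS counter is below the deadline it re-reads raid_bdev1, checks that a rebuild targeting the spare is still reported, and sleeps for a second. A hedged sketch of that polling pattern; the exit condition is my paraphrase of the control flow, not the literal helper:

# Poll the rebuild progress once per second until it stops being reported.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
timeout=945   # deadline on bash's SECONDS counter, value taken from the trace
while (( SECONDS < timeout )); do
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    ptype=$(jq -r '.process.type // "none"' <<< "$info")
    ptarget=$(jq -r '.process.target // "none"' <<< "$info")
    [[ $ptype == rebuild && $ptarget == spare ]] || break   # rebuild no longer in progress
    sleep 1
done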
00:31:25.884 00:57:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:26.142 [2024-07-25 00:57:48.704153] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:31:26.142 [2024-07-25 00:57:48.704747] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:31:26.709 [2024-07-25 00:57:49.326395] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:31:26.968 [2024-07-25 00:57:49.428246] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:31:26.968 00:57:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:26.968 00:57:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:26.968 00:57:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:26.968 00:57:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:26.968 00:57:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:26.968 00:57:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:26.969 00:57:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:26.969 00:57:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:27.228 00:57:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:27.228 "name": "raid_bdev1", 00:31:27.228 "uuid": "996495c5-1651-4573-8999-8738209c97ef", 00:31:27.228 "strip_size_kb": 0, 00:31:27.228 "state": "online", 00:31:27.228 "raid_level": "raid1", 00:31:27.228 "superblock": false, 00:31:27.228 "num_base_bdevs": 4, 00:31:27.228 "num_base_bdevs_discovered": 3, 00:31:27.228 "num_base_bdevs_operational": 3, 00:31:27.228 "process": { 00:31:27.228 "type": "rebuild", 00:31:27.228 "target": "spare", 00:31:27.228 "progress": { 00:31:27.228 "blocks": 53248, 00:31:27.228 "percent": 81 00:31:27.228 } 00:31:27.228 }, 00:31:27.228 "base_bdevs_list": [ 00:31:27.228 { 00:31:27.228 "name": "spare", 00:31:27.228 "uuid": "09b10cfe-4c3e-59b3-8544-61a31b0aee32", 00:31:27.228 "is_configured": true, 00:31:27.228 "data_offset": 0, 00:31:27.228 "data_size": 65536 00:31:27.228 }, 00:31:27.228 { 00:31:27.228 "name": null, 00:31:27.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:27.228 "is_configured": false, 00:31:27.228 "data_offset": 0, 00:31:27.228 "data_size": 65536 00:31:27.228 }, 00:31:27.228 { 00:31:27.228 "name": "BaseBdev3", 00:31:27.228 "uuid": "160d3595-bf15-5e3d-98b1-21f5eeaf37cb", 00:31:27.228 "is_configured": true, 00:31:27.228 "data_offset": 0, 00:31:27.228 "data_size": 65536 00:31:27.228 }, 00:31:27.228 { 00:31:27.228 "name": "BaseBdev4", 00:31:27.228 "uuid": "9518cd93-9ee0-567b-b285-0374e4608eef", 00:31:27.228 "is_configured": true, 00:31:27.228 "data_offset": 0, 00:31:27.228 "data_size": 65536 00:31:27.228 } 00:31:27.228 ] 00:31:27.228 }' 00:31:27.228 00:57:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:27.228 00:57:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:31:27.228 00:57:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:27.228 00:57:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:27.228 00:57:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:27.487 [2024-07-25 00:57:49.989895] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:31:28.054 [2024-07-25 00:57:50.428348] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:28.054 [2024-07-25 00:57:50.528419] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:28.054 [2024-07-25 00:57:50.530906] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:28.313 00:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:28.313 00:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:28.313 00:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:28.313 00:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:28.313 00:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:28.313 00:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:28.313 00:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:28.313 00:57:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:28.574 00:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:28.574 "name": "raid_bdev1", 00:31:28.574 "uuid": "996495c5-1651-4573-8999-8738209c97ef", 00:31:28.574 "strip_size_kb": 0, 00:31:28.574 "state": "online", 00:31:28.574 "raid_level": "raid1", 00:31:28.574 "superblock": false, 00:31:28.574 "num_base_bdevs": 4, 00:31:28.574 "num_base_bdevs_discovered": 3, 00:31:28.574 "num_base_bdevs_operational": 3, 00:31:28.574 "base_bdevs_list": [ 00:31:28.574 { 00:31:28.574 "name": "spare", 00:31:28.574 "uuid": "09b10cfe-4c3e-59b3-8544-61a31b0aee32", 00:31:28.574 "is_configured": true, 00:31:28.574 "data_offset": 0, 00:31:28.574 "data_size": 65536 00:31:28.574 }, 00:31:28.574 { 00:31:28.574 "name": null, 00:31:28.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:28.574 "is_configured": false, 00:31:28.574 "data_offset": 0, 00:31:28.574 "data_size": 65536 00:31:28.574 }, 00:31:28.574 { 00:31:28.574 "name": "BaseBdev3", 00:31:28.574 "uuid": "160d3595-bf15-5e3d-98b1-21f5eeaf37cb", 00:31:28.574 "is_configured": true, 00:31:28.574 "data_offset": 0, 00:31:28.574 "data_size": 65536 00:31:28.574 }, 00:31:28.574 { 00:31:28.574 "name": "BaseBdev4", 00:31:28.574 "uuid": "9518cd93-9ee0-567b-b285-0374e4608eef", 00:31:28.574 "is_configured": true, 00:31:28.574 "data_offset": 0, 00:31:28.574 "data_size": 65536 00:31:28.574 } 00:31:28.574 ] 00:31:28.574 }' 00:31:28.575 00:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:28.575 00:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:28.575 00:57:51 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:28.575 00:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:31:28.575 00:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:31:28.575 00:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:28.575 00:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:28.575 00:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:28.575 00:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:28.575 00:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:28.575 00:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:28.575 00:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:28.834 00:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:28.834 "name": "raid_bdev1", 00:31:28.834 "uuid": "996495c5-1651-4573-8999-8738209c97ef", 00:31:28.834 "strip_size_kb": 0, 00:31:28.834 "state": "online", 00:31:28.834 "raid_level": "raid1", 00:31:28.834 "superblock": false, 00:31:28.834 "num_base_bdevs": 4, 00:31:28.834 "num_base_bdevs_discovered": 3, 00:31:28.834 "num_base_bdevs_operational": 3, 00:31:28.834 "base_bdevs_list": [ 00:31:28.834 { 00:31:28.834 "name": "spare", 00:31:28.834 "uuid": "09b10cfe-4c3e-59b3-8544-61a31b0aee32", 00:31:28.834 "is_configured": true, 00:31:28.834 "data_offset": 0, 00:31:28.834 "data_size": 65536 00:31:28.834 }, 00:31:28.834 { 00:31:28.834 "name": null, 00:31:28.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:28.834 "is_configured": false, 00:31:28.834 "data_offset": 0, 00:31:28.834 "data_size": 65536 00:31:28.834 }, 00:31:28.834 { 00:31:28.834 "name": "BaseBdev3", 00:31:28.834 "uuid": "160d3595-bf15-5e3d-98b1-21f5eeaf37cb", 00:31:28.834 "is_configured": true, 00:31:28.834 "data_offset": 0, 00:31:28.834 "data_size": 65536 00:31:28.834 }, 00:31:28.834 { 00:31:28.834 "name": "BaseBdev4", 00:31:28.834 "uuid": "9518cd93-9ee0-567b-b285-0374e4608eef", 00:31:28.834 "is_configured": true, 00:31:28.834 "data_offset": 0, 00:31:28.834 "data_size": 65536 00:31:28.834 } 00:31:28.834 ] 00:31:28.834 }' 00:31:28.834 00:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:28.834 00:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:28.834 00:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:28.834 00:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:28.834 00:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:28.834 00:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:28.834 00:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:28.834 00:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:28.834 00:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:28.834 
00:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:28.834 00:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:28.834 00:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:28.834 00:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:28.834 00:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:28.834 00:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:28.834 00:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:29.093 00:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:29.093 "name": "raid_bdev1", 00:31:29.093 "uuid": "996495c5-1651-4573-8999-8738209c97ef", 00:31:29.093 "strip_size_kb": 0, 00:31:29.093 "state": "online", 00:31:29.093 "raid_level": "raid1", 00:31:29.093 "superblock": false, 00:31:29.093 "num_base_bdevs": 4, 00:31:29.093 "num_base_bdevs_discovered": 3, 00:31:29.093 "num_base_bdevs_operational": 3, 00:31:29.093 "base_bdevs_list": [ 00:31:29.093 { 00:31:29.093 "name": "spare", 00:31:29.093 "uuid": "09b10cfe-4c3e-59b3-8544-61a31b0aee32", 00:31:29.093 "is_configured": true, 00:31:29.093 "data_offset": 0, 00:31:29.093 "data_size": 65536 00:31:29.093 }, 00:31:29.093 { 00:31:29.093 "name": null, 00:31:29.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:29.093 "is_configured": false, 00:31:29.093 "data_offset": 0, 00:31:29.093 "data_size": 65536 00:31:29.093 }, 00:31:29.093 { 00:31:29.093 "name": "BaseBdev3", 00:31:29.093 "uuid": "160d3595-bf15-5e3d-98b1-21f5eeaf37cb", 00:31:29.093 "is_configured": true, 00:31:29.093 "data_offset": 0, 00:31:29.093 "data_size": 65536 00:31:29.093 }, 00:31:29.093 { 00:31:29.093 "name": "BaseBdev4", 00:31:29.093 "uuid": "9518cd93-9ee0-567b-b285-0374e4608eef", 00:31:29.093 "is_configured": true, 00:31:29.093 "data_offset": 0, 00:31:29.093 "data_size": 65536 00:31:29.093 } 00:31:29.093 ] 00:31:29.093 }' 00:31:29.093 00:57:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:29.093 00:57:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:30.153 00:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:30.153 [2024-07-25 00:57:52.571604] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:30.153 [2024-07-25 00:57:52.571826] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:30.153 00:31:30.153 Latency(us) 00:31:30.153 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:30.153 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:31:30.153 raid_bdev1 : 10.80 104.05 312.15 0.00 0.00 13123.98 298.42 110350.14 00:31:30.153 =================================================================================================================== 00:31:30.153 Total : 104.05 312.15 0.00 0.00 13123.98 298.42 110350.14 00:31:30.153 [2024-07-25 00:57:52.611845] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:30.153 [2024-07-25 00:57:52.612011] bdev_raid.c: 
486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:30.153 [2024-07-25 00:57:52.612134] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:30.153 0 00:31:30.154 [2024-07-25 00:57:52.612232] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:31:30.154 00:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:30.154 00:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:31:30.412 00:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:31:30.412 00:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:31:30.412 00:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:31:30.412 00:57:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:31:30.412 00:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:30.412 00:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:31:30.412 00:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:30.412 00:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:30.412 00:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:30.412 00:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:31:30.412 00:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:30.412 00:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:30.412 00:57:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:31:30.671 /dev/nbd0 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:30.671 1+0 records in 00:31:30.671 1+0 records out 00:31:30.671 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000538984 s, 7.6 MB/s 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # continue 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:30.671 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:31:30.929 /dev/nbd1 00:31:30.929 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:30.929 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:30.929 00:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:31:30.929 00:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:31:30.929 00:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:30.929 00:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:30.929 00:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:31:30.929 00:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:31:30.929 00:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:30.929 00:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:30.929 00:57:53 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:30.929 1+0 records in 00:31:30.929 1+0 records out 00:31:30.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000619903 s, 6.6 MB/s 00:31:30.929 00:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:30.929 00:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:31:30.929 00:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:30.929 00:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:30.929 00:57:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:31:30.929 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:30.929 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:30.929 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:31:31.186 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:31:31.186 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:31.186 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:31:31.186 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:31.186 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:31:31.186 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:31.187 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:31:31.445 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:31.445 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:31.445 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:31.445 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:31.445 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:31.445 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:31.445 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:31:31.445 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:31:31.445 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:31:31.445 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:31:31.445 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:31:31.445 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:31.445 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:31:31.445 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:31.445 00:57:53 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:31:31.445 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:31.445 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:31:31.445 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:31.445 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:31.445 00:57:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:31:31.703 /dev/nbd1 00:31:31.703 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:31.703 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:31.703 00:57:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:31:31.703 00:57:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:31:31.703 00:57:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:31.703 00:57:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:31.703 00:57:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:31:31.703 00:57:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:31:31.703 00:57:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:31.703 00:57:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:31.703 00:57:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:31.703 1+0 records in 00:31:31.703 1+0 records out 00:31:31.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000636636 s, 6.4 MB/s 00:31:31.703 00:57:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:31.703 00:57:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:31:31.703 00:57:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:31.703 00:57:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:31.703 00:57:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:31:31.703 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:31.703 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:31.703 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:31:31.703 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:31:31.703 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:31.703 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:31:31.703 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:31.704 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:31:31.704 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:31:31.704 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:31:31.962 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:31.962 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:31.962 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:31.962 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:31.962 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:31.962 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:31.962 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:31:31.962 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:31:31.962 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:31:31.962 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:31.962 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:31.962 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:31.962 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:31:31.962 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:31.962 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:32.221 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:32.221 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:32.221 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:32.221 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:32.221 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:32.221 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:32.221 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:31:32.221 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:31:32.221 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:31:32.221 00:57:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 148980 00:31:32.221 00:57:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@948 -- # '[' -z 148980 ']' 00:31:32.221 00:57:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # kill -0 148980 00:31:32.221 00:57:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # uname 00:31:32.221 00:57:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:32.221 00:57:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 148980 00:31:32.221 00:57:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:32.221 00:57:54 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:32.221 00:57:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 148980' 00:31:32.221 killing process with pid 148980 00:31:32.221 00:57:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # kill 148980 00:31:32.221 Received shutdown signal, test time was about 13.045735 seconds 00:31:32.221 00:31:32.221 Latency(us) 00:31:32.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:32.221 =================================================================================================================== 00:31:32.221 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:32.221 00:57:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # wait 148980 00:31:32.221 [2024-07-25 00:57:54.834817] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:32.787 [2024-07-25 00:57:55.214854] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:34.164 ************************************ 00:31:34.164 END TEST raid_rebuild_test_io 00:31:34.164 ************************************ 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:31:34.164 00:31:34.164 real 0m19.500s 00:31:34.164 user 0m29.486s 00:31:34.164 sys 0m2.987s 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:34.164 00:57:56 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:31:34.164 00:57:56 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:31:34.164 00:57:56 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:34.164 00:57:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:34.164 ************************************ 00:31:34.164 START TEST raid_rebuild_test_sb_io 00:31:34.164 ************************************ 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 true true true 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:31:34.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=149505 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 149505 /var/tmp/spdk-raid.sock 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@829 -- # '[' -z 149505 ']' 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:34.164 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:34.165 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:34.165 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:34.165 00:57:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:34.165 [2024-07-25 00:57:56.639451] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:31:34.165 [2024-07-25 00:57:56.639833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149505 ] 00:31:34.165 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:34.165 Zero copy mechanism will not be used. 00:31:34.423 [2024-07-25 00:57:56.819122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:34.423 [2024-07-25 00:57:56.997531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:34.681 [2024-07-25 00:57:57.182887] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:34.940 00:57:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:34.940 00:57:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # return 0 00:31:34.940 00:57:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:34.940 00:57:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:35.200 BaseBdev1_malloc 00:31:35.200 00:57:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:35.460 [2024-07-25 00:57:58.009261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:35.460 [2024-07-25 00:57:58.009488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:35.460 [2024-07-25 00:57:58.009643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:31:35.460 [2024-07-25 00:57:58.009746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:35.460 [2024-07-25 00:57:58.012086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:35.460 [2024-07-25 00:57:58.012233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:35.460 BaseBdev1 00:31:35.460 00:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:35.460 00:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:35.718 BaseBdev2_malloc 00:31:35.718 00:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:35.976 [2024-07-25 00:57:58.418851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:35.976 [2024-07-25 00:57:58.419102] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:35.976 [2024-07-25 00:57:58.419173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:31:35.976 [2024-07-25 00:57:58.419278] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:35.976 [2024-07-25 00:57:58.421497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:35.976 [2024-07-25 00:57:58.421655] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:35.976 BaseBdev2 00:31:35.976 00:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:35.976 00:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:31:36.234 BaseBdev3_malloc 00:31:36.234 00:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:31:36.493 [2024-07-25 00:57:58.902561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:31:36.493 [2024-07-25 00:57:58.902810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:36.493 [2024-07-25 00:57:58.902877] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:31:36.493 [2024-07-25 00:57:58.902973] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:36.493 [2024-07-25 00:57:58.905150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:36.493 [2024-07-25 00:57:58.905318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:31:36.493 BaseBdev3 00:31:36.493 00:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:36.493 00:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:31:36.493 BaseBdev4_malloc 00:31:36.493 00:57:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:31:36.751 [2024-07-25 00:57:59.288396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:31:36.751 [2024-07-25 00:57:59.288643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:36.751 [2024-07-25 00:57:59.288789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:31:36.751 [2024-07-25 00:57:59.288897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:36.751 [2024-07-25 00:57:59.291193] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:36.751 [2024-07-25 00:57:59.291373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:31:36.751 BaseBdev4 00:31:36.751 00:57:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:31:37.010 spare_malloc 00:31:37.010 00:57:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:37.269 spare_delay 00:31:37.269 00:57:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:37.528 [2024-07-25 00:57:59.953368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
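Each RAID member in the trace above is built as a malloc bdev wrapped in a passthru bdev, and the spare additionally gets a delay bdev in between. A condensed sketch of the same construction, with the RPC calls and arguments taken from the trace (the loop form is an editorial simplification), might be:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for i in 1 2 3 4; do
        # 32 MiB backing store with 512-byte blocks, then a claimable passthru on top
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
        "$rpc" -s "$sock" bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev${i}"
    done
    # spare gets a delay bdev with 100000 us write latency, presumably so rebuild progress stays observable
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b spare_malloc
    "$rpc" -s "$sock" bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    "$rpc" -s "$sock" bdev_passthru_create -b spare_delay -p spare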
00:31:37.528 [2024-07-25 00:57:59.953637] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:37.528 [2024-07-25 00:57:59.953702] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:31:37.528 [2024-07-25 00:57:59.953811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:37.528 [2024-07-25 00:57:59.956067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:37.528 [2024-07-25 00:57:59.956228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:37.528 spare 00:31:37.528 00:57:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:31:37.786 [2024-07-25 00:58:00.197464] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:37.786 [2024-07-25 00:58:00.199515] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:37.786 [2024-07-25 00:58:00.199706] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:37.786 [2024-07-25 00:58:00.199784] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:31:37.786 [2024-07-25 00:58:00.200065] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:31:37.786 [2024-07-25 00:58:00.200168] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:37.786 [2024-07-25 00:58:00.200311] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:31:37.786 [2024-07-25 00:58:00.200768] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:31:37.786 [2024-07-25 00:58:00.200874] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:31:37.786 [2024-07-25 00:58:00.201112] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:37.786 00:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:31:37.786 00:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:37.786 00:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:37.787 00:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:37.787 00:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:37.787 00:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:31:37.787 00:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:37.787 00:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:37.787 00:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:37.787 00:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:37.787 00:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:37.787 00:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:31:37.787 00:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:37.787 "name": "raid_bdev1", 00:31:37.787 "uuid": "bd34769d-49de-49a2-99e7-b03fc3b43840", 00:31:37.787 "strip_size_kb": 0, 00:31:37.787 "state": "online", 00:31:37.787 "raid_level": "raid1", 00:31:37.787 "superblock": true, 00:31:37.787 "num_base_bdevs": 4, 00:31:37.787 "num_base_bdevs_discovered": 4, 00:31:37.787 "num_base_bdevs_operational": 4, 00:31:37.787 "base_bdevs_list": [ 00:31:37.787 { 00:31:37.787 "name": "BaseBdev1", 00:31:37.787 "uuid": "03a128f5-e0a2-56a1-a372-cbf902d0693c", 00:31:37.787 "is_configured": true, 00:31:37.787 "data_offset": 2048, 00:31:37.787 "data_size": 63488 00:31:37.787 }, 00:31:37.787 { 00:31:37.787 "name": "BaseBdev2", 00:31:37.787 "uuid": "39f0f380-45b0-5f43-b668-72d7b9098cb0", 00:31:37.787 "is_configured": true, 00:31:37.787 "data_offset": 2048, 00:31:37.787 "data_size": 63488 00:31:37.787 }, 00:31:37.787 { 00:31:37.787 "name": "BaseBdev3", 00:31:37.787 "uuid": "3e1a2d9f-b382-5b39-9344-c074dc543b5e", 00:31:37.787 "is_configured": true, 00:31:37.787 "data_offset": 2048, 00:31:37.787 "data_size": 63488 00:31:37.787 }, 00:31:37.787 { 00:31:37.787 "name": "BaseBdev4", 00:31:37.787 "uuid": "ea509297-3636-5c60-8bf5-146a9ce1a113", 00:31:37.787 "is_configured": true, 00:31:37.787 "data_offset": 2048, 00:31:37.787 "data_size": 63488 00:31:37.787 } 00:31:37.787 ] 00:31:37.787 }' 00:31:37.787 00:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:37.787 00:58:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:38.354 00:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:31:38.354 00:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:38.613 [2024-07-25 00:58:01.081818] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:38.613 00:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:31:38.613 00:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:38.613 00:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:38.873 00:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:31:38.873 00:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:31:38.873 00:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:31:38.873 00:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:31:38.873 [2024-07-25 00:58:01.380514] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:31:38.873 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:38.873 Zero copy mechanism will not be used. 00:31:38.873 Running I/O for 60 seconds... 
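At this point the trace launches a timed randrw workload against raid_bdev1 and degrades the array while that I/O is in flight. Reduced to its essentials (paths and flags copied from the trace; ordering, error handling, and cleanup simplified, so this is a sketch rather than the test's actual control flow), the orchestration amounts to roughly:

    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/spdk-raid.sock
    # bdevperf serves the RPC socket itself; -z defers I/O until perform_tests is issued.
    # Workload: 60 s, 50/50 randrw, 3 MiB I/Os, queue depth 2, targeting only raid_bdev1.
    "$spdk/build/examples/bdevperf" -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 \
        -o 3M -q 2 -U -z -L bdev_raid &
    bdevperf_pid=$!
    # ... create the base bdevs and the raid1 bdev via rpc.py as shown earlier ...
    "$spdk/scripts/rpc.py" -s "$sock" bdev_raid_remove_base_bdev BaseBdev1   # degrade the array under load
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests      # kick off the timed workload
    # ... then verify state via bdev_raid_get_bdevs and terminate $bdevperf_pid when done ...

The "I/O size of 3145728 is greater than zero copy threshold" and "Running I/O for 60 seconds..." messages below come from bdevperf once perform_tests starts the workload with these parameters.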
00:31:38.873 [2024-07-25 00:58:01.438917] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:38.873 [2024-07-25 00:58:01.449845] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006150 00:31:38.873 00:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:38.873 00:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:38.873 00:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:38.873 00:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:38.873 00:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:38.873 00:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:38.873 00:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:38.873 00:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:38.873 00:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:38.873 00:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:38.873 00:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:38.873 00:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:39.131 00:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:39.131 "name": "raid_bdev1", 00:31:39.131 "uuid": "bd34769d-49de-49a2-99e7-b03fc3b43840", 00:31:39.131 "strip_size_kb": 0, 00:31:39.131 "state": "online", 00:31:39.131 "raid_level": "raid1", 00:31:39.131 "superblock": true, 00:31:39.131 "num_base_bdevs": 4, 00:31:39.131 "num_base_bdevs_discovered": 3, 00:31:39.131 "num_base_bdevs_operational": 3, 00:31:39.131 "base_bdevs_list": [ 00:31:39.131 { 00:31:39.132 "name": null, 00:31:39.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:39.132 "is_configured": false, 00:31:39.132 "data_offset": 2048, 00:31:39.132 "data_size": 63488 00:31:39.132 }, 00:31:39.132 { 00:31:39.132 "name": "BaseBdev2", 00:31:39.132 "uuid": "39f0f380-45b0-5f43-b668-72d7b9098cb0", 00:31:39.132 "is_configured": true, 00:31:39.132 "data_offset": 2048, 00:31:39.132 "data_size": 63488 00:31:39.132 }, 00:31:39.132 { 00:31:39.132 "name": "BaseBdev3", 00:31:39.132 "uuid": "3e1a2d9f-b382-5b39-9344-c074dc543b5e", 00:31:39.132 "is_configured": true, 00:31:39.132 "data_offset": 2048, 00:31:39.132 "data_size": 63488 00:31:39.132 }, 00:31:39.132 { 00:31:39.132 "name": "BaseBdev4", 00:31:39.132 "uuid": "ea509297-3636-5c60-8bf5-146a9ce1a113", 00:31:39.132 "is_configured": true, 00:31:39.132 "data_offset": 2048, 00:31:39.132 "data_size": 63488 00:31:39.132 } 00:31:39.132 ] 00:31:39.132 }' 00:31:39.132 00:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:39.132 00:58:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:39.699 00:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:39.958 [2024-07-25 
00:58:02.584152] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:40.217 00:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:31:40.217 [2024-07-25 00:58:02.648281] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:31:40.217 [2024-07-25 00:58:02.650440] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:40.217 [2024-07-25 00:58:02.774420] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:40.218 [2024-07-25 00:58:02.775800] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:40.477 [2024-07-25 00:58:03.001902] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:40.477 [2024-07-25 00:58:03.002775] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:40.736 [2024-07-25 00:58:03.375671] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:31:40.994 [2024-07-25 00:58:03.499087] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:41.252 00:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:41.252 00:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:41.252 00:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:41.252 00:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:41.252 00:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:41.252 00:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:41.252 00:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:41.252 [2024-07-25 00:58:03.874272] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:31:41.511 00:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:41.511 "name": "raid_bdev1", 00:31:41.511 "uuid": "bd34769d-49de-49a2-99e7-b03fc3b43840", 00:31:41.511 "strip_size_kb": 0, 00:31:41.511 "state": "online", 00:31:41.511 "raid_level": "raid1", 00:31:41.511 "superblock": true, 00:31:41.511 "num_base_bdevs": 4, 00:31:41.511 "num_base_bdevs_discovered": 4, 00:31:41.511 "num_base_bdevs_operational": 4, 00:31:41.511 "process": { 00:31:41.511 "type": "rebuild", 00:31:41.511 "target": "spare", 00:31:41.511 "progress": { 00:31:41.511 "blocks": 16384, 00:31:41.511 "percent": 25 00:31:41.511 } 00:31:41.511 }, 00:31:41.511 "base_bdevs_list": [ 00:31:41.511 { 00:31:41.511 "name": "spare", 00:31:41.511 "uuid": "f11f202b-678a-5589-84e6-ca8f513bfec0", 00:31:41.511 "is_configured": true, 00:31:41.511 "data_offset": 2048, 00:31:41.511 "data_size": 63488 00:31:41.511 }, 00:31:41.511 { 00:31:41.511 "name": "BaseBdev2", 00:31:41.511 "uuid": "39f0f380-45b0-5f43-b668-72d7b9098cb0", 00:31:41.511 "is_configured": true, 00:31:41.511 
"data_offset": 2048, 00:31:41.511 "data_size": 63488 00:31:41.511 }, 00:31:41.511 { 00:31:41.511 "name": "BaseBdev3", 00:31:41.511 "uuid": "3e1a2d9f-b382-5b39-9344-c074dc543b5e", 00:31:41.511 "is_configured": true, 00:31:41.511 "data_offset": 2048, 00:31:41.511 "data_size": 63488 00:31:41.511 }, 00:31:41.511 { 00:31:41.511 "name": "BaseBdev4", 00:31:41.511 "uuid": "ea509297-3636-5c60-8bf5-146a9ce1a113", 00:31:41.511 "is_configured": true, 00:31:41.511 "data_offset": 2048, 00:31:41.511 "data_size": 63488 00:31:41.511 } 00:31:41.511 ] 00:31:41.511 }' 00:31:41.511 00:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:41.511 00:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:41.511 00:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:41.511 00:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:41.511 00:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:41.511 [2024-07-25 00:58:04.128745] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:31:41.511 [2024-07-25 00:58:04.130303] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:31:41.770 [2024-07-25 00:58:04.220625] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:41.770 [2024-07-25 00:58:04.239538] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:31:41.770 [2024-07-25 00:58:04.349931] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:41.770 [2024-07-25 00:58:04.354664] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:41.770 [2024-07-25 00:58:04.354817] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:41.770 [2024-07-25 00:58:04.354856] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:41.770 [2024-07-25 00:58:04.384813] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006150 00:31:41.770 00:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:41.770 00:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:41.770 00:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:41.770 00:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:41.770 00:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:41.770 00:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:41.770 00:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:41.770 00:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:41.770 00:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:41.770 00:58:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:41.770 00:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:41.770 00:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:42.029 00:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:42.029 "name": "raid_bdev1", 00:31:42.029 "uuid": "bd34769d-49de-49a2-99e7-b03fc3b43840", 00:31:42.029 "strip_size_kb": 0, 00:31:42.029 "state": "online", 00:31:42.029 "raid_level": "raid1", 00:31:42.029 "superblock": true, 00:31:42.029 "num_base_bdevs": 4, 00:31:42.029 "num_base_bdevs_discovered": 3, 00:31:42.029 "num_base_bdevs_operational": 3, 00:31:42.029 "base_bdevs_list": [ 00:31:42.029 { 00:31:42.029 "name": null, 00:31:42.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:42.029 "is_configured": false, 00:31:42.029 "data_offset": 2048, 00:31:42.029 "data_size": 63488 00:31:42.029 }, 00:31:42.029 { 00:31:42.029 "name": "BaseBdev2", 00:31:42.029 "uuid": "39f0f380-45b0-5f43-b668-72d7b9098cb0", 00:31:42.029 "is_configured": true, 00:31:42.029 "data_offset": 2048, 00:31:42.029 "data_size": 63488 00:31:42.029 }, 00:31:42.029 { 00:31:42.029 "name": "BaseBdev3", 00:31:42.029 "uuid": "3e1a2d9f-b382-5b39-9344-c074dc543b5e", 00:31:42.029 "is_configured": true, 00:31:42.029 "data_offset": 2048, 00:31:42.029 "data_size": 63488 00:31:42.029 }, 00:31:42.029 { 00:31:42.029 "name": "BaseBdev4", 00:31:42.029 "uuid": "ea509297-3636-5c60-8bf5-146a9ce1a113", 00:31:42.029 "is_configured": true, 00:31:42.029 "data_offset": 2048, 00:31:42.029 "data_size": 63488 00:31:42.029 } 00:31:42.029 ] 00:31:42.029 }' 00:31:42.029 00:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:42.029 00:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:42.641 00:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:42.641 00:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:42.641 00:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:42.641 00:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:42.641 00:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:42.641 00:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:42.641 00:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:42.899 00:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:42.900 "name": "raid_bdev1", 00:31:42.900 "uuid": "bd34769d-49de-49a2-99e7-b03fc3b43840", 00:31:42.900 "strip_size_kb": 0, 00:31:42.900 "state": "online", 00:31:42.900 "raid_level": "raid1", 00:31:42.900 "superblock": true, 00:31:42.900 "num_base_bdevs": 4, 00:31:42.900 "num_base_bdevs_discovered": 3, 00:31:42.900 "num_base_bdevs_operational": 3, 00:31:42.900 "base_bdevs_list": [ 00:31:42.900 { 00:31:42.900 "name": null, 00:31:42.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:42.900 "is_configured": false, 00:31:42.900 
"data_offset": 2048, 00:31:42.900 "data_size": 63488 00:31:42.900 }, 00:31:42.900 { 00:31:42.900 "name": "BaseBdev2", 00:31:42.900 "uuid": "39f0f380-45b0-5f43-b668-72d7b9098cb0", 00:31:42.900 "is_configured": true, 00:31:42.900 "data_offset": 2048, 00:31:42.900 "data_size": 63488 00:31:42.900 }, 00:31:42.900 { 00:31:42.900 "name": "BaseBdev3", 00:31:42.900 "uuid": "3e1a2d9f-b382-5b39-9344-c074dc543b5e", 00:31:42.900 "is_configured": true, 00:31:42.900 "data_offset": 2048, 00:31:42.900 "data_size": 63488 00:31:42.900 }, 00:31:42.900 { 00:31:42.900 "name": "BaseBdev4", 00:31:42.900 "uuid": "ea509297-3636-5c60-8bf5-146a9ce1a113", 00:31:42.900 "is_configured": true, 00:31:42.900 "data_offset": 2048, 00:31:42.900 "data_size": 63488 00:31:42.900 } 00:31:42.900 ] 00:31:42.900 }' 00:31:42.900 00:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:43.159 00:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:43.159 00:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:43.159 00:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:43.159 00:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:43.417 [2024-07-25 00:58:05.844625] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:43.418 00:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:31:43.418 [2024-07-25 00:58:05.918326] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:31:43.418 [2024-07-25 00:58:05.920340] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:43.418 [2024-07-25 00:58:06.030055] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:43.418 [2024-07-25 00:58:06.031438] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:43.677 [2024-07-25 00:58:06.270175] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:43.677 [2024-07-25 00:58:06.271070] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:44.247 [2024-07-25 00:58:06.753636] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:44.506 00:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:44.506 00:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:44.506 00:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:44.506 00:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:44.506 00:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:44.506 00:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:44.506 00:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:44.506 [2024-07-25 00:58:07.086265] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:31:44.506 00:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:44.506 "name": "raid_bdev1", 00:31:44.506 "uuid": "bd34769d-49de-49a2-99e7-b03fc3b43840", 00:31:44.506 "strip_size_kb": 0, 00:31:44.506 "state": "online", 00:31:44.506 "raid_level": "raid1", 00:31:44.506 "superblock": true, 00:31:44.506 "num_base_bdevs": 4, 00:31:44.506 "num_base_bdevs_discovered": 4, 00:31:44.506 "num_base_bdevs_operational": 4, 00:31:44.506 "process": { 00:31:44.506 "type": "rebuild", 00:31:44.506 "target": "spare", 00:31:44.506 "progress": { 00:31:44.506 "blocks": 12288, 00:31:44.506 "percent": 19 00:31:44.506 } 00:31:44.506 }, 00:31:44.506 "base_bdevs_list": [ 00:31:44.506 { 00:31:44.506 "name": "spare", 00:31:44.506 "uuid": "f11f202b-678a-5589-84e6-ca8f513bfec0", 00:31:44.506 "is_configured": true, 00:31:44.506 "data_offset": 2048, 00:31:44.506 "data_size": 63488 00:31:44.506 }, 00:31:44.506 { 00:31:44.506 "name": "BaseBdev2", 00:31:44.506 "uuid": "39f0f380-45b0-5f43-b668-72d7b9098cb0", 00:31:44.506 "is_configured": true, 00:31:44.506 "data_offset": 2048, 00:31:44.506 "data_size": 63488 00:31:44.506 }, 00:31:44.506 { 00:31:44.506 "name": "BaseBdev3", 00:31:44.506 "uuid": "3e1a2d9f-b382-5b39-9344-c074dc543b5e", 00:31:44.506 "is_configured": true, 00:31:44.506 "data_offset": 2048, 00:31:44.506 "data_size": 63488 00:31:44.506 }, 00:31:44.506 { 00:31:44.506 "name": "BaseBdev4", 00:31:44.506 "uuid": "ea509297-3636-5c60-8bf5-146a9ce1a113", 00:31:44.506 "is_configured": true, 00:31:44.506 "data_offset": 2048, 00:31:44.506 "data_size": 63488 00:31:44.506 } 00:31:44.506 ] 00:31:44.506 }' 00:31:44.506 00:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:44.506 00:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:44.506 00:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:44.766 00:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:44.766 00:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:31:44.766 00:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:31:44.766 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:31:44.766 00:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:31:44.766 00:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:31:44.766 00:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:31:44.766 00:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:31:44.766 [2024-07-25 00:58:07.289385] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:31:45.024 [2024-07-25 00:58:07.420665] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:45.024 [2024-07-25 00:58:07.524257] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 
offset_begin: 18432 offset_end: 24576 00:31:45.283 [2024-07-25 00:58:07.731765] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006150 00:31:45.284 [2024-07-25 00:58:07.731918] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:31:45.284 [2024-07-25 00:58:07.731998] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:31:45.284 00:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:31:45.284 00:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:31:45.284 00:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:45.284 00:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:45.284 00:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:45.284 00:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:45.284 00:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:45.284 00:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:45.284 00:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:45.542 00:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:45.543 "name": "raid_bdev1", 00:31:45.543 "uuid": "bd34769d-49de-49a2-99e7-b03fc3b43840", 00:31:45.543 "strip_size_kb": 0, 00:31:45.543 "state": "online", 00:31:45.543 "raid_level": "raid1", 00:31:45.543 "superblock": true, 00:31:45.543 "num_base_bdevs": 4, 00:31:45.543 "num_base_bdevs_discovered": 3, 00:31:45.543 "num_base_bdevs_operational": 3, 00:31:45.543 "process": { 00:31:45.543 "type": "rebuild", 00:31:45.543 "target": "spare", 00:31:45.543 "progress": { 00:31:45.543 "blocks": 22528, 00:31:45.543 "percent": 35 00:31:45.543 } 00:31:45.543 }, 00:31:45.543 "base_bdevs_list": [ 00:31:45.543 { 00:31:45.543 "name": "spare", 00:31:45.543 "uuid": "f11f202b-678a-5589-84e6-ca8f513bfec0", 00:31:45.543 "is_configured": true, 00:31:45.543 "data_offset": 2048, 00:31:45.543 "data_size": 63488 00:31:45.543 }, 00:31:45.543 { 00:31:45.543 "name": null, 00:31:45.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:45.543 "is_configured": false, 00:31:45.543 "data_offset": 2048, 00:31:45.543 "data_size": 63488 00:31:45.543 }, 00:31:45.543 { 00:31:45.543 "name": "BaseBdev3", 00:31:45.543 "uuid": "3e1a2d9f-b382-5b39-9344-c074dc543b5e", 00:31:45.543 "is_configured": true, 00:31:45.543 "data_offset": 2048, 00:31:45.543 "data_size": 63488 00:31:45.543 }, 00:31:45.543 { 00:31:45.543 "name": "BaseBdev4", 00:31:45.543 "uuid": "ea509297-3636-5c60-8bf5-146a9ce1a113", 00:31:45.543 "is_configured": true, 00:31:45.543 "data_offset": 2048, 00:31:45.543 "data_size": 63488 00:31:45.543 } 00:31:45.543 ] 00:31:45.543 }' 00:31:45.543 00:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:45.543 00:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:45.543 00:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.target // "none"' 00:31:45.543 [2024-07-25 00:58:08.095513] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:31:45.543 00:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:45.543 00:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=965 00:31:45.543 00:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:45.543 00:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:45.543 00:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:45.543 00:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:45.543 00:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:45.543 00:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:45.543 00:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:45.543 00:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:45.802 [2024-07-25 00:58:08.219108] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:31:45.802 00:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:45.802 "name": "raid_bdev1", 00:31:45.802 "uuid": "bd34769d-49de-49a2-99e7-b03fc3b43840", 00:31:45.802 "strip_size_kb": 0, 00:31:45.802 "state": "online", 00:31:45.802 "raid_level": "raid1", 00:31:45.802 "superblock": true, 00:31:45.802 "num_base_bdevs": 4, 00:31:45.802 "num_base_bdevs_discovered": 3, 00:31:45.802 "num_base_bdevs_operational": 3, 00:31:45.802 "process": { 00:31:45.802 "type": "rebuild", 00:31:45.802 "target": "spare", 00:31:45.802 "progress": { 00:31:45.802 "blocks": 28672, 00:31:45.802 "percent": 45 00:31:45.802 } 00:31:45.802 }, 00:31:45.802 "base_bdevs_list": [ 00:31:45.802 { 00:31:45.802 "name": "spare", 00:31:45.802 "uuid": "f11f202b-678a-5589-84e6-ca8f513bfec0", 00:31:45.802 "is_configured": true, 00:31:45.802 "data_offset": 2048, 00:31:45.802 "data_size": 63488 00:31:45.802 }, 00:31:45.802 { 00:31:45.802 "name": null, 00:31:45.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:45.802 "is_configured": false, 00:31:45.802 "data_offset": 2048, 00:31:45.802 "data_size": 63488 00:31:45.802 }, 00:31:45.802 { 00:31:45.802 "name": "BaseBdev3", 00:31:45.802 "uuid": "3e1a2d9f-b382-5b39-9344-c074dc543b5e", 00:31:45.802 "is_configured": true, 00:31:45.802 "data_offset": 2048, 00:31:45.802 "data_size": 63488 00:31:45.802 }, 00:31:45.802 { 00:31:45.802 "name": "BaseBdev4", 00:31:45.802 "uuid": "ea509297-3636-5c60-8bf5-146a9ce1a113", 00:31:45.802 "is_configured": true, 00:31:45.802 "data_offset": 2048, 00:31:45.802 "data_size": 63488 00:31:45.802 } 00:31:45.802 ] 00:31:45.802 }' 00:31:45.802 00:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:45.802 00:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:45.802 00:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // 
"none"' 00:31:45.802 00:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:45.802 00:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:46.061 [2024-07-25 00:58:08.531905] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:31:46.629 [2024-07-25 00:58:09.227675] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:31:46.888 00:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:46.888 00:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:46.888 00:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:46.888 00:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:46.888 00:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:46.888 00:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:46.888 00:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:46.888 00:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:47.149 00:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:47.149 "name": "raid_bdev1", 00:31:47.149 "uuid": "bd34769d-49de-49a2-99e7-b03fc3b43840", 00:31:47.149 "strip_size_kb": 0, 00:31:47.149 "state": "online", 00:31:47.149 "raid_level": "raid1", 00:31:47.149 "superblock": true, 00:31:47.149 "num_base_bdevs": 4, 00:31:47.149 "num_base_bdevs_discovered": 3, 00:31:47.149 "num_base_bdevs_operational": 3, 00:31:47.149 "process": { 00:31:47.149 "type": "rebuild", 00:31:47.149 "target": "spare", 00:31:47.149 "progress": { 00:31:47.149 "blocks": 47104, 00:31:47.149 "percent": 74 00:31:47.149 } 00:31:47.149 }, 00:31:47.149 "base_bdevs_list": [ 00:31:47.149 { 00:31:47.149 "name": "spare", 00:31:47.149 "uuid": "f11f202b-678a-5589-84e6-ca8f513bfec0", 00:31:47.149 "is_configured": true, 00:31:47.149 "data_offset": 2048, 00:31:47.149 "data_size": 63488 00:31:47.149 }, 00:31:47.149 { 00:31:47.149 "name": null, 00:31:47.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:47.149 "is_configured": false, 00:31:47.149 "data_offset": 2048, 00:31:47.149 "data_size": 63488 00:31:47.149 }, 00:31:47.149 { 00:31:47.149 "name": "BaseBdev3", 00:31:47.149 "uuid": "3e1a2d9f-b382-5b39-9344-c074dc543b5e", 00:31:47.149 "is_configured": true, 00:31:47.149 "data_offset": 2048, 00:31:47.149 "data_size": 63488 00:31:47.149 }, 00:31:47.149 { 00:31:47.149 "name": "BaseBdev4", 00:31:47.149 "uuid": "ea509297-3636-5c60-8bf5-146a9ce1a113", 00:31:47.149 "is_configured": true, 00:31:47.149 "data_offset": 2048, 00:31:47.149 "data_size": 63488 00:31:47.149 } 00:31:47.149 ] 00:31:47.149 }' 00:31:47.149 00:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:47.149 00:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:47.149 00:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:47.149 00:58:09 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:47.149 00:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:47.716 [2024-07-25 00:58:10.318130] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:47.975 [2024-07-25 00:58:10.423291] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:47.975 [2024-07-25 00:58:10.426823] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:48.234 00:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:48.234 00:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:48.234 00:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:48.234 00:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:48.234 00:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:48.234 00:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:48.234 00:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:48.234 00:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:48.495 00:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:48.495 "name": "raid_bdev1", 00:31:48.495 "uuid": "bd34769d-49de-49a2-99e7-b03fc3b43840", 00:31:48.495 "strip_size_kb": 0, 00:31:48.495 "state": "online", 00:31:48.495 "raid_level": "raid1", 00:31:48.495 "superblock": true, 00:31:48.495 "num_base_bdevs": 4, 00:31:48.495 "num_base_bdevs_discovered": 3, 00:31:48.495 "num_base_bdevs_operational": 3, 00:31:48.495 "base_bdevs_list": [ 00:31:48.495 { 00:31:48.495 "name": "spare", 00:31:48.495 "uuid": "f11f202b-678a-5589-84e6-ca8f513bfec0", 00:31:48.495 "is_configured": true, 00:31:48.495 "data_offset": 2048, 00:31:48.495 "data_size": 63488 00:31:48.495 }, 00:31:48.495 { 00:31:48.495 "name": null, 00:31:48.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:48.495 "is_configured": false, 00:31:48.495 "data_offset": 2048, 00:31:48.495 "data_size": 63488 00:31:48.495 }, 00:31:48.495 { 00:31:48.495 "name": "BaseBdev3", 00:31:48.495 "uuid": "3e1a2d9f-b382-5b39-9344-c074dc543b5e", 00:31:48.495 "is_configured": true, 00:31:48.495 "data_offset": 2048, 00:31:48.495 "data_size": 63488 00:31:48.495 }, 00:31:48.495 { 00:31:48.495 "name": "BaseBdev4", 00:31:48.495 "uuid": "ea509297-3636-5c60-8bf5-146a9ce1a113", 00:31:48.495 "is_configured": true, 00:31:48.495 "data_offset": 2048, 00:31:48.495 "data_size": 63488 00:31:48.495 } 00:31:48.495 ] 00:31:48.495 }' 00:31:48.495 00:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:48.495 00:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:48.495 00:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:48.495 00:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:31:48.495 00:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 
00:31:48.495 00:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:48.495 00:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:48.495 00:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:48.495 00:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:48.495 00:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:48.495 00:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:48.495 00:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:48.753 00:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:48.753 "name": "raid_bdev1", 00:31:48.753 "uuid": "bd34769d-49de-49a2-99e7-b03fc3b43840", 00:31:48.753 "strip_size_kb": 0, 00:31:48.753 "state": "online", 00:31:48.753 "raid_level": "raid1", 00:31:48.753 "superblock": true, 00:31:48.753 "num_base_bdevs": 4, 00:31:48.753 "num_base_bdevs_discovered": 3, 00:31:48.753 "num_base_bdevs_operational": 3, 00:31:48.753 "base_bdevs_list": [ 00:31:48.753 { 00:31:48.753 "name": "spare", 00:31:48.753 "uuid": "f11f202b-678a-5589-84e6-ca8f513bfec0", 00:31:48.753 "is_configured": true, 00:31:48.753 "data_offset": 2048, 00:31:48.753 "data_size": 63488 00:31:48.753 }, 00:31:48.753 { 00:31:48.753 "name": null, 00:31:48.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:48.753 "is_configured": false, 00:31:48.753 "data_offset": 2048, 00:31:48.753 "data_size": 63488 00:31:48.753 }, 00:31:48.753 { 00:31:48.753 "name": "BaseBdev3", 00:31:48.753 "uuid": "3e1a2d9f-b382-5b39-9344-c074dc543b5e", 00:31:48.753 "is_configured": true, 00:31:48.753 "data_offset": 2048, 00:31:48.753 "data_size": 63488 00:31:48.753 }, 00:31:48.753 { 00:31:48.753 "name": "BaseBdev4", 00:31:48.753 "uuid": "ea509297-3636-5c60-8bf5-146a9ce1a113", 00:31:48.753 "is_configured": true, 00:31:48.753 "data_offset": 2048, 00:31:48.753 "data_size": 63488 00:31:48.753 } 00:31:48.753 ] 00:31:48.753 }' 00:31:48.753 00:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:48.753 00:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:48.753 00:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:48.753 00:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:48.753 00:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:48.753 00:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:48.753 00:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:48.753 00:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:48.753 00:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:48.753 00:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:48.753 00:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:31:48.753 00:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:48.753 00:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:48.753 00:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:48.753 00:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:48.753 00:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:49.011 00:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:49.011 "name": "raid_bdev1", 00:31:49.011 "uuid": "bd34769d-49de-49a2-99e7-b03fc3b43840", 00:31:49.011 "strip_size_kb": 0, 00:31:49.011 "state": "online", 00:31:49.011 "raid_level": "raid1", 00:31:49.011 "superblock": true, 00:31:49.011 "num_base_bdevs": 4, 00:31:49.011 "num_base_bdevs_discovered": 3, 00:31:49.011 "num_base_bdevs_operational": 3, 00:31:49.011 "base_bdevs_list": [ 00:31:49.011 { 00:31:49.011 "name": "spare", 00:31:49.011 "uuid": "f11f202b-678a-5589-84e6-ca8f513bfec0", 00:31:49.011 "is_configured": true, 00:31:49.011 "data_offset": 2048, 00:31:49.011 "data_size": 63488 00:31:49.011 }, 00:31:49.011 { 00:31:49.011 "name": null, 00:31:49.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:49.011 "is_configured": false, 00:31:49.011 "data_offset": 2048, 00:31:49.011 "data_size": 63488 00:31:49.011 }, 00:31:49.011 { 00:31:49.011 "name": "BaseBdev3", 00:31:49.011 "uuid": "3e1a2d9f-b382-5b39-9344-c074dc543b5e", 00:31:49.011 "is_configured": true, 00:31:49.011 "data_offset": 2048, 00:31:49.011 "data_size": 63488 00:31:49.011 }, 00:31:49.011 { 00:31:49.011 "name": "BaseBdev4", 00:31:49.011 "uuid": "ea509297-3636-5c60-8bf5-146a9ce1a113", 00:31:49.011 "is_configured": true, 00:31:49.011 "data_offset": 2048, 00:31:49.011 "data_size": 63488 00:31:49.011 } 00:31:49.011 ] 00:31:49.011 }' 00:31:49.011 00:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:49.011 00:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:49.579 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:49.837 [2024-07-25 00:58:12.335137] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:49.837 [2024-07-25 00:58:12.335329] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:49.837 00:31:49.837 Latency(us) 00:31:49.837 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:49.837 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:31:49.837 raid_bdev1 : 11.00 113.31 339.94 0.00 0.00 12634.67 308.18 116342.00 00:31:49.837 =================================================================================================================== 00:31:49.837 Total : 113.31 339.94 0.00 0.00 12634.67 308.18 116342.00 00:31:49.837 [2024-07-25 00:58:12.399733] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:49.837 [2024-07-25 00:58:12.399878] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:49.837 0 00:31:49.837 [2024-07-25 00:58:12.400090] bdev_raid.c: 
463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:49.837 [2024-07-25 00:58:12.400104] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:31:49.837 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:49.837 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:31:50.095 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:31:50.095 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:31:50.095 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:31:50.095 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:31:50.095 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:50.095 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:31:50.095 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:50.095 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:50.095 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:50.095 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:31:50.095 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:50.095 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:50.095 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:31:50.353 /dev/nbd0 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:50.353 1+0 records in 00:31:50.353 1+0 records out 00:31:50.353 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276006 s, 14.8 MB/s 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # continue 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:50.353 00:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:31:50.611 /dev/nbd1 00:31:50.611 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:50.611 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:50.611 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:31:50.611 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:31:50.611 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:50.611 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:50.611 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:31:50.611 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:31:50.611 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:50.611 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:50.611 00:58:13 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:50.611 1+0 records in 00:31:50.611 1+0 records out 00:31:50.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476559 s, 8.6 MB/s 00:31:50.611 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:50.611 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:31:50.611 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:50.611 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:50.611 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:31:50.611 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:50.611 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:50.611 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:31:50.877 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:31:50.877 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:50.877 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:31:50.877 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:50.877 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:31:50.877 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:50.877 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:31:50.877 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:50.877 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:50.877 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:50.877 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:50.877 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:50.877 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:50.877 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:31:50.877 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:31:50.877 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:31:50.877 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:31:50.877 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:31:50.877 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:50.877 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:31:50.877 00:58:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:50.877 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:31:50.877 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:50.877 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:31:50.877 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:50.877 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:50.877 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:31:51.449 /dev/nbd1 00:31:51.449 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:51.449 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:51.449 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:31:51.449 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:31:51.449 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:51.449 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:51.449 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:31:51.449 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:31:51.449 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:51.449 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:51.449 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:51.449 1+0 records in 00:31:51.449 1+0 records out 00:31:51.449 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000586238 s, 7.0 MB/s 00:31:51.449 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:51.449 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:31:51.449 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:51.449 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:51.449 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:31:51.449 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:51.449 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:51.449 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:31:51.449 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:31:51.449 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:51.449 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:31:51.449 00:58:13 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:51.449 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:31:51.449 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:51.449 00:58:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:31:51.706 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:51.706 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:51.706 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:51.706 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:51.706 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:51.706 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:51.706 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:31:51.706 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:31:51.706 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:31:51.706 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:51.706 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:51.706 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:51.706 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:31:51.706 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:51.706 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:51.964 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:51.964 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:51.964 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:51.964 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:51.964 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:51.964 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:51.964 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:31:51.964 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:31:51.964 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:31:51.964 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:52.222 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:52.222 [2024-07-25 00:58:14.821214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:31:52.222 [2024-07-25 00:58:14.821477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:52.222 [2024-07-25 00:58:14.821575] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:31:52.222 [2024-07-25 00:58:14.821701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:52.222 [2024-07-25 00:58:14.824023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:52.222 [2024-07-25 00:58:14.824192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:52.222 [2024-07-25 00:58:14.824399] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:52.222 [2024-07-25 00:58:14.824538] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:52.222 [2024-07-25 00:58:14.824721] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:52.222 [2024-07-25 00:58:14.825092] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:31:52.222 spare 00:31:52.222 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:52.222 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:52.222 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:52.222 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:52.222 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:52.222 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:52.222 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:52.222 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:52.222 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:52.222 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:52.222 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:52.222 00:58:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:52.480 [2024-07-25 00:58:14.925195] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:31:52.480 [2024-07-25 00:58:14.925316] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:52.480 [2024-07-25 00:58:14.925495] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000373d0 00:31:52.480 [2024-07-25 00:58:14.925930] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:31:52.480 [2024-07-25 00:58:14.926028] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:31:52.481 [2024-07-25 00:58:14.926272] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:52.481 00:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:52.481 "name": "raid_bdev1", 00:31:52.481 "uuid": "bd34769d-49de-49a2-99e7-b03fc3b43840", 
00:31:52.481 "strip_size_kb": 0, 00:31:52.481 "state": "online", 00:31:52.481 "raid_level": "raid1", 00:31:52.481 "superblock": true, 00:31:52.481 "num_base_bdevs": 4, 00:31:52.481 "num_base_bdevs_discovered": 3, 00:31:52.481 "num_base_bdevs_operational": 3, 00:31:52.481 "base_bdevs_list": [ 00:31:52.481 { 00:31:52.481 "name": "spare", 00:31:52.481 "uuid": "f11f202b-678a-5589-84e6-ca8f513bfec0", 00:31:52.481 "is_configured": true, 00:31:52.481 "data_offset": 2048, 00:31:52.481 "data_size": 63488 00:31:52.481 }, 00:31:52.481 { 00:31:52.481 "name": null, 00:31:52.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:52.481 "is_configured": false, 00:31:52.481 "data_offset": 2048, 00:31:52.481 "data_size": 63488 00:31:52.481 }, 00:31:52.481 { 00:31:52.481 "name": "BaseBdev3", 00:31:52.481 "uuid": "3e1a2d9f-b382-5b39-9344-c074dc543b5e", 00:31:52.481 "is_configured": true, 00:31:52.481 "data_offset": 2048, 00:31:52.481 "data_size": 63488 00:31:52.481 }, 00:31:52.481 { 00:31:52.481 "name": "BaseBdev4", 00:31:52.481 "uuid": "ea509297-3636-5c60-8bf5-146a9ce1a113", 00:31:52.481 "is_configured": true, 00:31:52.481 "data_offset": 2048, 00:31:52.481 "data_size": 63488 00:31:52.481 } 00:31:52.481 ] 00:31:52.481 }' 00:31:52.481 00:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:52.481 00:58:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:53.046 00:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:53.046 00:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:53.046 00:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:53.046 00:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:53.046 00:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:53.046 00:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:53.046 00:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:53.303 00:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:53.303 "name": "raid_bdev1", 00:31:53.303 "uuid": "bd34769d-49de-49a2-99e7-b03fc3b43840", 00:31:53.303 "strip_size_kb": 0, 00:31:53.303 "state": "online", 00:31:53.303 "raid_level": "raid1", 00:31:53.303 "superblock": true, 00:31:53.303 "num_base_bdevs": 4, 00:31:53.303 "num_base_bdevs_discovered": 3, 00:31:53.303 "num_base_bdevs_operational": 3, 00:31:53.303 "base_bdevs_list": [ 00:31:53.303 { 00:31:53.303 "name": "spare", 00:31:53.303 "uuid": "f11f202b-678a-5589-84e6-ca8f513bfec0", 00:31:53.303 "is_configured": true, 00:31:53.303 "data_offset": 2048, 00:31:53.303 "data_size": 63488 00:31:53.303 }, 00:31:53.303 { 00:31:53.303 "name": null, 00:31:53.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:53.303 "is_configured": false, 00:31:53.303 "data_offset": 2048, 00:31:53.303 "data_size": 63488 00:31:53.303 }, 00:31:53.303 { 00:31:53.303 "name": "BaseBdev3", 00:31:53.303 "uuid": "3e1a2d9f-b382-5b39-9344-c074dc543b5e", 00:31:53.303 "is_configured": true, 00:31:53.303 "data_offset": 2048, 00:31:53.303 "data_size": 63488 00:31:53.303 }, 00:31:53.303 { 00:31:53.303 "name": "BaseBdev4", 00:31:53.303 "uuid": 
"ea509297-3636-5c60-8bf5-146a9ce1a113", 00:31:53.303 "is_configured": true, 00:31:53.303 "data_offset": 2048, 00:31:53.303 "data_size": 63488 00:31:53.303 } 00:31:53.303 ] 00:31:53.303 }' 00:31:53.303 00:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:53.303 00:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:53.303 00:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:53.303 00:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:53.303 00:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:53.303 00:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:31:53.560 00:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:31:53.560 00:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:53.818 [2024-07-25 00:58:16.346917] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:53.818 00:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:53.818 00:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:53.818 00:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:53.818 00:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:53.818 00:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:53.818 00:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:53.818 00:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:53.818 00:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:53.818 00:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:53.818 00:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:53.818 00:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:53.818 00:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:54.076 00:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:54.076 "name": "raid_bdev1", 00:31:54.076 "uuid": "bd34769d-49de-49a2-99e7-b03fc3b43840", 00:31:54.076 "strip_size_kb": 0, 00:31:54.076 "state": "online", 00:31:54.076 "raid_level": "raid1", 00:31:54.076 "superblock": true, 00:31:54.076 "num_base_bdevs": 4, 00:31:54.076 "num_base_bdevs_discovered": 2, 00:31:54.076 "num_base_bdevs_operational": 2, 00:31:54.076 "base_bdevs_list": [ 00:31:54.076 { 00:31:54.076 "name": null, 00:31:54.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:54.076 "is_configured": false, 00:31:54.076 "data_offset": 2048, 00:31:54.076 "data_size": 63488 00:31:54.076 }, 00:31:54.076 { 00:31:54.076 "name": null, 
00:31:54.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:54.076 "is_configured": false, 00:31:54.076 "data_offset": 2048, 00:31:54.076 "data_size": 63488 00:31:54.076 }, 00:31:54.076 { 00:31:54.076 "name": "BaseBdev3", 00:31:54.076 "uuid": "3e1a2d9f-b382-5b39-9344-c074dc543b5e", 00:31:54.076 "is_configured": true, 00:31:54.076 "data_offset": 2048, 00:31:54.076 "data_size": 63488 00:31:54.076 }, 00:31:54.076 { 00:31:54.076 "name": "BaseBdev4", 00:31:54.076 "uuid": "ea509297-3636-5c60-8bf5-146a9ce1a113", 00:31:54.076 "is_configured": true, 00:31:54.076 "data_offset": 2048, 00:31:54.076 "data_size": 63488 00:31:54.076 } 00:31:54.076 ] 00:31:54.076 }' 00:31:54.076 00:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:54.076 00:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:54.641 00:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:54.641 [2024-07-25 00:58:17.268176] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:54.641 [2024-07-25 00:58:17.268527] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:31:54.641 [2024-07-25 00:58:17.268632] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:31:54.641 [2024-07-25 00:58:17.268715] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:54.641 [2024-07-25 00:58:17.280717] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037570 00:31:54.641 [2024-07-25 00:58:17.282793] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:54.900 00:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:31:55.836 00:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:55.836 00:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:55.836 00:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:55.836 00:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:55.836 00:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:55.836 00:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:55.836 00:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:56.095 00:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:56.095 "name": "raid_bdev1", 00:31:56.095 "uuid": "bd34769d-49de-49a2-99e7-b03fc3b43840", 00:31:56.095 "strip_size_kb": 0, 00:31:56.095 "state": "online", 00:31:56.095 "raid_level": "raid1", 00:31:56.095 "superblock": true, 00:31:56.095 "num_base_bdevs": 4, 00:31:56.095 "num_base_bdevs_discovered": 3, 00:31:56.095 "num_base_bdevs_operational": 3, 00:31:56.095 "process": { 00:31:56.095 "type": "rebuild", 00:31:56.095 "target": "spare", 00:31:56.095 "progress": { 00:31:56.095 "blocks": 24576, 00:31:56.095 "percent": 38 00:31:56.095 } 00:31:56.095 }, 00:31:56.095 
"base_bdevs_list": [ 00:31:56.095 { 00:31:56.095 "name": "spare", 00:31:56.095 "uuid": "f11f202b-678a-5589-84e6-ca8f513bfec0", 00:31:56.095 "is_configured": true, 00:31:56.095 "data_offset": 2048, 00:31:56.095 "data_size": 63488 00:31:56.095 }, 00:31:56.095 { 00:31:56.095 "name": null, 00:31:56.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:56.095 "is_configured": false, 00:31:56.095 "data_offset": 2048, 00:31:56.095 "data_size": 63488 00:31:56.095 }, 00:31:56.095 { 00:31:56.095 "name": "BaseBdev3", 00:31:56.095 "uuid": "3e1a2d9f-b382-5b39-9344-c074dc543b5e", 00:31:56.095 "is_configured": true, 00:31:56.095 "data_offset": 2048, 00:31:56.095 "data_size": 63488 00:31:56.095 }, 00:31:56.095 { 00:31:56.095 "name": "BaseBdev4", 00:31:56.095 "uuid": "ea509297-3636-5c60-8bf5-146a9ce1a113", 00:31:56.095 "is_configured": true, 00:31:56.095 "data_offset": 2048, 00:31:56.095 "data_size": 63488 00:31:56.095 } 00:31:56.095 ] 00:31:56.095 }' 00:31:56.095 00:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:56.095 00:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:56.095 00:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:56.095 00:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:56.095 00:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:56.355 [2024-07-25 00:58:18.765319] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:56.356 [2024-07-25 00:58:18.791253] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:56.356 [2024-07-25 00:58:18.791451] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:56.356 [2024-07-25 00:58:18.791499] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:56.356 [2024-07-25 00:58:18.791605] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:56.356 00:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:56.356 00:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:56.356 00:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:56.356 00:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:56.356 00:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:56.356 00:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:56.356 00:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:56.356 00:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:56.356 00:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:56.356 00:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:56.356 00:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:56.356 00:58:18 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:56.615 00:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:56.615 "name": "raid_bdev1", 00:31:56.615 "uuid": "bd34769d-49de-49a2-99e7-b03fc3b43840", 00:31:56.615 "strip_size_kb": 0, 00:31:56.615 "state": "online", 00:31:56.615 "raid_level": "raid1", 00:31:56.615 "superblock": true, 00:31:56.615 "num_base_bdevs": 4, 00:31:56.615 "num_base_bdevs_discovered": 2, 00:31:56.615 "num_base_bdevs_operational": 2, 00:31:56.615 "base_bdevs_list": [ 00:31:56.615 { 00:31:56.615 "name": null, 00:31:56.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:56.615 "is_configured": false, 00:31:56.615 "data_offset": 2048, 00:31:56.615 "data_size": 63488 00:31:56.615 }, 00:31:56.615 { 00:31:56.615 "name": null, 00:31:56.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:56.615 "is_configured": false, 00:31:56.615 "data_offset": 2048, 00:31:56.615 "data_size": 63488 00:31:56.615 }, 00:31:56.615 { 00:31:56.615 "name": "BaseBdev3", 00:31:56.615 "uuid": "3e1a2d9f-b382-5b39-9344-c074dc543b5e", 00:31:56.615 "is_configured": true, 00:31:56.615 "data_offset": 2048, 00:31:56.615 "data_size": 63488 00:31:56.615 }, 00:31:56.615 { 00:31:56.615 "name": "BaseBdev4", 00:31:56.615 "uuid": "ea509297-3636-5c60-8bf5-146a9ce1a113", 00:31:56.615 "is_configured": true, 00:31:56.615 "data_offset": 2048, 00:31:56.615 "data_size": 63488 00:31:56.615 } 00:31:56.615 ] 00:31:56.615 }' 00:31:56.615 00:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:56.615 00:58:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:57.183 00:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:57.442 [2024-07-25 00:58:19.839868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:57.442 [2024-07-25 00:58:19.840155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:57.442 [2024-07-25 00:58:19.840226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:31:57.442 [2024-07-25 00:58:19.840321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:57.442 [2024-07-25 00:58:19.840817] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:57.442 [2024-07-25 00:58:19.841000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:57.442 [2024-07-25 00:58:19.841198] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:57.442 [2024-07-25 00:58:19.841290] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:31:57.442 [2024-07-25 00:58:19.841357] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
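(For reference: the spare re-add exercised above is, once the helper functions are stripped away, a short RPC sequence. The sketch below uses the same socket and bdev names that appear in the log; the combined jq filter at the end is an illustrative composition of the two separate queries the script runs, not a helper from the suite.)

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# drop the passthru so the base bdev vanishes from under the running raid
$rpc -s $sock bdev_passthru_delete spare
# re-create it over the delay bdev; the raid superblock found on it triggers the re-add
$rpc -s $sock bdev_passthru_create -b spare_delay -p spare
# a rebuild should now be running with 'spare' as its target
$rpc -s $sock bdev_raid_get_bdevs all \
  | jq -r '.[] | select(.name == "raid_bdev1") | .process.target // "none"'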
00:31:57.442 [2024-07-25 00:58:19.841491] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:57.442 [2024-07-25 00:58:19.854188] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000378b0 00:31:57.442 spare 00:31:57.442 [2024-07-25 00:58:19.856232] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:57.442 00:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:31:58.377 00:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:58.377 00:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:58.377 00:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:58.377 00:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:58.377 00:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:58.377 00:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:58.377 00:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:58.635 00:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:58.635 "name": "raid_bdev1", 00:31:58.635 "uuid": "bd34769d-49de-49a2-99e7-b03fc3b43840", 00:31:58.635 "strip_size_kb": 0, 00:31:58.635 "state": "online", 00:31:58.635 "raid_level": "raid1", 00:31:58.635 "superblock": true, 00:31:58.635 "num_base_bdevs": 4, 00:31:58.635 "num_base_bdevs_discovered": 3, 00:31:58.635 "num_base_bdevs_operational": 3, 00:31:58.635 "process": { 00:31:58.635 "type": "rebuild", 00:31:58.635 "target": "spare", 00:31:58.635 "progress": { 00:31:58.635 "blocks": 24576, 00:31:58.635 "percent": 38 00:31:58.635 } 00:31:58.635 }, 00:31:58.635 "base_bdevs_list": [ 00:31:58.635 { 00:31:58.635 "name": "spare", 00:31:58.635 "uuid": "f11f202b-678a-5589-84e6-ca8f513bfec0", 00:31:58.635 "is_configured": true, 00:31:58.635 "data_offset": 2048, 00:31:58.635 "data_size": 63488 00:31:58.635 }, 00:31:58.635 { 00:31:58.635 "name": null, 00:31:58.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:58.635 "is_configured": false, 00:31:58.635 "data_offset": 2048, 00:31:58.635 "data_size": 63488 00:31:58.635 }, 00:31:58.635 { 00:31:58.635 "name": "BaseBdev3", 00:31:58.635 "uuid": "3e1a2d9f-b382-5b39-9344-c074dc543b5e", 00:31:58.635 "is_configured": true, 00:31:58.636 "data_offset": 2048, 00:31:58.636 "data_size": 63488 00:31:58.636 }, 00:31:58.636 { 00:31:58.636 "name": "BaseBdev4", 00:31:58.636 "uuid": "ea509297-3636-5c60-8bf5-146a9ce1a113", 00:31:58.636 "is_configured": true, 00:31:58.636 "data_offset": 2048, 00:31:58.636 "data_size": 63488 00:31:58.636 } 00:31:58.636 ] 00:31:58.636 }' 00:31:58.636 00:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:58.636 00:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:58.636 00:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:58.636 00:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:58.636 00:58:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:58.894 [2024-07-25 00:58:21.430819] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:58.894 [2024-07-25 00:58:21.465227] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:58.894 [2024-07-25 00:58:21.465431] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:58.894 [2024-07-25 00:58:21.465480] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:58.894 [2024-07-25 00:58:21.465551] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:58.894 00:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:58.894 00:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:58.894 00:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:58.894 00:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:58.894 00:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:58.894 00:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:58.894 00:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:58.894 00:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:58.894 00:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:58.894 00:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:58.894 00:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:58.894 00:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:59.152 00:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:59.152 "name": "raid_bdev1", 00:31:59.152 "uuid": "bd34769d-49de-49a2-99e7-b03fc3b43840", 00:31:59.152 "strip_size_kb": 0, 00:31:59.152 "state": "online", 00:31:59.152 "raid_level": "raid1", 00:31:59.152 "superblock": true, 00:31:59.152 "num_base_bdevs": 4, 00:31:59.152 "num_base_bdevs_discovered": 2, 00:31:59.152 "num_base_bdevs_operational": 2, 00:31:59.152 "base_bdevs_list": [ 00:31:59.152 { 00:31:59.152 "name": null, 00:31:59.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:59.152 "is_configured": false, 00:31:59.152 "data_offset": 2048, 00:31:59.152 "data_size": 63488 00:31:59.152 }, 00:31:59.152 { 00:31:59.152 "name": null, 00:31:59.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:59.152 "is_configured": false, 00:31:59.152 "data_offset": 2048, 00:31:59.152 "data_size": 63488 00:31:59.152 }, 00:31:59.152 { 00:31:59.152 "name": "BaseBdev3", 00:31:59.152 "uuid": "3e1a2d9f-b382-5b39-9344-c074dc543b5e", 00:31:59.152 "is_configured": true, 00:31:59.152 "data_offset": 2048, 00:31:59.152 "data_size": 63488 00:31:59.152 }, 00:31:59.152 { 00:31:59.152 "name": "BaseBdev4", 00:31:59.152 "uuid": "ea509297-3636-5c60-8bf5-146a9ce1a113", 00:31:59.152 "is_configured": true, 00:31:59.152 "data_offset": 2048, 00:31:59.152 
"data_size": 63488 00:31:59.152 } 00:31:59.152 ] 00:31:59.152 }' 00:31:59.152 00:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:59.152 00:58:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:59.719 00:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:59.719 00:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:59.719 00:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:59.719 00:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:59.719 00:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:59.719 00:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:59.719 00:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:59.978 00:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:59.978 "name": "raid_bdev1", 00:31:59.978 "uuid": "bd34769d-49de-49a2-99e7-b03fc3b43840", 00:31:59.978 "strip_size_kb": 0, 00:31:59.978 "state": "online", 00:31:59.978 "raid_level": "raid1", 00:31:59.978 "superblock": true, 00:31:59.978 "num_base_bdevs": 4, 00:31:59.978 "num_base_bdevs_discovered": 2, 00:31:59.978 "num_base_bdevs_operational": 2, 00:31:59.978 "base_bdevs_list": [ 00:31:59.978 { 00:31:59.978 "name": null, 00:31:59.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:59.978 "is_configured": false, 00:31:59.978 "data_offset": 2048, 00:31:59.978 "data_size": 63488 00:31:59.978 }, 00:31:59.978 { 00:31:59.978 "name": null, 00:31:59.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:59.978 "is_configured": false, 00:31:59.978 "data_offset": 2048, 00:31:59.978 "data_size": 63488 00:31:59.978 }, 00:31:59.978 { 00:31:59.978 "name": "BaseBdev3", 00:31:59.978 "uuid": "3e1a2d9f-b382-5b39-9344-c074dc543b5e", 00:31:59.978 "is_configured": true, 00:31:59.978 "data_offset": 2048, 00:31:59.978 "data_size": 63488 00:31:59.978 }, 00:31:59.978 { 00:31:59.978 "name": "BaseBdev4", 00:31:59.978 "uuid": "ea509297-3636-5c60-8bf5-146a9ce1a113", 00:31:59.978 "is_configured": true, 00:31:59.978 "data_offset": 2048, 00:31:59.978 "data_size": 63488 00:31:59.979 } 00:31:59.979 ] 00:31:59.979 }' 00:31:59.979 00:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:59.979 00:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:59.979 00:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:59.979 00:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:59.979 00:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:32:00.238 00:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:00.497 [2024-07-25 00:58:22.953762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 
00:32:00.497 [2024-07-25 00:58:22.954003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:00.497 [2024-07-25 00:58:22.954151] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:32:00.497 [2024-07-25 00:58:22.954295] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:00.497 [2024-07-25 00:58:22.954754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:00.497 [2024-07-25 00:58:22.954893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:00.497 [2024-07-25 00:58:22.955126] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:32:00.497 [2024-07-25 00:58:22.955223] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:32:00.497 [2024-07-25 00:58:22.955295] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:00.497 BaseBdev1 00:32:00.497 00:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:32:01.433 00:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:01.433 00:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:01.433 00:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:01.433 00:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:01.433 00:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:01.433 00:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:01.433 00:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:01.433 00:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:01.433 00:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:01.433 00:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:01.433 00:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:01.433 00:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:01.692 00:58:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:01.692 "name": "raid_bdev1", 00:32:01.692 "uuid": "bd34769d-49de-49a2-99e7-b03fc3b43840", 00:32:01.692 "strip_size_kb": 0, 00:32:01.692 "state": "online", 00:32:01.692 "raid_level": "raid1", 00:32:01.692 "superblock": true, 00:32:01.692 "num_base_bdevs": 4, 00:32:01.692 "num_base_bdevs_discovered": 2, 00:32:01.692 "num_base_bdevs_operational": 2, 00:32:01.692 "base_bdevs_list": [ 00:32:01.692 { 00:32:01.692 "name": null, 00:32:01.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:01.692 "is_configured": false, 00:32:01.692 "data_offset": 2048, 00:32:01.692 "data_size": 63488 00:32:01.692 }, 00:32:01.692 { 00:32:01.692 "name": null, 00:32:01.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:01.692 "is_configured": false, 00:32:01.692 "data_offset": 2048, 00:32:01.692 "data_size": 63488 
00:32:01.692 }, 00:32:01.692 { 00:32:01.692 "name": "BaseBdev3", 00:32:01.692 "uuid": "3e1a2d9f-b382-5b39-9344-c074dc543b5e", 00:32:01.692 "is_configured": true, 00:32:01.692 "data_offset": 2048, 00:32:01.692 "data_size": 63488 00:32:01.692 }, 00:32:01.692 { 00:32:01.692 "name": "BaseBdev4", 00:32:01.692 "uuid": "ea509297-3636-5c60-8bf5-146a9ce1a113", 00:32:01.692 "is_configured": true, 00:32:01.692 "data_offset": 2048, 00:32:01.692 "data_size": 63488 00:32:01.692 } 00:32:01.692 ] 00:32:01.692 }' 00:32:01.692 00:58:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:01.692 00:58:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:02.259 00:58:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:02.260 00:58:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:02.260 00:58:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:02.260 00:58:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:02.260 00:58:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:02.260 00:58:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:02.260 00:58:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:02.518 00:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:02.518 "name": "raid_bdev1", 00:32:02.518 "uuid": "bd34769d-49de-49a2-99e7-b03fc3b43840", 00:32:02.518 "strip_size_kb": 0, 00:32:02.518 "state": "online", 00:32:02.518 "raid_level": "raid1", 00:32:02.518 "superblock": true, 00:32:02.518 "num_base_bdevs": 4, 00:32:02.518 "num_base_bdevs_discovered": 2, 00:32:02.518 "num_base_bdevs_operational": 2, 00:32:02.518 "base_bdevs_list": [ 00:32:02.518 { 00:32:02.518 "name": null, 00:32:02.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:02.518 "is_configured": false, 00:32:02.518 "data_offset": 2048, 00:32:02.518 "data_size": 63488 00:32:02.518 }, 00:32:02.518 { 00:32:02.518 "name": null, 00:32:02.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:02.518 "is_configured": false, 00:32:02.518 "data_offset": 2048, 00:32:02.518 "data_size": 63488 00:32:02.518 }, 00:32:02.518 { 00:32:02.518 "name": "BaseBdev3", 00:32:02.518 "uuid": "3e1a2d9f-b382-5b39-9344-c074dc543b5e", 00:32:02.518 "is_configured": true, 00:32:02.518 "data_offset": 2048, 00:32:02.518 "data_size": 63488 00:32:02.518 }, 00:32:02.518 { 00:32:02.518 "name": "BaseBdev4", 00:32:02.518 "uuid": "ea509297-3636-5c60-8bf5-146a9ce1a113", 00:32:02.518 "is_configured": true, 00:32:02.518 "data_offset": 2048, 00:32:02.519 "data_size": 63488 00:32:02.519 } 00:32:02.519 ] 00:32:02.519 }' 00:32:02.519 00:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:02.519 00:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:02.519 00:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:02.778 00:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:02.778 00:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:02.778 00:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:32:02.778 00:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:02.778 00:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:02.778 00:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:02.778 00:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:02.778 00:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:02.778 00:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:02.778 00:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:02.778 00:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:02.778 00:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:32:02.778 00:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:02.778 [2024-07-25 00:58:25.354505] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:02.778 [2024-07-25 00:58:25.354794] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:32:02.778 [2024-07-25 00:58:25.354893] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:02.778 request: 00:32:02.778 { 00:32:02.778 "base_bdev": "BaseBdev1", 00:32:02.778 "raid_bdev": "raid_bdev1", 00:32:02.778 "method": "bdev_raid_add_base_bdev", 00:32:02.778 "req_id": 1 00:32:02.778 } 00:32:02.778 Got JSON-RPC error response 00:32:02.778 response: 00:32:02.778 { 00:32:02.778 "code": -22, 00:32:02.778 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:32:02.778 } 00:32:02.778 00:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:32:02.778 00:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:02.778 00:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:02.778 00:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:02.778 00:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:32:04.195 00:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:04.195 00:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:04.195 00:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:04.195 00:58:26 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:04.195 00:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:04.195 00:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:04.195 00:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:04.195 00:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:04.195 00:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:04.195 00:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:04.195 00:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:04.195 00:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:04.195 00:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:04.195 "name": "raid_bdev1", 00:32:04.195 "uuid": "bd34769d-49de-49a2-99e7-b03fc3b43840", 00:32:04.195 "strip_size_kb": 0, 00:32:04.195 "state": "online", 00:32:04.195 "raid_level": "raid1", 00:32:04.195 "superblock": true, 00:32:04.195 "num_base_bdevs": 4, 00:32:04.195 "num_base_bdevs_discovered": 2, 00:32:04.195 "num_base_bdevs_operational": 2, 00:32:04.195 "base_bdevs_list": [ 00:32:04.195 { 00:32:04.195 "name": null, 00:32:04.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:04.195 "is_configured": false, 00:32:04.195 "data_offset": 2048, 00:32:04.195 "data_size": 63488 00:32:04.195 }, 00:32:04.195 { 00:32:04.195 "name": null, 00:32:04.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:04.195 "is_configured": false, 00:32:04.195 "data_offset": 2048, 00:32:04.195 "data_size": 63488 00:32:04.195 }, 00:32:04.195 { 00:32:04.195 "name": "BaseBdev3", 00:32:04.195 "uuid": "3e1a2d9f-b382-5b39-9344-c074dc543b5e", 00:32:04.195 "is_configured": true, 00:32:04.195 "data_offset": 2048, 00:32:04.195 "data_size": 63488 00:32:04.195 }, 00:32:04.195 { 00:32:04.195 "name": "BaseBdev4", 00:32:04.195 "uuid": "ea509297-3636-5c60-8bf5-146a9ce1a113", 00:32:04.195 "is_configured": true, 00:32:04.195 "data_offset": 2048, 00:32:04.195 "data_size": 63488 00:32:04.195 } 00:32:04.195 ] 00:32:04.195 }' 00:32:04.195 00:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:04.195 00:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:04.763 00:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:04.763 00:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:04.763 00:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:04.763 00:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:04.763 00:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:04.763 00:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:04.763 00:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:04.763 00:58:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:04.763 "name": "raid_bdev1", 00:32:04.763 "uuid": "bd34769d-49de-49a2-99e7-b03fc3b43840", 00:32:04.763 "strip_size_kb": 0, 00:32:04.763 "state": "online", 00:32:04.763 "raid_level": "raid1", 00:32:04.763 "superblock": true, 00:32:04.763 "num_base_bdevs": 4, 00:32:04.763 "num_base_bdevs_discovered": 2, 00:32:04.763 "num_base_bdevs_operational": 2, 00:32:04.763 "base_bdevs_list": [ 00:32:04.763 { 00:32:04.763 "name": null, 00:32:04.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:04.763 "is_configured": false, 00:32:04.763 "data_offset": 2048, 00:32:04.763 "data_size": 63488 00:32:04.763 }, 00:32:04.763 { 00:32:04.763 "name": null, 00:32:04.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:04.763 "is_configured": false, 00:32:04.763 "data_offset": 2048, 00:32:04.763 "data_size": 63488 00:32:04.763 }, 00:32:04.763 { 00:32:04.763 "name": "BaseBdev3", 00:32:04.763 "uuid": "3e1a2d9f-b382-5b39-9344-c074dc543b5e", 00:32:04.763 "is_configured": true, 00:32:04.763 "data_offset": 2048, 00:32:04.763 "data_size": 63488 00:32:04.763 }, 00:32:04.763 { 00:32:04.763 "name": "BaseBdev4", 00:32:04.763 "uuid": "ea509297-3636-5c60-8bf5-146a9ce1a113", 00:32:04.763 "is_configured": true, 00:32:04.763 "data_offset": 2048, 00:32:04.763 "data_size": 63488 00:32:04.763 } 00:32:04.763 ] 00:32:04.763 }' 00:32:04.763 00:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:05.022 00:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:05.022 00:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:05.022 00:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:05.022 00:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 149505 00:32:05.022 00:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@948 -- # '[' -z 149505 ']' 00:32:05.022 00:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # kill -0 149505 00:32:05.022 00:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # uname 00:32:05.022 00:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:05.022 00:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 149505 00:32:05.022 00:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:05.022 00:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:05.022 00:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 149505' 00:32:05.022 killing process with pid 149505 00:32:05.022 00:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # kill 149505 00:32:05.022 Received shutdown signal, test time was about 26.108288 seconds 00:32:05.022 00:32:05.022 Latency(us) 00:32:05.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:05.022 =================================================================================================================== 00:32:05.022 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:05.022 00:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # wait 149505 00:32:05.022 
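(The killprocess call traced above is the common autotest teardown helper; minus its guards it reduces to the kill/wait pair sketched here, using the pid from this run.)

pid=149505                          # test application started earlier in this run
kill -0 "$pid"                      # confirm the process still exists
ps --no-headers -o comm= "$pid"     # sanity check: reactor_0, not sudo
echo "killing process with pid $pid"
kill "$pid"                         # SIGTERM; SPDK prints the shutdown/latency summary seen above
wait "$pid"                         # reap it so a non-zero exit status would fail the test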
[2024-07-25 00:58:27.491510] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:05.022 [2024-07-25 00:58:27.491766] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:05.022 [2024-07-25 00:58:27.491899] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:05.022 [2024-07-25 00:58:27.491987] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:32:05.281 [2024-07-25 00:58:27.877100] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:06.657 00:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:32:06.657 00:32:06.657 real 0m32.576s 00:32:06.657 user 0m50.402s 00:32:06.657 sys 0m4.425s 00:32:06.657 00:58:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:06.657 00:58:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:06.657 ************************************ 00:32:06.657 END TEST raid_rebuild_test_sb_io 00:32:06.657 ************************************ 00:32:06.657 00:58:29 bdev_raid -- bdev/bdev_raid.sh@884 -- # '[' y == y ']' 00:32:06.657 00:58:29 bdev_raid -- bdev/bdev_raid.sh@885 -- # for n in {3..4} 00:32:06.657 00:58:29 bdev_raid -- bdev/bdev_raid.sh@886 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:32:06.657 00:58:29 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:32:06.657 00:58:29 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:06.657 00:58:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:06.657 ************************************ 00:32:06.657 START TEST raid5f_state_function_test 00:32:06.657 ************************************ 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid5f 3 false 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=150401 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 150401' 00:32:06.657 Process raid pid: 150401 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 150401 /var/tmp/spdk-raid.sock 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 150401 ']' 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:06.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:06.657 00:58:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:06.657 [2024-07-25 00:58:29.299728] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
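(The raid5f state-function test talks to its own bdev_svc instance over a dedicated RPC socket. The launch-and-wait step traced above amounts to roughly the following sketch; the binary path and socket are taken from the log, while backgrounding with & and capturing $! is an assumption about how pid 150401 was obtained.)

/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
    -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!                                         # 150401 in this run
echo "Process raid pid: $raid_pid"
waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock   # block until the socket accepts RPCs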
00:32:06.658 [2024-07-25 00:58:29.300344] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:06.917 [2024-07-25 00:58:29.483688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.176 [2024-07-25 00:58:29.678960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.435 [2024-07-25 00:58:29.872051] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:07.694 00:58:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:07.694 00:58:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:32:07.694 00:58:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:32:07.953 [2024-07-25 00:58:30.441450] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:07.953 [2024-07-25 00:58:30.441688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:07.953 [2024-07-25 00:58:30.441771] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:07.953 [2024-07-25 00:58:30.441833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:07.953 [2024-07-25 00:58:30.442003] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:07.953 [2024-07-25 00:58:30.442051] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:07.953 00:58:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:07.953 00:58:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:07.953 00:58:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:07.953 00:58:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:07.953 00:58:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:07.953 00:58:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:07.953 00:58:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:07.953 00:58:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:07.953 00:58:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:07.953 00:58:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:07.954 00:58:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:07.954 00:58:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:08.213 00:58:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:08.213 "name": "Existed_Raid", 00:32:08.213 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:32:08.213 "strip_size_kb": 64, 00:32:08.213 "state": "configuring", 00:32:08.213 "raid_level": "raid5f", 00:32:08.213 "superblock": false, 00:32:08.213 "num_base_bdevs": 3, 00:32:08.213 "num_base_bdevs_discovered": 0, 00:32:08.213 "num_base_bdevs_operational": 3, 00:32:08.213 "base_bdevs_list": [ 00:32:08.213 { 00:32:08.213 "name": "BaseBdev1", 00:32:08.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:08.213 "is_configured": false, 00:32:08.213 "data_offset": 0, 00:32:08.213 "data_size": 0 00:32:08.213 }, 00:32:08.213 { 00:32:08.213 "name": "BaseBdev2", 00:32:08.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:08.213 "is_configured": false, 00:32:08.213 "data_offset": 0, 00:32:08.213 "data_size": 0 00:32:08.213 }, 00:32:08.213 { 00:32:08.213 "name": "BaseBdev3", 00:32:08.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:08.213 "is_configured": false, 00:32:08.213 "data_offset": 0, 00:32:08.213 "data_size": 0 00:32:08.213 } 00:32:08.213 ] 00:32:08.213 }' 00:32:08.213 00:58:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:08.213 00:58:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:08.781 00:58:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:09.040 [2024-07-25 00:58:31.449484] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:09.040 [2024-07-25 00:58:31.449681] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:32:09.040 00:58:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:32:09.299 [2024-07-25 00:58:31.709561] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:09.299 [2024-07-25 00:58:31.709777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:09.299 [2024-07-25 00:58:31.709884] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:09.299 [2024-07-25 00:58:31.709974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:09.299 [2024-07-25 00:58:31.710040] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:09.299 [2024-07-25 00:58:31.710088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:09.299 00:58:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:32:09.558 [2024-07-25 00:58:31.992734] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:09.558 BaseBdev1 00:32:09.558 00:58:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:32:09.558 00:58:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:32:09.558 00:58:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:09.558 00:58:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:32:09.558 
00:58:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:09.558 00:58:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:09.558 00:58:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:09.558 00:58:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:09.817 [ 00:32:09.817 { 00:32:09.817 "name": "BaseBdev1", 00:32:09.817 "aliases": [ 00:32:09.817 "d19aa543-d1d2-4046-8852-97981279c288" 00:32:09.817 ], 00:32:09.817 "product_name": "Malloc disk", 00:32:09.817 "block_size": 512, 00:32:09.817 "num_blocks": 65536, 00:32:09.817 "uuid": "d19aa543-d1d2-4046-8852-97981279c288", 00:32:09.817 "assigned_rate_limits": { 00:32:09.817 "rw_ios_per_sec": 0, 00:32:09.817 "rw_mbytes_per_sec": 0, 00:32:09.817 "r_mbytes_per_sec": 0, 00:32:09.817 "w_mbytes_per_sec": 0 00:32:09.817 }, 00:32:09.817 "claimed": true, 00:32:09.817 "claim_type": "exclusive_write", 00:32:09.817 "zoned": false, 00:32:09.817 "supported_io_types": { 00:32:09.817 "read": true, 00:32:09.817 "write": true, 00:32:09.817 "unmap": true, 00:32:09.817 "flush": true, 00:32:09.817 "reset": true, 00:32:09.817 "nvme_admin": false, 00:32:09.817 "nvme_io": false, 00:32:09.817 "nvme_io_md": false, 00:32:09.817 "write_zeroes": true, 00:32:09.817 "zcopy": true, 00:32:09.817 "get_zone_info": false, 00:32:09.817 "zone_management": false, 00:32:09.817 "zone_append": false, 00:32:09.817 "compare": false, 00:32:09.817 "compare_and_write": false, 00:32:09.817 "abort": true, 00:32:09.817 "seek_hole": false, 00:32:09.817 "seek_data": false, 00:32:09.817 "copy": true, 00:32:09.817 "nvme_iov_md": false 00:32:09.817 }, 00:32:09.817 "memory_domains": [ 00:32:09.817 { 00:32:09.817 "dma_device_id": "system", 00:32:09.818 "dma_device_type": 1 00:32:09.818 }, 00:32:09.818 { 00:32:09.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:09.818 "dma_device_type": 2 00:32:09.818 } 00:32:09.818 ], 00:32:09.818 "driver_specific": {} 00:32:09.818 } 00:32:09.818 ] 00:32:09.818 00:58:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:32:09.818 00:58:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:09.818 00:58:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:09.818 00:58:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:09.818 00:58:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:09.818 00:58:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:09.818 00:58:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:09.818 00:58:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:09.818 00:58:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:09.818 00:58:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:09.818 00:58:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
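(verify_raid_bdev_state, used throughout this log, is essentially a bdev_raid_get_bdevs query filtered to the raid in question and compared field by field. A rough reduction of the check that follows; the explicit .state and .num_base_bdevs_discovered comparisons are illustrative, the helper performs the equivalent checks internally.)

raid_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
# only BaseBdev1 exists so far, so the array must still be assembling
[[ $(jq -r '.state' <<< "$raid_bdev_info") == "configuring" ]]
[[ $(jq -r '.num_base_bdevs_discovered' <<< "$raid_bdev_info") == 1 ]]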
00:32:09.818 00:58:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:09.818 00:58:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:10.077 00:58:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:10.077 "name": "Existed_Raid", 00:32:10.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:10.077 "strip_size_kb": 64, 00:32:10.077 "state": "configuring", 00:32:10.077 "raid_level": "raid5f", 00:32:10.077 "superblock": false, 00:32:10.077 "num_base_bdevs": 3, 00:32:10.077 "num_base_bdevs_discovered": 1, 00:32:10.077 "num_base_bdevs_operational": 3, 00:32:10.077 "base_bdevs_list": [ 00:32:10.077 { 00:32:10.077 "name": "BaseBdev1", 00:32:10.077 "uuid": "d19aa543-d1d2-4046-8852-97981279c288", 00:32:10.077 "is_configured": true, 00:32:10.077 "data_offset": 0, 00:32:10.077 "data_size": 65536 00:32:10.077 }, 00:32:10.077 { 00:32:10.077 "name": "BaseBdev2", 00:32:10.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:10.077 "is_configured": false, 00:32:10.077 "data_offset": 0, 00:32:10.077 "data_size": 0 00:32:10.077 }, 00:32:10.077 { 00:32:10.077 "name": "BaseBdev3", 00:32:10.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:10.077 "is_configured": false, 00:32:10.077 "data_offset": 0, 00:32:10.077 "data_size": 0 00:32:10.077 } 00:32:10.077 ] 00:32:10.077 }' 00:32:10.077 00:58:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:10.077 00:58:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:10.644 00:58:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:10.644 [2024-07-25 00:58:33.288985] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:10.644 [2024-07-25 00:58:33.289176] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:32:10.911 00:58:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:32:10.911 [2024-07-25 00:58:33.553083] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:10.911 [2024-07-25 00:58:33.555159] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:10.911 [2024-07-25 00:58:33.555352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:10.911 [2024-07-25 00:58:33.555434] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:10.911 [2024-07-25 00:58:33.555516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:11.174 00:58:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:32:11.174 00:58:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:11.174 00:58:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:11.174 00:58:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:32:11.174 00:58:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:11.174 00:58:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:11.174 00:58:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:11.174 00:58:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:11.174 00:58:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:11.174 00:58:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:11.174 00:58:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:11.174 00:58:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:11.174 00:58:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:11.174 00:58:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:11.432 00:58:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:11.432 "name": "Existed_Raid", 00:32:11.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:11.432 "strip_size_kb": 64, 00:32:11.432 "state": "configuring", 00:32:11.432 "raid_level": "raid5f", 00:32:11.432 "superblock": false, 00:32:11.432 "num_base_bdevs": 3, 00:32:11.432 "num_base_bdevs_discovered": 1, 00:32:11.432 "num_base_bdevs_operational": 3, 00:32:11.432 "base_bdevs_list": [ 00:32:11.432 { 00:32:11.432 "name": "BaseBdev1", 00:32:11.433 "uuid": "d19aa543-d1d2-4046-8852-97981279c288", 00:32:11.433 "is_configured": true, 00:32:11.433 "data_offset": 0, 00:32:11.433 "data_size": 65536 00:32:11.433 }, 00:32:11.433 { 00:32:11.433 "name": "BaseBdev2", 00:32:11.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:11.433 "is_configured": false, 00:32:11.433 "data_offset": 0, 00:32:11.433 "data_size": 0 00:32:11.433 }, 00:32:11.433 { 00:32:11.433 "name": "BaseBdev3", 00:32:11.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:11.433 "is_configured": false, 00:32:11.433 "data_offset": 0, 00:32:11.433 "data_size": 0 00:32:11.433 } 00:32:11.433 ] 00:32:11.433 }' 00:32:11.433 00:58:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:11.433 00:58:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:12.001 00:58:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:32:12.001 [2024-07-25 00:58:34.596679] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:12.001 BaseBdev2 00:32:12.001 00:58:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:32:12.001 00:58:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:32:12.001 00:58:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:12.001 00:58:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:32:12.001 00:58:34 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:12.001 00:58:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:12.001 00:58:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:12.301 00:58:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:12.572 [ 00:32:12.572 { 00:32:12.572 "name": "BaseBdev2", 00:32:12.572 "aliases": [ 00:32:12.572 "a4b889fc-3611-49e9-8cd3-ebbc7d3c4e24" 00:32:12.572 ], 00:32:12.572 "product_name": "Malloc disk", 00:32:12.572 "block_size": 512, 00:32:12.572 "num_blocks": 65536, 00:32:12.572 "uuid": "a4b889fc-3611-49e9-8cd3-ebbc7d3c4e24", 00:32:12.572 "assigned_rate_limits": { 00:32:12.572 "rw_ios_per_sec": 0, 00:32:12.572 "rw_mbytes_per_sec": 0, 00:32:12.572 "r_mbytes_per_sec": 0, 00:32:12.572 "w_mbytes_per_sec": 0 00:32:12.572 }, 00:32:12.572 "claimed": true, 00:32:12.572 "claim_type": "exclusive_write", 00:32:12.572 "zoned": false, 00:32:12.572 "supported_io_types": { 00:32:12.572 "read": true, 00:32:12.572 "write": true, 00:32:12.572 "unmap": true, 00:32:12.572 "flush": true, 00:32:12.572 "reset": true, 00:32:12.572 "nvme_admin": false, 00:32:12.572 "nvme_io": false, 00:32:12.572 "nvme_io_md": false, 00:32:12.572 "write_zeroes": true, 00:32:12.572 "zcopy": true, 00:32:12.572 "get_zone_info": false, 00:32:12.572 "zone_management": false, 00:32:12.572 "zone_append": false, 00:32:12.572 "compare": false, 00:32:12.572 "compare_and_write": false, 00:32:12.572 "abort": true, 00:32:12.572 "seek_hole": false, 00:32:12.572 "seek_data": false, 00:32:12.572 "copy": true, 00:32:12.572 "nvme_iov_md": false 00:32:12.572 }, 00:32:12.572 "memory_domains": [ 00:32:12.572 { 00:32:12.572 "dma_device_id": "system", 00:32:12.572 "dma_device_type": 1 00:32:12.572 }, 00:32:12.572 { 00:32:12.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:12.572 "dma_device_type": 2 00:32:12.572 } 00:32:12.572 ], 00:32:12.572 "driver_specific": {} 00:32:12.572 } 00:32:12.572 ] 00:32:12.572 00:58:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:32:12.572 00:58:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:32:12.572 00:58:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:12.572 00:58:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:12.572 00:58:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:12.572 00:58:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:12.572 00:58:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:12.572 00:58:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:12.572 00:58:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:12.572 00:58:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:12.572 00:58:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:12.572 00:58:35 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:12.572 00:58:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:12.572 00:58:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:12.572 00:58:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:12.856 00:58:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:12.856 "name": "Existed_Raid", 00:32:12.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:12.856 "strip_size_kb": 64, 00:32:12.856 "state": "configuring", 00:32:12.856 "raid_level": "raid5f", 00:32:12.856 "superblock": false, 00:32:12.856 "num_base_bdevs": 3, 00:32:12.856 "num_base_bdevs_discovered": 2, 00:32:12.856 "num_base_bdevs_operational": 3, 00:32:12.856 "base_bdevs_list": [ 00:32:12.856 { 00:32:12.856 "name": "BaseBdev1", 00:32:12.856 "uuid": "d19aa543-d1d2-4046-8852-97981279c288", 00:32:12.856 "is_configured": true, 00:32:12.856 "data_offset": 0, 00:32:12.856 "data_size": 65536 00:32:12.856 }, 00:32:12.856 { 00:32:12.856 "name": "BaseBdev2", 00:32:12.856 "uuid": "a4b889fc-3611-49e9-8cd3-ebbc7d3c4e24", 00:32:12.856 "is_configured": true, 00:32:12.856 "data_offset": 0, 00:32:12.856 "data_size": 65536 00:32:12.856 }, 00:32:12.856 { 00:32:12.856 "name": "BaseBdev3", 00:32:12.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:12.856 "is_configured": false, 00:32:12.856 "data_offset": 0, 00:32:12.856 "data_size": 0 00:32:12.856 } 00:32:12.856 ] 00:32:12.856 }' 00:32:12.856 00:58:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:12.856 00:58:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:13.114 00:58:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:32:13.396 [2024-07-25 00:58:36.041586] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:13.396 [2024-07-25 00:58:36.041855] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:32:13.396 [2024-07-25 00:58:36.041897] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:32:13.396 [2024-07-25 00:58:36.042129] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:32:13.396 [2024-07-25 00:58:36.047864] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:32:13.396 [2024-07-25 00:58:36.048001] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:32:13.396 [2024-07-25 00:58:36.048332] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:13.729 BaseBdev3 00:32:13.729 00:58:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:32:13.729 00:58:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:32:13.729 00:58:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:13.729 00:58:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:32:13.729 00:58:36 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:13.729 00:58:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:13.729 00:58:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:13.729 00:58:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:14.008 [ 00:32:14.008 { 00:32:14.008 "name": "BaseBdev3", 00:32:14.008 "aliases": [ 00:32:14.008 "e61a9893-0063-4741-8de5-c238e95e0b92" 00:32:14.008 ], 00:32:14.008 "product_name": "Malloc disk", 00:32:14.009 "block_size": 512, 00:32:14.009 "num_blocks": 65536, 00:32:14.009 "uuid": "e61a9893-0063-4741-8de5-c238e95e0b92", 00:32:14.009 "assigned_rate_limits": { 00:32:14.009 "rw_ios_per_sec": 0, 00:32:14.009 "rw_mbytes_per_sec": 0, 00:32:14.009 "r_mbytes_per_sec": 0, 00:32:14.009 "w_mbytes_per_sec": 0 00:32:14.009 }, 00:32:14.009 "claimed": true, 00:32:14.009 "claim_type": "exclusive_write", 00:32:14.009 "zoned": false, 00:32:14.009 "supported_io_types": { 00:32:14.009 "read": true, 00:32:14.009 "write": true, 00:32:14.009 "unmap": true, 00:32:14.009 "flush": true, 00:32:14.009 "reset": true, 00:32:14.009 "nvme_admin": false, 00:32:14.009 "nvme_io": false, 00:32:14.009 "nvme_io_md": false, 00:32:14.009 "write_zeroes": true, 00:32:14.009 "zcopy": true, 00:32:14.009 "get_zone_info": false, 00:32:14.009 "zone_management": false, 00:32:14.009 "zone_append": false, 00:32:14.009 "compare": false, 00:32:14.009 "compare_and_write": false, 00:32:14.009 "abort": true, 00:32:14.009 "seek_hole": false, 00:32:14.009 "seek_data": false, 00:32:14.009 "copy": true, 00:32:14.009 "nvme_iov_md": false 00:32:14.009 }, 00:32:14.009 "memory_domains": [ 00:32:14.009 { 00:32:14.009 "dma_device_id": "system", 00:32:14.009 "dma_device_type": 1 00:32:14.009 }, 00:32:14.009 { 00:32:14.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:14.009 "dma_device_type": 2 00:32:14.009 } 00:32:14.009 ], 00:32:14.009 "driver_specific": {} 00:32:14.009 } 00:32:14.009 ] 00:32:14.009 00:58:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:32:14.009 00:58:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:32:14.009 00:58:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:14.009 00:58:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:32:14.009 00:58:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:14.009 00:58:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:14.009 00:58:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:14.009 00:58:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:14.009 00:58:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:14.009 00:58:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:14.009 00:58:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:14.009 00:58:36 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:14.009 00:58:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:14.009 00:58:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:14.009 00:58:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:14.009 00:58:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:14.009 "name": "Existed_Raid", 00:32:14.009 "uuid": "1a6cea3f-1ea8-42df-9a91-fba3f1c1d0bf", 00:32:14.009 "strip_size_kb": 64, 00:32:14.009 "state": "online", 00:32:14.009 "raid_level": "raid5f", 00:32:14.009 "superblock": false, 00:32:14.009 "num_base_bdevs": 3, 00:32:14.009 "num_base_bdevs_discovered": 3, 00:32:14.009 "num_base_bdevs_operational": 3, 00:32:14.009 "base_bdevs_list": [ 00:32:14.009 { 00:32:14.009 "name": "BaseBdev1", 00:32:14.009 "uuid": "d19aa543-d1d2-4046-8852-97981279c288", 00:32:14.009 "is_configured": true, 00:32:14.009 "data_offset": 0, 00:32:14.009 "data_size": 65536 00:32:14.009 }, 00:32:14.009 { 00:32:14.009 "name": "BaseBdev2", 00:32:14.009 "uuid": "a4b889fc-3611-49e9-8cd3-ebbc7d3c4e24", 00:32:14.009 "is_configured": true, 00:32:14.009 "data_offset": 0, 00:32:14.009 "data_size": 65536 00:32:14.009 }, 00:32:14.009 { 00:32:14.009 "name": "BaseBdev3", 00:32:14.009 "uuid": "e61a9893-0063-4741-8de5-c238e95e0b92", 00:32:14.009 "is_configured": true, 00:32:14.009 "data_offset": 0, 00:32:14.009 "data_size": 65536 00:32:14.009 } 00:32:14.009 ] 00:32:14.009 }' 00:32:14.009 00:58:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:14.009 00:58:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:14.682 00:58:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:32:14.682 00:58:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:32:14.682 00:58:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:14.682 00:58:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:14.682 00:58:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:14.682 00:58:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:32:14.682 00:58:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:32:14.682 00:58:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:14.962 [2024-07-25 00:58:37.464203] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:14.962 00:58:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:14.962 "name": "Existed_Raid", 00:32:14.962 "aliases": [ 00:32:14.962 "1a6cea3f-1ea8-42df-9a91-fba3f1c1d0bf" 00:32:14.962 ], 00:32:14.962 "product_name": "Raid Volume", 00:32:14.962 "block_size": 512, 00:32:14.962 "num_blocks": 131072, 00:32:14.962 "uuid": "1a6cea3f-1ea8-42df-9a91-fba3f1c1d0bf", 00:32:14.962 "assigned_rate_limits": { 00:32:14.962 "rw_ios_per_sec": 0, 00:32:14.962 "rw_mbytes_per_sec": 0, 00:32:14.962 "r_mbytes_per_sec": 0, 00:32:14.962 
"w_mbytes_per_sec": 0 00:32:14.962 }, 00:32:14.962 "claimed": false, 00:32:14.962 "zoned": false, 00:32:14.962 "supported_io_types": { 00:32:14.962 "read": true, 00:32:14.962 "write": true, 00:32:14.962 "unmap": false, 00:32:14.962 "flush": false, 00:32:14.962 "reset": true, 00:32:14.962 "nvme_admin": false, 00:32:14.962 "nvme_io": false, 00:32:14.962 "nvme_io_md": false, 00:32:14.962 "write_zeroes": true, 00:32:14.962 "zcopy": false, 00:32:14.962 "get_zone_info": false, 00:32:14.962 "zone_management": false, 00:32:14.962 "zone_append": false, 00:32:14.962 "compare": false, 00:32:14.962 "compare_and_write": false, 00:32:14.962 "abort": false, 00:32:14.962 "seek_hole": false, 00:32:14.962 "seek_data": false, 00:32:14.962 "copy": false, 00:32:14.962 "nvme_iov_md": false 00:32:14.962 }, 00:32:14.962 "driver_specific": { 00:32:14.962 "raid": { 00:32:14.962 "uuid": "1a6cea3f-1ea8-42df-9a91-fba3f1c1d0bf", 00:32:14.962 "strip_size_kb": 64, 00:32:14.962 "state": "online", 00:32:14.962 "raid_level": "raid5f", 00:32:14.962 "superblock": false, 00:32:14.962 "num_base_bdevs": 3, 00:32:14.962 "num_base_bdevs_discovered": 3, 00:32:14.962 "num_base_bdevs_operational": 3, 00:32:14.962 "base_bdevs_list": [ 00:32:14.962 { 00:32:14.962 "name": "BaseBdev1", 00:32:14.962 "uuid": "d19aa543-d1d2-4046-8852-97981279c288", 00:32:14.962 "is_configured": true, 00:32:14.962 "data_offset": 0, 00:32:14.962 "data_size": 65536 00:32:14.962 }, 00:32:14.962 { 00:32:14.962 "name": "BaseBdev2", 00:32:14.962 "uuid": "a4b889fc-3611-49e9-8cd3-ebbc7d3c4e24", 00:32:14.962 "is_configured": true, 00:32:14.962 "data_offset": 0, 00:32:14.962 "data_size": 65536 00:32:14.962 }, 00:32:14.962 { 00:32:14.962 "name": "BaseBdev3", 00:32:14.962 "uuid": "e61a9893-0063-4741-8de5-c238e95e0b92", 00:32:14.962 "is_configured": true, 00:32:14.962 "data_offset": 0, 00:32:14.962 "data_size": 65536 00:32:14.962 } 00:32:14.962 ] 00:32:14.962 } 00:32:14.962 } 00:32:14.962 }' 00:32:14.962 00:58:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:14.962 00:58:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:32:14.962 BaseBdev2 00:32:14.962 BaseBdev3' 00:32:14.962 00:58:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:14.962 00:58:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:32:14.962 00:58:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:15.220 00:58:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:15.220 "name": "BaseBdev1", 00:32:15.220 "aliases": [ 00:32:15.220 "d19aa543-d1d2-4046-8852-97981279c288" 00:32:15.220 ], 00:32:15.220 "product_name": "Malloc disk", 00:32:15.220 "block_size": 512, 00:32:15.220 "num_blocks": 65536, 00:32:15.220 "uuid": "d19aa543-d1d2-4046-8852-97981279c288", 00:32:15.220 "assigned_rate_limits": { 00:32:15.220 "rw_ios_per_sec": 0, 00:32:15.220 "rw_mbytes_per_sec": 0, 00:32:15.220 "r_mbytes_per_sec": 0, 00:32:15.220 "w_mbytes_per_sec": 0 00:32:15.220 }, 00:32:15.220 "claimed": true, 00:32:15.220 "claim_type": "exclusive_write", 00:32:15.220 "zoned": false, 00:32:15.220 "supported_io_types": { 00:32:15.220 "read": true, 00:32:15.220 "write": true, 00:32:15.220 "unmap": true, 00:32:15.220 "flush": true, 00:32:15.220 
"reset": true, 00:32:15.220 "nvme_admin": false, 00:32:15.220 "nvme_io": false, 00:32:15.220 "nvme_io_md": false, 00:32:15.220 "write_zeroes": true, 00:32:15.220 "zcopy": true, 00:32:15.220 "get_zone_info": false, 00:32:15.220 "zone_management": false, 00:32:15.220 "zone_append": false, 00:32:15.220 "compare": false, 00:32:15.220 "compare_and_write": false, 00:32:15.220 "abort": true, 00:32:15.220 "seek_hole": false, 00:32:15.220 "seek_data": false, 00:32:15.220 "copy": true, 00:32:15.220 "nvme_iov_md": false 00:32:15.220 }, 00:32:15.220 "memory_domains": [ 00:32:15.220 { 00:32:15.220 "dma_device_id": "system", 00:32:15.220 "dma_device_type": 1 00:32:15.220 }, 00:32:15.220 { 00:32:15.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:15.220 "dma_device_type": 2 00:32:15.220 } 00:32:15.220 ], 00:32:15.220 "driver_specific": {} 00:32:15.220 }' 00:32:15.220 00:58:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:15.220 00:58:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:15.220 00:58:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:15.220 00:58:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:15.220 00:58:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:15.220 00:58:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:15.220 00:58:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:15.478 00:58:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:15.478 00:58:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:15.478 00:58:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:15.478 00:58:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:15.479 00:58:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:15.479 00:58:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:15.479 00:58:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:32:15.479 00:58:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:15.736 00:58:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:15.736 "name": "BaseBdev2", 00:32:15.736 "aliases": [ 00:32:15.736 "a4b889fc-3611-49e9-8cd3-ebbc7d3c4e24" 00:32:15.736 ], 00:32:15.736 "product_name": "Malloc disk", 00:32:15.736 "block_size": 512, 00:32:15.736 "num_blocks": 65536, 00:32:15.736 "uuid": "a4b889fc-3611-49e9-8cd3-ebbc7d3c4e24", 00:32:15.736 "assigned_rate_limits": { 00:32:15.736 "rw_ios_per_sec": 0, 00:32:15.736 "rw_mbytes_per_sec": 0, 00:32:15.736 "r_mbytes_per_sec": 0, 00:32:15.736 "w_mbytes_per_sec": 0 00:32:15.736 }, 00:32:15.736 "claimed": true, 00:32:15.736 "claim_type": "exclusive_write", 00:32:15.736 "zoned": false, 00:32:15.736 "supported_io_types": { 00:32:15.736 "read": true, 00:32:15.736 "write": true, 00:32:15.736 "unmap": true, 00:32:15.736 "flush": true, 00:32:15.736 "reset": true, 00:32:15.736 "nvme_admin": false, 00:32:15.736 "nvme_io": false, 00:32:15.736 "nvme_io_md": false, 00:32:15.736 "write_zeroes": true, 00:32:15.736 
"zcopy": true, 00:32:15.736 "get_zone_info": false, 00:32:15.736 "zone_management": false, 00:32:15.736 "zone_append": false, 00:32:15.736 "compare": false, 00:32:15.736 "compare_and_write": false, 00:32:15.736 "abort": true, 00:32:15.736 "seek_hole": false, 00:32:15.736 "seek_data": false, 00:32:15.736 "copy": true, 00:32:15.736 "nvme_iov_md": false 00:32:15.736 }, 00:32:15.736 "memory_domains": [ 00:32:15.736 { 00:32:15.736 "dma_device_id": "system", 00:32:15.736 "dma_device_type": 1 00:32:15.736 }, 00:32:15.737 { 00:32:15.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:15.737 "dma_device_type": 2 00:32:15.737 } 00:32:15.737 ], 00:32:15.737 "driver_specific": {} 00:32:15.737 }' 00:32:15.737 00:58:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:15.737 00:58:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:15.737 00:58:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:15.737 00:58:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:15.737 00:58:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:15.994 00:58:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:15.994 00:58:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:15.994 00:58:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:15.994 00:58:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:15.994 00:58:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:15.994 00:58:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:15.994 00:58:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:15.994 00:58:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:15.994 00:58:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:32:15.994 00:58:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:16.251 00:58:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:16.251 "name": "BaseBdev3", 00:32:16.251 "aliases": [ 00:32:16.251 "e61a9893-0063-4741-8de5-c238e95e0b92" 00:32:16.251 ], 00:32:16.251 "product_name": "Malloc disk", 00:32:16.251 "block_size": 512, 00:32:16.251 "num_blocks": 65536, 00:32:16.251 "uuid": "e61a9893-0063-4741-8de5-c238e95e0b92", 00:32:16.251 "assigned_rate_limits": { 00:32:16.251 "rw_ios_per_sec": 0, 00:32:16.251 "rw_mbytes_per_sec": 0, 00:32:16.251 "r_mbytes_per_sec": 0, 00:32:16.251 "w_mbytes_per_sec": 0 00:32:16.251 }, 00:32:16.251 "claimed": true, 00:32:16.251 "claim_type": "exclusive_write", 00:32:16.251 "zoned": false, 00:32:16.251 "supported_io_types": { 00:32:16.251 "read": true, 00:32:16.251 "write": true, 00:32:16.251 "unmap": true, 00:32:16.251 "flush": true, 00:32:16.251 "reset": true, 00:32:16.251 "nvme_admin": false, 00:32:16.251 "nvme_io": false, 00:32:16.251 "nvme_io_md": false, 00:32:16.251 "write_zeroes": true, 00:32:16.251 "zcopy": true, 00:32:16.251 "get_zone_info": false, 00:32:16.251 "zone_management": false, 00:32:16.251 "zone_append": false, 00:32:16.251 "compare": false, 
00:32:16.251 "compare_and_write": false, 00:32:16.251 "abort": true, 00:32:16.251 "seek_hole": false, 00:32:16.251 "seek_data": false, 00:32:16.251 "copy": true, 00:32:16.251 "nvme_iov_md": false 00:32:16.251 }, 00:32:16.251 "memory_domains": [ 00:32:16.251 { 00:32:16.251 "dma_device_id": "system", 00:32:16.251 "dma_device_type": 1 00:32:16.251 }, 00:32:16.251 { 00:32:16.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:16.251 "dma_device_type": 2 00:32:16.251 } 00:32:16.251 ], 00:32:16.251 "driver_specific": {} 00:32:16.251 }' 00:32:16.251 00:58:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:16.507 00:58:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:16.507 00:58:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:16.507 00:58:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:16.507 00:58:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:16.507 00:58:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:16.507 00:58:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:16.507 00:58:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:16.507 00:58:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:16.507 00:58:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:16.765 00:58:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:16.765 00:58:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:16.765 00:58:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:32:17.022 [2024-07-25 00:58:39.476410] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:17.022 00:58:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:32:17.022 00:58:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:32:17.022 00:58:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:32:17.022 00:58:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:32:17.022 00:58:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:32:17.022 00:58:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:32:17.022 00:58:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:17.022 00:58:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:17.023 00:58:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:17.023 00:58:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:17.023 00:58:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:17.023 00:58:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:17.023 00:58:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:32:17.023 00:58:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:17.023 00:58:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:17.023 00:58:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:17.023 00:58:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:17.280 00:58:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:17.280 "name": "Existed_Raid", 00:32:17.280 "uuid": "1a6cea3f-1ea8-42df-9a91-fba3f1c1d0bf", 00:32:17.280 "strip_size_kb": 64, 00:32:17.280 "state": "online", 00:32:17.280 "raid_level": "raid5f", 00:32:17.280 "superblock": false, 00:32:17.280 "num_base_bdevs": 3, 00:32:17.280 "num_base_bdevs_discovered": 2, 00:32:17.280 "num_base_bdevs_operational": 2, 00:32:17.280 "base_bdevs_list": [ 00:32:17.280 { 00:32:17.280 "name": null, 00:32:17.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:17.280 "is_configured": false, 00:32:17.280 "data_offset": 0, 00:32:17.280 "data_size": 65536 00:32:17.280 }, 00:32:17.280 { 00:32:17.280 "name": "BaseBdev2", 00:32:17.280 "uuid": "a4b889fc-3611-49e9-8cd3-ebbc7d3c4e24", 00:32:17.280 "is_configured": true, 00:32:17.280 "data_offset": 0, 00:32:17.280 "data_size": 65536 00:32:17.280 }, 00:32:17.280 { 00:32:17.280 "name": "BaseBdev3", 00:32:17.280 "uuid": "e61a9893-0063-4741-8de5-c238e95e0b92", 00:32:17.280 "is_configured": true, 00:32:17.280 "data_offset": 0, 00:32:17.280 "data_size": 65536 00:32:17.280 } 00:32:17.280 ] 00:32:17.280 }' 00:32:17.280 00:58:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:17.280 00:58:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:17.846 00:58:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:32:17.846 00:58:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:17.846 00:58:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:17.846 00:58:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:32:18.104 00:58:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:32:18.104 00:58:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:18.104 00:58:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:32:18.362 [2024-07-25 00:58:40.994510] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:18.362 [2024-07-25 00:58:40.994751] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:18.621 [2024-07-25 00:58:41.095737] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:18.621 00:58:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:32:18.621 00:58:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:18.621 00:58:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:18.621 00:58:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:32:18.879 00:58:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:32:18.879 00:58:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:18.879 00:58:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:32:18.879 [2024-07-25 00:58:41.511845] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:18.879 [2024-07-25 00:58:41.512100] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:32:19.137 00:58:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:32:19.137 00:58:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:19.137 00:58:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:19.137 00:58:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:32:19.396 00:58:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:32:19.396 00:58:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:32:19.396 00:58:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:32:19.396 00:58:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:32:19.396 00:58:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:32:19.396 00:58:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:32:19.655 BaseBdev2 00:32:19.655 00:58:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:32:19.655 00:58:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:32:19.655 00:58:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:19.655 00:58:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:32:19.655 00:58:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:19.655 00:58:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:19.655 00:58:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:19.655 00:58:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:19.913 [ 00:32:19.913 { 00:32:19.913 "name": "BaseBdev2", 00:32:19.913 "aliases": [ 00:32:19.913 "1c7a7713-f7b9-47bd-8173-fcf1e6c838fa" 00:32:19.913 ], 00:32:19.913 "product_name": "Malloc disk", 00:32:19.913 "block_size": 512, 00:32:19.913 "num_blocks": 65536, 00:32:19.913 "uuid": 
"1c7a7713-f7b9-47bd-8173-fcf1e6c838fa", 00:32:19.913 "assigned_rate_limits": { 00:32:19.913 "rw_ios_per_sec": 0, 00:32:19.913 "rw_mbytes_per_sec": 0, 00:32:19.913 "r_mbytes_per_sec": 0, 00:32:19.913 "w_mbytes_per_sec": 0 00:32:19.913 }, 00:32:19.913 "claimed": false, 00:32:19.913 "zoned": false, 00:32:19.913 "supported_io_types": { 00:32:19.913 "read": true, 00:32:19.913 "write": true, 00:32:19.913 "unmap": true, 00:32:19.913 "flush": true, 00:32:19.913 "reset": true, 00:32:19.913 "nvme_admin": false, 00:32:19.913 "nvme_io": false, 00:32:19.913 "nvme_io_md": false, 00:32:19.913 "write_zeroes": true, 00:32:19.913 "zcopy": true, 00:32:19.913 "get_zone_info": false, 00:32:19.913 "zone_management": false, 00:32:19.913 "zone_append": false, 00:32:19.913 "compare": false, 00:32:19.913 "compare_and_write": false, 00:32:19.913 "abort": true, 00:32:19.913 "seek_hole": false, 00:32:19.913 "seek_data": false, 00:32:19.913 "copy": true, 00:32:19.913 "nvme_iov_md": false 00:32:19.913 }, 00:32:19.913 "memory_domains": [ 00:32:19.913 { 00:32:19.913 "dma_device_id": "system", 00:32:19.913 "dma_device_type": 1 00:32:19.913 }, 00:32:19.913 { 00:32:19.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:19.913 "dma_device_type": 2 00:32:19.913 } 00:32:19.913 ], 00:32:19.913 "driver_specific": {} 00:32:19.913 } 00:32:19.913 ] 00:32:19.913 00:58:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:32:19.913 00:58:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:32:19.913 00:58:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:32:19.913 00:58:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:32:20.172 BaseBdev3 00:32:20.172 00:58:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:32:20.172 00:58:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:32:20.172 00:58:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:20.172 00:58:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:32:20.172 00:58:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:20.172 00:58:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:20.172 00:58:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:20.172 00:58:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:20.430 [ 00:32:20.430 { 00:32:20.430 "name": "BaseBdev3", 00:32:20.430 "aliases": [ 00:32:20.430 "9d946fe3-4a11-4bbf-821c-6786e8264895" 00:32:20.430 ], 00:32:20.430 "product_name": "Malloc disk", 00:32:20.430 "block_size": 512, 00:32:20.430 "num_blocks": 65536, 00:32:20.430 "uuid": "9d946fe3-4a11-4bbf-821c-6786e8264895", 00:32:20.430 "assigned_rate_limits": { 00:32:20.430 "rw_ios_per_sec": 0, 00:32:20.430 "rw_mbytes_per_sec": 0, 00:32:20.430 "r_mbytes_per_sec": 0, 00:32:20.430 "w_mbytes_per_sec": 0 00:32:20.430 }, 00:32:20.430 "claimed": false, 00:32:20.430 "zoned": false, 00:32:20.430 
"supported_io_types": { 00:32:20.431 "read": true, 00:32:20.431 "write": true, 00:32:20.431 "unmap": true, 00:32:20.431 "flush": true, 00:32:20.431 "reset": true, 00:32:20.431 "nvme_admin": false, 00:32:20.431 "nvme_io": false, 00:32:20.431 "nvme_io_md": false, 00:32:20.431 "write_zeroes": true, 00:32:20.431 "zcopy": true, 00:32:20.431 "get_zone_info": false, 00:32:20.431 "zone_management": false, 00:32:20.431 "zone_append": false, 00:32:20.431 "compare": false, 00:32:20.431 "compare_and_write": false, 00:32:20.431 "abort": true, 00:32:20.431 "seek_hole": false, 00:32:20.431 "seek_data": false, 00:32:20.431 "copy": true, 00:32:20.431 "nvme_iov_md": false 00:32:20.431 }, 00:32:20.431 "memory_domains": [ 00:32:20.431 { 00:32:20.431 "dma_device_id": "system", 00:32:20.431 "dma_device_type": 1 00:32:20.431 }, 00:32:20.431 { 00:32:20.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:20.431 "dma_device_type": 2 00:32:20.431 } 00:32:20.431 ], 00:32:20.431 "driver_specific": {} 00:32:20.431 } 00:32:20.431 ] 00:32:20.431 00:58:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:32:20.431 00:58:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:32:20.431 00:58:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:32:20.431 00:58:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:32:20.690 [2024-07-25 00:58:43.146710] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:20.690 [2024-07-25 00:58:43.146780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:20.690 [2024-07-25 00:58:43.146836] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:20.690 [2024-07-25 00:58:43.148747] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:20.690 00:58:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:20.690 00:58:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:20.690 00:58:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:20.690 00:58:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:20.690 00:58:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:20.690 00:58:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:20.690 00:58:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:20.690 00:58:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:20.690 00:58:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:20.690 00:58:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:20.690 00:58:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:20.690 00:58:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:32:20.948 00:58:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:20.948 "name": "Existed_Raid", 00:32:20.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:20.948 "strip_size_kb": 64, 00:32:20.948 "state": "configuring", 00:32:20.948 "raid_level": "raid5f", 00:32:20.948 "superblock": false, 00:32:20.948 "num_base_bdevs": 3, 00:32:20.948 "num_base_bdevs_discovered": 2, 00:32:20.948 "num_base_bdevs_operational": 3, 00:32:20.948 "base_bdevs_list": [ 00:32:20.948 { 00:32:20.948 "name": "BaseBdev1", 00:32:20.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:20.948 "is_configured": false, 00:32:20.948 "data_offset": 0, 00:32:20.948 "data_size": 0 00:32:20.948 }, 00:32:20.948 { 00:32:20.948 "name": "BaseBdev2", 00:32:20.948 "uuid": "1c7a7713-f7b9-47bd-8173-fcf1e6c838fa", 00:32:20.948 "is_configured": true, 00:32:20.948 "data_offset": 0, 00:32:20.948 "data_size": 65536 00:32:20.948 }, 00:32:20.948 { 00:32:20.948 "name": "BaseBdev3", 00:32:20.948 "uuid": "9d946fe3-4a11-4bbf-821c-6786e8264895", 00:32:20.949 "is_configured": true, 00:32:20.949 "data_offset": 0, 00:32:20.949 "data_size": 65536 00:32:20.949 } 00:32:20.949 ] 00:32:20.949 }' 00:32:20.949 00:58:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:20.949 00:58:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:21.516 00:58:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:32:21.516 [2024-07-25 00:58:44.079029] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:21.516 00:58:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:21.516 00:58:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:21.516 00:58:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:21.516 00:58:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:21.516 00:58:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:21.516 00:58:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:21.516 00:58:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:21.516 00:58:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:21.516 00:58:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:21.516 00:58:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:21.516 00:58:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:21.516 00:58:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:21.774 00:58:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:21.774 "name": "Existed_Raid", 00:32:21.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:21.774 "strip_size_kb": 64, 00:32:21.774 "state": "configuring", 00:32:21.774 
"raid_level": "raid5f", 00:32:21.774 "superblock": false, 00:32:21.774 "num_base_bdevs": 3, 00:32:21.774 "num_base_bdevs_discovered": 1, 00:32:21.774 "num_base_bdevs_operational": 3, 00:32:21.774 "base_bdevs_list": [ 00:32:21.774 { 00:32:21.774 "name": "BaseBdev1", 00:32:21.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:21.774 "is_configured": false, 00:32:21.774 "data_offset": 0, 00:32:21.774 "data_size": 0 00:32:21.774 }, 00:32:21.774 { 00:32:21.774 "name": null, 00:32:21.774 "uuid": "1c7a7713-f7b9-47bd-8173-fcf1e6c838fa", 00:32:21.774 "is_configured": false, 00:32:21.774 "data_offset": 0, 00:32:21.774 "data_size": 65536 00:32:21.774 }, 00:32:21.774 { 00:32:21.774 "name": "BaseBdev3", 00:32:21.774 "uuid": "9d946fe3-4a11-4bbf-821c-6786e8264895", 00:32:21.774 "is_configured": true, 00:32:21.774 "data_offset": 0, 00:32:21.774 "data_size": 65536 00:32:21.774 } 00:32:21.774 ] 00:32:21.774 }' 00:32:21.774 00:58:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:21.774 00:58:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:22.341 00:58:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:22.341 00:58:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:22.600 00:58:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:32:22.600 00:58:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:32:22.858 [2024-07-25 00:58:45.359229] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:22.858 BaseBdev1 00:32:22.858 00:58:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:32:22.858 00:58:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:32:22.858 00:58:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:22.858 00:58:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:32:22.858 00:58:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:22.858 00:58:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:22.858 00:58:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:23.116 00:58:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:23.116 [ 00:32:23.116 { 00:32:23.116 "name": "BaseBdev1", 00:32:23.116 "aliases": [ 00:32:23.116 "1251cc00-5481-4212-a7e3-f47ec0b7b43a" 00:32:23.116 ], 00:32:23.116 "product_name": "Malloc disk", 00:32:23.116 "block_size": 512, 00:32:23.116 "num_blocks": 65536, 00:32:23.116 "uuid": "1251cc00-5481-4212-a7e3-f47ec0b7b43a", 00:32:23.116 "assigned_rate_limits": { 00:32:23.116 "rw_ios_per_sec": 0, 00:32:23.116 "rw_mbytes_per_sec": 0, 00:32:23.116 "r_mbytes_per_sec": 0, 00:32:23.116 "w_mbytes_per_sec": 0 00:32:23.116 }, 00:32:23.116 "claimed": true, 00:32:23.116 "claim_type": 
"exclusive_write", 00:32:23.116 "zoned": false, 00:32:23.116 "supported_io_types": { 00:32:23.116 "read": true, 00:32:23.116 "write": true, 00:32:23.116 "unmap": true, 00:32:23.117 "flush": true, 00:32:23.117 "reset": true, 00:32:23.117 "nvme_admin": false, 00:32:23.117 "nvme_io": false, 00:32:23.117 "nvme_io_md": false, 00:32:23.117 "write_zeroes": true, 00:32:23.117 "zcopy": true, 00:32:23.117 "get_zone_info": false, 00:32:23.117 "zone_management": false, 00:32:23.117 "zone_append": false, 00:32:23.117 "compare": false, 00:32:23.117 "compare_and_write": false, 00:32:23.117 "abort": true, 00:32:23.117 "seek_hole": false, 00:32:23.117 "seek_data": false, 00:32:23.117 "copy": true, 00:32:23.117 "nvme_iov_md": false 00:32:23.117 }, 00:32:23.117 "memory_domains": [ 00:32:23.117 { 00:32:23.117 "dma_device_id": "system", 00:32:23.117 "dma_device_type": 1 00:32:23.117 }, 00:32:23.117 { 00:32:23.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:23.117 "dma_device_type": 2 00:32:23.117 } 00:32:23.117 ], 00:32:23.117 "driver_specific": {} 00:32:23.117 } 00:32:23.117 ] 00:32:23.117 00:58:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:32:23.117 00:58:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:23.117 00:58:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:23.117 00:58:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:23.117 00:58:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:23.117 00:58:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:23.117 00:58:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:23.117 00:58:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:23.117 00:58:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:23.117 00:58:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:23.117 00:58:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:23.117 00:58:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:23.117 00:58:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:23.375 00:58:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:23.375 "name": "Existed_Raid", 00:32:23.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:23.375 "strip_size_kb": 64, 00:32:23.375 "state": "configuring", 00:32:23.375 "raid_level": "raid5f", 00:32:23.375 "superblock": false, 00:32:23.375 "num_base_bdevs": 3, 00:32:23.375 "num_base_bdevs_discovered": 2, 00:32:23.375 "num_base_bdevs_operational": 3, 00:32:23.375 "base_bdevs_list": [ 00:32:23.375 { 00:32:23.375 "name": "BaseBdev1", 00:32:23.375 "uuid": "1251cc00-5481-4212-a7e3-f47ec0b7b43a", 00:32:23.375 "is_configured": true, 00:32:23.375 "data_offset": 0, 00:32:23.375 "data_size": 65536 00:32:23.375 }, 00:32:23.375 { 00:32:23.375 "name": null, 00:32:23.375 "uuid": "1c7a7713-f7b9-47bd-8173-fcf1e6c838fa", 00:32:23.375 "is_configured": false, 
00:32:23.375 "data_offset": 0, 00:32:23.375 "data_size": 65536 00:32:23.375 }, 00:32:23.375 { 00:32:23.375 "name": "BaseBdev3", 00:32:23.375 "uuid": "9d946fe3-4a11-4bbf-821c-6786e8264895", 00:32:23.375 "is_configured": true, 00:32:23.375 "data_offset": 0, 00:32:23.375 "data_size": 65536 00:32:23.375 } 00:32:23.375 ] 00:32:23.375 }' 00:32:23.375 00:58:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:23.375 00:58:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.942 00:58:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:23.942 00:58:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:24.200 00:58:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:32:24.200 00:58:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:32:24.458 [2024-07-25 00:58:46.891559] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:24.458 00:58:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:24.458 00:58:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:24.458 00:58:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:24.458 00:58:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:24.458 00:58:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:24.458 00:58:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:24.459 00:58:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:24.459 00:58:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:24.459 00:58:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:24.459 00:58:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:24.459 00:58:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:24.459 00:58:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:24.459 00:58:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:24.459 "name": "Existed_Raid", 00:32:24.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:24.459 "strip_size_kb": 64, 00:32:24.459 "state": "configuring", 00:32:24.459 "raid_level": "raid5f", 00:32:24.459 "superblock": false, 00:32:24.459 "num_base_bdevs": 3, 00:32:24.459 "num_base_bdevs_discovered": 1, 00:32:24.459 "num_base_bdevs_operational": 3, 00:32:24.459 "base_bdevs_list": [ 00:32:24.459 { 00:32:24.459 "name": "BaseBdev1", 00:32:24.459 "uuid": "1251cc00-5481-4212-a7e3-f47ec0b7b43a", 00:32:24.459 "is_configured": true, 00:32:24.459 "data_offset": 0, 00:32:24.459 "data_size": 65536 00:32:24.459 }, 00:32:24.459 { 00:32:24.459 "name": null, 
00:32:24.459 "uuid": "1c7a7713-f7b9-47bd-8173-fcf1e6c838fa", 00:32:24.459 "is_configured": false, 00:32:24.459 "data_offset": 0, 00:32:24.459 "data_size": 65536 00:32:24.459 }, 00:32:24.459 { 00:32:24.459 "name": null, 00:32:24.459 "uuid": "9d946fe3-4a11-4bbf-821c-6786e8264895", 00:32:24.459 "is_configured": false, 00:32:24.459 "data_offset": 0, 00:32:24.459 "data_size": 65536 00:32:24.459 } 00:32:24.459 ] 00:32:24.459 }' 00:32:24.459 00:58:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:24.459 00:58:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.026 00:58:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:25.026 00:58:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:25.285 00:58:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:32:25.285 00:58:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:32:25.285 [2024-07-25 00:58:47.923743] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:25.544 00:58:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:25.544 00:58:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:25.544 00:58:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:25.544 00:58:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:25.544 00:58:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:25.544 00:58:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:25.544 00:58:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:25.544 00:58:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:25.544 00:58:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:25.544 00:58:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:25.544 00:58:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:25.544 00:58:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:25.544 00:58:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:25.544 "name": "Existed_Raid", 00:32:25.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:25.544 "strip_size_kb": 64, 00:32:25.544 "state": "configuring", 00:32:25.544 "raid_level": "raid5f", 00:32:25.544 "superblock": false, 00:32:25.544 "num_base_bdevs": 3, 00:32:25.544 "num_base_bdevs_discovered": 2, 00:32:25.544 "num_base_bdevs_operational": 3, 00:32:25.544 "base_bdevs_list": [ 00:32:25.544 { 00:32:25.544 "name": "BaseBdev1", 00:32:25.544 "uuid": "1251cc00-5481-4212-a7e3-f47ec0b7b43a", 00:32:25.544 "is_configured": true, 
00:32:25.544 "data_offset": 0, 00:32:25.544 "data_size": 65536 00:32:25.544 }, 00:32:25.544 { 00:32:25.544 "name": null, 00:32:25.544 "uuid": "1c7a7713-f7b9-47bd-8173-fcf1e6c838fa", 00:32:25.544 "is_configured": false, 00:32:25.544 "data_offset": 0, 00:32:25.544 "data_size": 65536 00:32:25.544 }, 00:32:25.544 { 00:32:25.544 "name": "BaseBdev3", 00:32:25.544 "uuid": "9d946fe3-4a11-4bbf-821c-6786e8264895", 00:32:25.544 "is_configured": true, 00:32:25.544 "data_offset": 0, 00:32:25.544 "data_size": 65536 00:32:25.544 } 00:32:25.544 ] 00:32:25.544 }' 00:32:25.544 00:58:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:25.544 00:58:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.124 00:58:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:26.124 00:58:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:26.414 00:58:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:32:26.414 00:58:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:32:26.414 [2024-07-25 00:58:48.952003] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:26.672 00:58:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:26.672 00:58:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:26.672 00:58:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:26.672 00:58:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:26.672 00:58:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:26.672 00:58:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:26.672 00:58:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:26.672 00:58:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:26.673 00:58:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:26.673 00:58:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:26.673 00:58:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:26.673 00:58:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:26.673 00:58:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:26.673 "name": "Existed_Raid", 00:32:26.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:26.673 "strip_size_kb": 64, 00:32:26.673 "state": "configuring", 00:32:26.673 "raid_level": "raid5f", 00:32:26.673 "superblock": false, 00:32:26.673 "num_base_bdevs": 3, 00:32:26.673 "num_base_bdevs_discovered": 1, 00:32:26.673 "num_base_bdevs_operational": 3, 00:32:26.673 "base_bdevs_list": [ 00:32:26.673 { 00:32:26.673 "name": null, 00:32:26.673 "uuid": 
"1251cc00-5481-4212-a7e3-f47ec0b7b43a", 00:32:26.673 "is_configured": false, 00:32:26.673 "data_offset": 0, 00:32:26.673 "data_size": 65536 00:32:26.673 }, 00:32:26.673 { 00:32:26.673 "name": null, 00:32:26.673 "uuid": "1c7a7713-f7b9-47bd-8173-fcf1e6c838fa", 00:32:26.673 "is_configured": false, 00:32:26.673 "data_offset": 0, 00:32:26.673 "data_size": 65536 00:32:26.673 }, 00:32:26.673 { 00:32:26.673 "name": "BaseBdev3", 00:32:26.673 "uuid": "9d946fe3-4a11-4bbf-821c-6786e8264895", 00:32:26.673 "is_configured": true, 00:32:26.673 "data_offset": 0, 00:32:26.673 "data_size": 65536 00:32:26.673 } 00:32:26.673 ] 00:32:26.673 }' 00:32:26.673 00:58:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:26.673 00:58:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.240 00:58:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:27.240 00:58:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:27.498 00:58:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:32:27.498 00:58:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:32:27.757 [2024-07-25 00:58:50.222126] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:27.757 00:58:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:27.757 00:58:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:27.757 00:58:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:27.757 00:58:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:27.757 00:58:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:27.757 00:58:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:27.757 00:58:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:27.757 00:58:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:27.757 00:58:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:27.757 00:58:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:27.757 00:58:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:27.757 00:58:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:28.016 00:58:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:28.016 "name": "Existed_Raid", 00:32:28.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:28.016 "strip_size_kb": 64, 00:32:28.016 "state": "configuring", 00:32:28.016 "raid_level": "raid5f", 00:32:28.016 "superblock": false, 00:32:28.016 "num_base_bdevs": 3, 00:32:28.016 "num_base_bdevs_discovered": 2, 00:32:28.016 
"num_base_bdevs_operational": 3, 00:32:28.016 "base_bdevs_list": [ 00:32:28.016 { 00:32:28.016 "name": null, 00:32:28.016 "uuid": "1251cc00-5481-4212-a7e3-f47ec0b7b43a", 00:32:28.016 "is_configured": false, 00:32:28.016 "data_offset": 0, 00:32:28.016 "data_size": 65536 00:32:28.016 }, 00:32:28.016 { 00:32:28.016 "name": "BaseBdev2", 00:32:28.016 "uuid": "1c7a7713-f7b9-47bd-8173-fcf1e6c838fa", 00:32:28.016 "is_configured": true, 00:32:28.016 "data_offset": 0, 00:32:28.016 "data_size": 65536 00:32:28.016 }, 00:32:28.016 { 00:32:28.016 "name": "BaseBdev3", 00:32:28.016 "uuid": "9d946fe3-4a11-4bbf-821c-6786e8264895", 00:32:28.016 "is_configured": true, 00:32:28.016 "data_offset": 0, 00:32:28.016 "data_size": 65536 00:32:28.016 } 00:32:28.016 ] 00:32:28.016 }' 00:32:28.016 00:58:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:28.016 00:58:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:28.584 00:58:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:28.584 00:58:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:28.884 00:58:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:32:28.884 00:58:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:28.884 00:58:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:32:28.884 00:58:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 1251cc00-5481-4212-a7e3-f47ec0b7b43a 00:32:29.143 [2024-07-25 00:58:51.699633] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:32:29.143 [2024-07-25 00:58:51.699687] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:32:29.143 [2024-07-25 00:58:51.699713] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:32:29.143 [2024-07-25 00:58:51.699815] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:29.143 [2024-07-25 00:58:51.704766] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:32:29.143 [2024-07-25 00:58:51.704791] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008a80 00:32:29.143 [2024-07-25 00:58:51.705056] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:29.143 NewBaseBdev 00:32:29.143 00:58:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:32:29.143 00:58:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:32:29.143 00:58:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:29.143 00:58:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:32:29.143 00:58:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:29.143 00:58:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:32:29.143 00:58:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:29.402 00:58:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:32:29.661 [ 00:32:29.661 { 00:32:29.661 "name": "NewBaseBdev", 00:32:29.661 "aliases": [ 00:32:29.661 "1251cc00-5481-4212-a7e3-f47ec0b7b43a" 00:32:29.661 ], 00:32:29.661 "product_name": "Malloc disk", 00:32:29.661 "block_size": 512, 00:32:29.661 "num_blocks": 65536, 00:32:29.661 "uuid": "1251cc00-5481-4212-a7e3-f47ec0b7b43a", 00:32:29.661 "assigned_rate_limits": { 00:32:29.661 "rw_ios_per_sec": 0, 00:32:29.661 "rw_mbytes_per_sec": 0, 00:32:29.661 "r_mbytes_per_sec": 0, 00:32:29.661 "w_mbytes_per_sec": 0 00:32:29.661 }, 00:32:29.661 "claimed": true, 00:32:29.661 "claim_type": "exclusive_write", 00:32:29.661 "zoned": false, 00:32:29.661 "supported_io_types": { 00:32:29.661 "read": true, 00:32:29.661 "write": true, 00:32:29.661 "unmap": true, 00:32:29.661 "flush": true, 00:32:29.661 "reset": true, 00:32:29.661 "nvme_admin": false, 00:32:29.661 "nvme_io": false, 00:32:29.661 "nvme_io_md": false, 00:32:29.661 "write_zeroes": true, 00:32:29.661 "zcopy": true, 00:32:29.661 "get_zone_info": false, 00:32:29.661 "zone_management": false, 00:32:29.661 "zone_append": false, 00:32:29.661 "compare": false, 00:32:29.661 "compare_and_write": false, 00:32:29.661 "abort": true, 00:32:29.661 "seek_hole": false, 00:32:29.661 "seek_data": false, 00:32:29.661 "copy": true, 00:32:29.661 "nvme_iov_md": false 00:32:29.661 }, 00:32:29.661 "memory_domains": [ 00:32:29.661 { 00:32:29.661 "dma_device_id": "system", 00:32:29.661 "dma_device_type": 1 00:32:29.661 }, 00:32:29.661 { 00:32:29.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:29.661 "dma_device_type": 2 00:32:29.661 } 00:32:29.661 ], 00:32:29.661 "driver_specific": {} 00:32:29.661 } 00:32:29.661 ] 00:32:29.661 00:58:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:32:29.661 00:58:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:32:29.661 00:58:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:29.661 00:58:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:29.661 00:58:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:29.661 00:58:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:29.661 00:58:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:29.661 00:58:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:29.662 00:58:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:29.662 00:58:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:29.662 00:58:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:29.662 00:58:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:29.662 
00:58:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:29.921 00:58:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:29.921 "name": "Existed_Raid", 00:32:29.921 "uuid": "95772ab2-d2c6-42e7-8601-4412fc4f0bf6", 00:32:29.921 "strip_size_kb": 64, 00:32:29.921 "state": "online", 00:32:29.921 "raid_level": "raid5f", 00:32:29.921 "superblock": false, 00:32:29.921 "num_base_bdevs": 3, 00:32:29.921 "num_base_bdevs_discovered": 3, 00:32:29.921 "num_base_bdevs_operational": 3, 00:32:29.921 "base_bdevs_list": [ 00:32:29.921 { 00:32:29.921 "name": "NewBaseBdev", 00:32:29.921 "uuid": "1251cc00-5481-4212-a7e3-f47ec0b7b43a", 00:32:29.921 "is_configured": true, 00:32:29.921 "data_offset": 0, 00:32:29.921 "data_size": 65536 00:32:29.921 }, 00:32:29.921 { 00:32:29.921 "name": "BaseBdev2", 00:32:29.921 "uuid": "1c7a7713-f7b9-47bd-8173-fcf1e6c838fa", 00:32:29.921 "is_configured": true, 00:32:29.921 "data_offset": 0, 00:32:29.921 "data_size": 65536 00:32:29.921 }, 00:32:29.921 { 00:32:29.921 "name": "BaseBdev3", 00:32:29.921 "uuid": "9d946fe3-4a11-4bbf-821c-6786e8264895", 00:32:29.921 "is_configured": true, 00:32:29.921 "data_offset": 0, 00:32:29.921 "data_size": 65536 00:32:29.921 } 00:32:29.921 ] 00:32:29.921 }' 00:32:29.921 00:58:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:29.921 00:58:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.489 00:58:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:32:30.489 00:58:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:32:30.489 00:58:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:30.489 00:58:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:30.489 00:58:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:30.489 00:58:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:32:30.489 00:58:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:32:30.489 00:58:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:30.747 [2024-07-25 00:58:53.176009] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:30.747 00:58:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:30.747 "name": "Existed_Raid", 00:32:30.747 "aliases": [ 00:32:30.747 "95772ab2-d2c6-42e7-8601-4412fc4f0bf6" 00:32:30.747 ], 00:32:30.747 "product_name": "Raid Volume", 00:32:30.747 "block_size": 512, 00:32:30.747 "num_blocks": 131072, 00:32:30.747 "uuid": "95772ab2-d2c6-42e7-8601-4412fc4f0bf6", 00:32:30.747 "assigned_rate_limits": { 00:32:30.747 "rw_ios_per_sec": 0, 00:32:30.747 "rw_mbytes_per_sec": 0, 00:32:30.747 "r_mbytes_per_sec": 0, 00:32:30.747 "w_mbytes_per_sec": 0 00:32:30.747 }, 00:32:30.747 "claimed": false, 00:32:30.747 "zoned": false, 00:32:30.747 "supported_io_types": { 00:32:30.747 "read": true, 00:32:30.747 "write": true, 00:32:30.747 "unmap": false, 00:32:30.747 "flush": false, 00:32:30.747 "reset": true, 00:32:30.747 "nvme_admin": false, 00:32:30.747 "nvme_io": false, 00:32:30.747 
"nvme_io_md": false, 00:32:30.747 "write_zeroes": true, 00:32:30.747 "zcopy": false, 00:32:30.747 "get_zone_info": false, 00:32:30.747 "zone_management": false, 00:32:30.747 "zone_append": false, 00:32:30.747 "compare": false, 00:32:30.747 "compare_and_write": false, 00:32:30.747 "abort": false, 00:32:30.747 "seek_hole": false, 00:32:30.747 "seek_data": false, 00:32:30.747 "copy": false, 00:32:30.747 "nvme_iov_md": false 00:32:30.747 }, 00:32:30.747 "driver_specific": { 00:32:30.747 "raid": { 00:32:30.747 "uuid": "95772ab2-d2c6-42e7-8601-4412fc4f0bf6", 00:32:30.747 "strip_size_kb": 64, 00:32:30.747 "state": "online", 00:32:30.747 "raid_level": "raid5f", 00:32:30.747 "superblock": false, 00:32:30.747 "num_base_bdevs": 3, 00:32:30.747 "num_base_bdevs_discovered": 3, 00:32:30.747 "num_base_bdevs_operational": 3, 00:32:30.747 "base_bdevs_list": [ 00:32:30.747 { 00:32:30.747 "name": "NewBaseBdev", 00:32:30.747 "uuid": "1251cc00-5481-4212-a7e3-f47ec0b7b43a", 00:32:30.748 "is_configured": true, 00:32:30.748 "data_offset": 0, 00:32:30.748 "data_size": 65536 00:32:30.748 }, 00:32:30.748 { 00:32:30.748 "name": "BaseBdev2", 00:32:30.748 "uuid": "1c7a7713-f7b9-47bd-8173-fcf1e6c838fa", 00:32:30.748 "is_configured": true, 00:32:30.748 "data_offset": 0, 00:32:30.748 "data_size": 65536 00:32:30.748 }, 00:32:30.748 { 00:32:30.748 "name": "BaseBdev3", 00:32:30.748 "uuid": "9d946fe3-4a11-4bbf-821c-6786e8264895", 00:32:30.748 "is_configured": true, 00:32:30.748 "data_offset": 0, 00:32:30.748 "data_size": 65536 00:32:30.748 } 00:32:30.748 ] 00:32:30.748 } 00:32:30.748 } 00:32:30.748 }' 00:32:30.748 00:58:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:30.748 00:58:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:32:30.748 BaseBdev2 00:32:30.748 BaseBdev3' 00:32:30.748 00:58:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:30.748 00:58:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:32:30.748 00:58:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:31.006 00:58:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:31.006 "name": "NewBaseBdev", 00:32:31.006 "aliases": [ 00:32:31.006 "1251cc00-5481-4212-a7e3-f47ec0b7b43a" 00:32:31.006 ], 00:32:31.006 "product_name": "Malloc disk", 00:32:31.006 "block_size": 512, 00:32:31.006 "num_blocks": 65536, 00:32:31.006 "uuid": "1251cc00-5481-4212-a7e3-f47ec0b7b43a", 00:32:31.006 "assigned_rate_limits": { 00:32:31.006 "rw_ios_per_sec": 0, 00:32:31.006 "rw_mbytes_per_sec": 0, 00:32:31.006 "r_mbytes_per_sec": 0, 00:32:31.006 "w_mbytes_per_sec": 0 00:32:31.006 }, 00:32:31.006 "claimed": true, 00:32:31.006 "claim_type": "exclusive_write", 00:32:31.006 "zoned": false, 00:32:31.006 "supported_io_types": { 00:32:31.006 "read": true, 00:32:31.006 "write": true, 00:32:31.006 "unmap": true, 00:32:31.006 "flush": true, 00:32:31.006 "reset": true, 00:32:31.006 "nvme_admin": false, 00:32:31.006 "nvme_io": false, 00:32:31.006 "nvme_io_md": false, 00:32:31.006 "write_zeroes": true, 00:32:31.006 "zcopy": true, 00:32:31.006 "get_zone_info": false, 00:32:31.006 "zone_management": false, 00:32:31.006 "zone_append": false, 00:32:31.006 "compare": false, 00:32:31.006 
"compare_and_write": false, 00:32:31.006 "abort": true, 00:32:31.006 "seek_hole": false, 00:32:31.006 "seek_data": false, 00:32:31.006 "copy": true, 00:32:31.006 "nvme_iov_md": false 00:32:31.006 }, 00:32:31.006 "memory_domains": [ 00:32:31.006 { 00:32:31.006 "dma_device_id": "system", 00:32:31.006 "dma_device_type": 1 00:32:31.006 }, 00:32:31.006 { 00:32:31.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:31.006 "dma_device_type": 2 00:32:31.006 } 00:32:31.006 ], 00:32:31.006 "driver_specific": {} 00:32:31.006 }' 00:32:31.006 00:58:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:31.006 00:58:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:31.006 00:58:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:31.006 00:58:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:31.006 00:58:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:31.006 00:58:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:31.006 00:58:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:31.265 00:58:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:31.265 00:58:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:31.265 00:58:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:31.265 00:58:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:31.265 00:58:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:31.265 00:58:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:31.265 00:58:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:31.265 00:58:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:32:31.523 00:58:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:31.523 "name": "BaseBdev2", 00:32:31.523 "aliases": [ 00:32:31.523 "1c7a7713-f7b9-47bd-8173-fcf1e6c838fa" 00:32:31.523 ], 00:32:31.523 "product_name": "Malloc disk", 00:32:31.523 "block_size": 512, 00:32:31.523 "num_blocks": 65536, 00:32:31.523 "uuid": "1c7a7713-f7b9-47bd-8173-fcf1e6c838fa", 00:32:31.523 "assigned_rate_limits": { 00:32:31.523 "rw_ios_per_sec": 0, 00:32:31.523 "rw_mbytes_per_sec": 0, 00:32:31.523 "r_mbytes_per_sec": 0, 00:32:31.523 "w_mbytes_per_sec": 0 00:32:31.523 }, 00:32:31.523 "claimed": true, 00:32:31.523 "claim_type": "exclusive_write", 00:32:31.523 "zoned": false, 00:32:31.523 "supported_io_types": { 00:32:31.523 "read": true, 00:32:31.523 "write": true, 00:32:31.523 "unmap": true, 00:32:31.523 "flush": true, 00:32:31.523 "reset": true, 00:32:31.523 "nvme_admin": false, 00:32:31.523 "nvme_io": false, 00:32:31.523 "nvme_io_md": false, 00:32:31.523 "write_zeroes": true, 00:32:31.523 "zcopy": true, 00:32:31.523 "get_zone_info": false, 00:32:31.523 "zone_management": false, 00:32:31.523 "zone_append": false, 00:32:31.523 "compare": false, 00:32:31.523 "compare_and_write": false, 00:32:31.523 "abort": true, 00:32:31.523 "seek_hole": false, 00:32:31.523 "seek_data": false, 00:32:31.523 "copy": true, 00:32:31.523 
"nvme_iov_md": false 00:32:31.523 }, 00:32:31.523 "memory_domains": [ 00:32:31.523 { 00:32:31.523 "dma_device_id": "system", 00:32:31.523 "dma_device_type": 1 00:32:31.523 }, 00:32:31.523 { 00:32:31.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:31.523 "dma_device_type": 2 00:32:31.523 } 00:32:31.523 ], 00:32:31.523 "driver_specific": {} 00:32:31.523 }' 00:32:31.523 00:58:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:31.523 00:58:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:31.523 00:58:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:31.523 00:58:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:31.523 00:58:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:31.523 00:58:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:31.523 00:58:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:31.781 00:58:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:31.781 00:58:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:31.781 00:58:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:31.781 00:58:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:31.781 00:58:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:31.781 00:58:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:31.781 00:58:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:32:31.781 00:58:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:32.040 00:58:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:32.040 "name": "BaseBdev3", 00:32:32.040 "aliases": [ 00:32:32.040 "9d946fe3-4a11-4bbf-821c-6786e8264895" 00:32:32.040 ], 00:32:32.040 "product_name": "Malloc disk", 00:32:32.040 "block_size": 512, 00:32:32.040 "num_blocks": 65536, 00:32:32.040 "uuid": "9d946fe3-4a11-4bbf-821c-6786e8264895", 00:32:32.040 "assigned_rate_limits": { 00:32:32.040 "rw_ios_per_sec": 0, 00:32:32.040 "rw_mbytes_per_sec": 0, 00:32:32.040 "r_mbytes_per_sec": 0, 00:32:32.040 "w_mbytes_per_sec": 0 00:32:32.040 }, 00:32:32.040 "claimed": true, 00:32:32.040 "claim_type": "exclusive_write", 00:32:32.040 "zoned": false, 00:32:32.040 "supported_io_types": { 00:32:32.040 "read": true, 00:32:32.040 "write": true, 00:32:32.040 "unmap": true, 00:32:32.040 "flush": true, 00:32:32.040 "reset": true, 00:32:32.040 "nvme_admin": false, 00:32:32.040 "nvme_io": false, 00:32:32.040 "nvme_io_md": false, 00:32:32.040 "write_zeroes": true, 00:32:32.040 "zcopy": true, 00:32:32.040 "get_zone_info": false, 00:32:32.040 "zone_management": false, 00:32:32.040 "zone_append": false, 00:32:32.040 "compare": false, 00:32:32.040 "compare_and_write": false, 00:32:32.040 "abort": true, 00:32:32.040 "seek_hole": false, 00:32:32.040 "seek_data": false, 00:32:32.040 "copy": true, 00:32:32.040 "nvme_iov_md": false 00:32:32.040 }, 00:32:32.040 "memory_domains": [ 00:32:32.040 { 00:32:32.040 "dma_device_id": "system", 00:32:32.040 "dma_device_type": 1 
00:32:32.040 }, 00:32:32.040 { 00:32:32.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:32.040 "dma_device_type": 2 00:32:32.040 } 00:32:32.040 ], 00:32:32.040 "driver_specific": {} 00:32:32.040 }' 00:32:32.040 00:58:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:32.040 00:58:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:32.040 00:58:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:32.299 00:58:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:32.299 00:58:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:32.300 00:58:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:32.300 00:58:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:32.300 00:58:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:32.300 00:58:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:32.300 00:58:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:32.300 00:58:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:32.558 00:58:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:32.558 00:58:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:32.558 [2024-07-25 00:58:55.124197] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:32.558 [2024-07-25 00:58:55.124354] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:32.558 [2024-07-25 00:58:55.124575] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:32.558 [2024-07-25 00:58:55.124914] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:32.558 [2024-07-25 00:58:55.124993] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name Existed_Raid, state offline 00:32:32.559 00:58:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 150401 00:32:32.559 00:58:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 150401 ']' 00:32:32.559 00:58:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # kill -0 150401 00:32:32.559 00:58:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@953 -- # uname 00:32:32.559 00:58:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:32.559 00:58:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 150401 00:32:32.559 00:58:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:32.559 00:58:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:32.559 00:58:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 150401' 00:32:32.559 killing process with pid 150401 00:32:32.559 00:58:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@967 -- # kill 150401 
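Every verify_raid_bdev_state call in the trace above boils down to the same pattern: dump all raid bdevs over the test RPC socket, pick out Existed_Raid with jq, and compare the reported state, raid level, strip size and base-bdev counts against the expected values. As a rough standalone sketch of that pattern (this is not the verify_raid_bdev_state helper from bdev_raid.sh itself; check_raid_state is an illustrative name, and the socket path and JSON field names are simply the ones visible in this run):

  #!/usr/bin/env bash
  # Sketch only: mirrors the state check this test drives through rpc.py,
  # not the bdev_raid.sh helper. Assumes bdev_svc is listening on the socket.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  check_raid_state() {
      local name=$1 state=$2 level=$3 strip=$4 operational=$5 info
      # bdev_raid_get_bdevs returns a JSON array; keep only the bdev we care about.
      info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
      [[ -n $info ]] || { echo "raid bdev $name not found" >&2; return 1; }
      [[ $(jq -r .state         <<< "$info") == "$state" ]] &&
      [[ $(jq -r .raid_level    <<< "$info") == "$level" ]] &&
      [[ $(jq -r .strip_size_kb <<< "$info") == "$strip" ]] &&
      [[ $(jq -r .num_base_bdevs_operational <<< "$info") == "$operational" ]]
  }

  # The raid5f volume exercised above should end up online with 64 KiB strips
  # and all 3 base bdevs operational.
  check_raid_state Existed_Raid online raid5f 64 3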
00:32:32.559 [2024-07-25 00:58:55.177713] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:32.559 00:58:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # wait 150401 00:32:33.127 [2024-07-25 00:58:55.480933] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:34.582 ************************************ 00:32:34.583 END TEST raid5f_state_function_test 00:32:34.583 ************************************ 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:32:34.583 00:32:34.583 real 0m27.597s 00:32:34.583 user 0m49.251s 00:32:34.583 sys 0m4.353s 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:34.583 00:58:56 bdev_raid -- bdev/bdev_raid.sh@887 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:32:34.583 00:58:56 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:32:34.583 00:58:56 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:34.583 00:58:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:34.583 ************************************ 00:32:34.583 START TEST raid5f_state_function_test_sb 00:32:34.583 ************************************ 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid5f 3 true 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@225 
-- # local raid_bdev_name=Existed_Raid 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=151343 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 151343' 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:32:34.583 Process raid pid: 151343 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 151343 /var/tmp/spdk-raid.sock 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 151343 ']' 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:34.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:34.583 00:58:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:34.583 [2024-07-25 00:58:56.968030] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
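The _sb variant starting here drives the same state machine, but bdev_raid_create is now passed -s, so each base bdev reserves room for an on-disk raid superblock; that is why the JSON dumps later in this run report data_offset 2048 and data_size 63488 instead of 0 and 65536 for the same 65536-block malloc disks. A rough manual reproduction of the setup being exercised, using only RPC calls that appear in this log (paths and sizes copied from this run; a sketch, not a definitive recipe):

  # Sketch only: mirrors the RPC sequence from this log, not the autotest script itself.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # Registering the raid first is fine: it sits in the "configuring" state until
  # every named base bdev exists and has been claimed.
  "$rpc" -s "$sock" bdev_raid_create -z 64 -s -r raid5f \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

  # 32 MiB malloc disks with 512-byte blocks, i.e. 65536 blocks each.
  for b in BaseBdev1 BaseBdev2 BaseBdev3; do
      "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "$b"
  done

  # Once all three are claimed the volume should report "online"; in this run the
  # superblock region takes 2048 blocks per base bdev, leaving 63488 data blocks.
  "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'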
00:32:34.583 [2024-07-25 00:58:56.968438] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:34.583 [2024-07-25 00:58:57.152991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.842 [2024-07-25 00:58:57.388116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:35.101 [2024-07-25 00:58:57.586969] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:35.360 00:58:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:35.360 00:58:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:32:35.360 00:58:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:32:35.620 [2024-07-25 00:58:58.069935] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:35.620 [2024-07-25 00:58:58.070152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:35.620 [2024-07-25 00:58:58.070272] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:35.620 [2024-07-25 00:58:58.070335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:35.620 [2024-07-25 00:58:58.070422] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:35.620 [2024-07-25 00:58:58.070470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:35.620 00:58:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:35.620 00:58:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:35.620 00:58:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:35.620 00:58:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:35.620 00:58:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:35.620 00:58:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:35.620 00:58:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:35.620 00:58:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:35.620 00:58:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:35.620 00:58:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:35.620 00:58:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:35.620 00:58:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:35.880 00:58:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:35.880 "name": 
"Existed_Raid", 00:32:35.880 "uuid": "77192851-4801-4768-bfd5-df170465d6ab", 00:32:35.880 "strip_size_kb": 64, 00:32:35.880 "state": "configuring", 00:32:35.880 "raid_level": "raid5f", 00:32:35.880 "superblock": true, 00:32:35.880 "num_base_bdevs": 3, 00:32:35.880 "num_base_bdevs_discovered": 0, 00:32:35.880 "num_base_bdevs_operational": 3, 00:32:35.880 "base_bdevs_list": [ 00:32:35.880 { 00:32:35.880 "name": "BaseBdev1", 00:32:35.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:35.880 "is_configured": false, 00:32:35.880 "data_offset": 0, 00:32:35.880 "data_size": 0 00:32:35.880 }, 00:32:35.880 { 00:32:35.880 "name": "BaseBdev2", 00:32:35.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:35.880 "is_configured": false, 00:32:35.880 "data_offset": 0, 00:32:35.880 "data_size": 0 00:32:35.880 }, 00:32:35.880 { 00:32:35.880 "name": "BaseBdev3", 00:32:35.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:35.880 "is_configured": false, 00:32:35.880 "data_offset": 0, 00:32:35.880 "data_size": 0 00:32:35.880 } 00:32:35.880 ] 00:32:35.880 }' 00:32:35.880 00:58:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:35.880 00:58:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:36.448 00:58:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:36.448 [2024-07-25 00:58:59.061961] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:36.448 [2024-07-25 00:58:59.062133] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:32:36.448 00:58:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:32:36.707 [2024-07-25 00:58:59.330063] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:36.707 [2024-07-25 00:58:59.330281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:36.707 [2024-07-25 00:58:59.330380] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:36.707 [2024-07-25 00:58:59.330465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:36.707 [2024-07-25 00:58:59.330575] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:36.707 [2024-07-25 00:58:59.330626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:36.707 00:58:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:32:37.276 [2024-07-25 00:58:59.619525] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:37.276 BaseBdev1 00:32:37.276 00:58:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:32:37.276 00:58:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:32:37.276 00:58:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:37.276 00:58:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:32:37.276 00:58:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:37.276 00:58:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:37.276 00:58:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:37.276 00:58:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:37.535 [ 00:32:37.535 { 00:32:37.535 "name": "BaseBdev1", 00:32:37.535 "aliases": [ 00:32:37.535 "9d4fb577-5d61-4c6a-a642-d7d91d75769c" 00:32:37.535 ], 00:32:37.535 "product_name": "Malloc disk", 00:32:37.535 "block_size": 512, 00:32:37.535 "num_blocks": 65536, 00:32:37.535 "uuid": "9d4fb577-5d61-4c6a-a642-d7d91d75769c", 00:32:37.535 "assigned_rate_limits": { 00:32:37.535 "rw_ios_per_sec": 0, 00:32:37.535 "rw_mbytes_per_sec": 0, 00:32:37.535 "r_mbytes_per_sec": 0, 00:32:37.535 "w_mbytes_per_sec": 0 00:32:37.535 }, 00:32:37.535 "claimed": true, 00:32:37.535 "claim_type": "exclusive_write", 00:32:37.535 "zoned": false, 00:32:37.535 "supported_io_types": { 00:32:37.535 "read": true, 00:32:37.535 "write": true, 00:32:37.535 "unmap": true, 00:32:37.535 "flush": true, 00:32:37.535 "reset": true, 00:32:37.535 "nvme_admin": false, 00:32:37.535 "nvme_io": false, 00:32:37.535 "nvme_io_md": false, 00:32:37.535 "write_zeroes": true, 00:32:37.535 "zcopy": true, 00:32:37.535 "get_zone_info": false, 00:32:37.535 "zone_management": false, 00:32:37.535 "zone_append": false, 00:32:37.535 "compare": false, 00:32:37.535 "compare_and_write": false, 00:32:37.535 "abort": true, 00:32:37.535 "seek_hole": false, 00:32:37.535 "seek_data": false, 00:32:37.535 "copy": true, 00:32:37.535 "nvme_iov_md": false 00:32:37.535 }, 00:32:37.535 "memory_domains": [ 00:32:37.535 { 00:32:37.535 "dma_device_id": "system", 00:32:37.535 "dma_device_type": 1 00:32:37.535 }, 00:32:37.535 { 00:32:37.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:37.535 "dma_device_type": 2 00:32:37.535 } 00:32:37.535 ], 00:32:37.535 "driver_specific": {} 00:32:37.535 } 00:32:37.535 ] 00:32:37.535 00:58:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:32:37.535 00:58:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:37.535 00:58:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:37.535 00:58:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:37.535 00:58:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:37.535 00:58:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:37.535 00:58:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:37.535 00:58:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:37.535 00:58:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:37.535 00:59:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:37.535 00:59:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:37.535 00:59:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:37.535 00:59:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:37.793 00:59:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:37.793 "name": "Existed_Raid", 00:32:37.793 "uuid": "22710223-9586-4022-8382-147d21386936", 00:32:37.794 "strip_size_kb": 64, 00:32:37.794 "state": "configuring", 00:32:37.794 "raid_level": "raid5f", 00:32:37.794 "superblock": true, 00:32:37.794 "num_base_bdevs": 3, 00:32:37.794 "num_base_bdevs_discovered": 1, 00:32:37.794 "num_base_bdevs_operational": 3, 00:32:37.794 "base_bdevs_list": [ 00:32:37.794 { 00:32:37.794 "name": "BaseBdev1", 00:32:37.794 "uuid": "9d4fb577-5d61-4c6a-a642-d7d91d75769c", 00:32:37.794 "is_configured": true, 00:32:37.794 "data_offset": 2048, 00:32:37.794 "data_size": 63488 00:32:37.794 }, 00:32:37.794 { 00:32:37.794 "name": "BaseBdev2", 00:32:37.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:37.794 "is_configured": false, 00:32:37.794 "data_offset": 0, 00:32:37.794 "data_size": 0 00:32:37.794 }, 00:32:37.794 { 00:32:37.794 "name": "BaseBdev3", 00:32:37.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:37.794 "is_configured": false, 00:32:37.794 "data_offset": 0, 00:32:37.794 "data_size": 0 00:32:37.794 } 00:32:37.794 ] 00:32:37.794 }' 00:32:37.794 00:59:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:37.794 00:59:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.361 00:59:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:38.361 [2024-07-25 00:59:00.963796] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:38.361 [2024-07-25 00:59:00.964057] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:32:38.361 00:59:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:32:38.620 [2024-07-25 00:59:01.207877] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:38.620 [2024-07-25 00:59:01.209983] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:38.620 [2024-07-25 00:59:01.210180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:38.620 [2024-07-25 00:59:01.210278] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:38.620 [2024-07-25 00:59:01.210402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:38.620 00:59:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:32:38.620 00:59:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:38.620 00:59:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:38.620 00:59:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:38.620 00:59:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:38.620 00:59:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:38.620 00:59:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:38.620 00:59:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:38.620 00:59:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:38.620 00:59:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:38.620 00:59:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:38.620 00:59:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:38.620 00:59:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:38.620 00:59:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:38.878 00:59:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:38.878 "name": "Existed_Raid", 00:32:38.878 "uuid": "05d8122a-7b91-42f4-89c1-9a5dcde934f5", 00:32:38.878 "strip_size_kb": 64, 00:32:38.878 "state": "configuring", 00:32:38.878 "raid_level": "raid5f", 00:32:38.878 "superblock": true, 00:32:38.878 "num_base_bdevs": 3, 00:32:38.878 "num_base_bdevs_discovered": 1, 00:32:38.878 "num_base_bdevs_operational": 3, 00:32:38.878 "base_bdevs_list": [ 00:32:38.878 { 00:32:38.878 "name": "BaseBdev1", 00:32:38.878 "uuid": "9d4fb577-5d61-4c6a-a642-d7d91d75769c", 00:32:38.878 "is_configured": true, 00:32:38.878 "data_offset": 2048, 00:32:38.878 "data_size": 63488 00:32:38.878 }, 00:32:38.878 { 00:32:38.878 "name": "BaseBdev2", 00:32:38.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:38.878 "is_configured": false, 00:32:38.878 "data_offset": 0, 00:32:38.878 "data_size": 0 00:32:38.878 }, 00:32:38.878 { 00:32:38.878 "name": "BaseBdev3", 00:32:38.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:38.878 "is_configured": false, 00:32:38.878 "data_offset": 0, 00:32:38.878 "data_size": 0 00:32:38.878 } 00:32:38.878 ] 00:32:38.878 }' 00:32:38.878 00:59:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:38.878 00:59:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.445 00:59:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:32:39.704 [2024-07-25 00:59:02.264169] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:39.704 BaseBdev2 00:32:39.704 00:59:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:32:39.704 00:59:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:32:39.704 00:59:02 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:39.704 00:59:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:32:39.704 00:59:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:39.704 00:59:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:39.704 00:59:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:39.963 00:59:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:40.222 [ 00:32:40.222 { 00:32:40.222 "name": "BaseBdev2", 00:32:40.222 "aliases": [ 00:32:40.222 "470d03ee-210f-40f8-a01b-9c33981db5b3" 00:32:40.222 ], 00:32:40.222 "product_name": "Malloc disk", 00:32:40.222 "block_size": 512, 00:32:40.222 "num_blocks": 65536, 00:32:40.222 "uuid": "470d03ee-210f-40f8-a01b-9c33981db5b3", 00:32:40.222 "assigned_rate_limits": { 00:32:40.222 "rw_ios_per_sec": 0, 00:32:40.222 "rw_mbytes_per_sec": 0, 00:32:40.222 "r_mbytes_per_sec": 0, 00:32:40.222 "w_mbytes_per_sec": 0 00:32:40.222 }, 00:32:40.222 "claimed": true, 00:32:40.222 "claim_type": "exclusive_write", 00:32:40.222 "zoned": false, 00:32:40.222 "supported_io_types": { 00:32:40.222 "read": true, 00:32:40.222 "write": true, 00:32:40.222 "unmap": true, 00:32:40.222 "flush": true, 00:32:40.222 "reset": true, 00:32:40.222 "nvme_admin": false, 00:32:40.222 "nvme_io": false, 00:32:40.222 "nvme_io_md": false, 00:32:40.222 "write_zeroes": true, 00:32:40.222 "zcopy": true, 00:32:40.222 "get_zone_info": false, 00:32:40.222 "zone_management": false, 00:32:40.222 "zone_append": false, 00:32:40.222 "compare": false, 00:32:40.222 "compare_and_write": false, 00:32:40.222 "abort": true, 00:32:40.222 "seek_hole": false, 00:32:40.222 "seek_data": false, 00:32:40.222 "copy": true, 00:32:40.222 "nvme_iov_md": false 00:32:40.222 }, 00:32:40.222 "memory_domains": [ 00:32:40.222 { 00:32:40.222 "dma_device_id": "system", 00:32:40.222 "dma_device_type": 1 00:32:40.222 }, 00:32:40.222 { 00:32:40.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:40.222 "dma_device_type": 2 00:32:40.222 } 00:32:40.222 ], 00:32:40.222 "driver_specific": {} 00:32:40.222 } 00:32:40.222 ] 00:32:40.222 00:59:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:32:40.222 00:59:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:32:40.222 00:59:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:40.222 00:59:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:40.222 00:59:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:40.222 00:59:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:40.222 00:59:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:40.222 00:59:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:40.222 00:59:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:32:40.222 00:59:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:40.222 00:59:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:40.222 00:59:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:40.222 00:59:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:40.222 00:59:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:40.222 00:59:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:40.482 00:59:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:40.482 "name": "Existed_Raid", 00:32:40.482 "uuid": "05d8122a-7b91-42f4-89c1-9a5dcde934f5", 00:32:40.482 "strip_size_kb": 64, 00:32:40.482 "state": "configuring", 00:32:40.482 "raid_level": "raid5f", 00:32:40.482 "superblock": true, 00:32:40.482 "num_base_bdevs": 3, 00:32:40.482 "num_base_bdevs_discovered": 2, 00:32:40.482 "num_base_bdevs_operational": 3, 00:32:40.482 "base_bdevs_list": [ 00:32:40.482 { 00:32:40.482 "name": "BaseBdev1", 00:32:40.482 "uuid": "9d4fb577-5d61-4c6a-a642-d7d91d75769c", 00:32:40.482 "is_configured": true, 00:32:40.482 "data_offset": 2048, 00:32:40.482 "data_size": 63488 00:32:40.482 }, 00:32:40.482 { 00:32:40.482 "name": "BaseBdev2", 00:32:40.482 "uuid": "470d03ee-210f-40f8-a01b-9c33981db5b3", 00:32:40.482 "is_configured": true, 00:32:40.482 "data_offset": 2048, 00:32:40.482 "data_size": 63488 00:32:40.482 }, 00:32:40.482 { 00:32:40.482 "name": "BaseBdev3", 00:32:40.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:40.482 "is_configured": false, 00:32:40.482 "data_offset": 0, 00:32:40.482 "data_size": 0 00:32:40.482 } 00:32:40.482 ] 00:32:40.482 }' 00:32:40.482 00:59:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:40.482 00:59:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.741 00:59:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:32:41.306 [2024-07-25 00:59:03.653719] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:41.306 [2024-07-25 00:59:03.654190] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:32:41.306 [2024-07-25 00:59:03.654329] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:41.306 [2024-07-25 00:59:03.654488] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:32:41.306 BaseBdev3 00:32:41.306 [2024-07-25 00:59:03.660361] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:32:41.306 [2024-07-25 00:59:03.660477] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:32:41.306 [2024-07-25 00:59:03.660793] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:41.306 00:59:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:32:41.306 00:59:03 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:32:41.306 00:59:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:41.306 00:59:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:32:41.306 00:59:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:41.306 00:59:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:41.306 00:59:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:41.306 00:59:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:41.565 [ 00:32:41.565 { 00:32:41.565 "name": "BaseBdev3", 00:32:41.565 "aliases": [ 00:32:41.565 "101de2fb-b3ff-44f1-aac9-1c99c0561cb5" 00:32:41.565 ], 00:32:41.565 "product_name": "Malloc disk", 00:32:41.565 "block_size": 512, 00:32:41.565 "num_blocks": 65536, 00:32:41.565 "uuid": "101de2fb-b3ff-44f1-aac9-1c99c0561cb5", 00:32:41.565 "assigned_rate_limits": { 00:32:41.565 "rw_ios_per_sec": 0, 00:32:41.565 "rw_mbytes_per_sec": 0, 00:32:41.565 "r_mbytes_per_sec": 0, 00:32:41.565 "w_mbytes_per_sec": 0 00:32:41.565 }, 00:32:41.565 "claimed": true, 00:32:41.565 "claim_type": "exclusive_write", 00:32:41.565 "zoned": false, 00:32:41.565 "supported_io_types": { 00:32:41.565 "read": true, 00:32:41.565 "write": true, 00:32:41.565 "unmap": true, 00:32:41.565 "flush": true, 00:32:41.565 "reset": true, 00:32:41.565 "nvme_admin": false, 00:32:41.565 "nvme_io": false, 00:32:41.565 "nvme_io_md": false, 00:32:41.565 "write_zeroes": true, 00:32:41.565 "zcopy": true, 00:32:41.565 "get_zone_info": false, 00:32:41.565 "zone_management": false, 00:32:41.565 "zone_append": false, 00:32:41.565 "compare": false, 00:32:41.565 "compare_and_write": false, 00:32:41.565 "abort": true, 00:32:41.565 "seek_hole": false, 00:32:41.565 "seek_data": false, 00:32:41.565 "copy": true, 00:32:41.565 "nvme_iov_md": false 00:32:41.565 }, 00:32:41.565 "memory_domains": [ 00:32:41.565 { 00:32:41.565 "dma_device_id": "system", 00:32:41.565 "dma_device_type": 1 00:32:41.565 }, 00:32:41.565 { 00:32:41.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:41.565 "dma_device_type": 2 00:32:41.565 } 00:32:41.565 ], 00:32:41.565 "driver_specific": {} 00:32:41.565 } 00:32:41.565 ] 00:32:41.565 00:59:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:32:41.565 00:59:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:32:41.565 00:59:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:41.565 00:59:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:32:41.565 00:59:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:41.565 00:59:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:41.565 00:59:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:41.565 00:59:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:41.565 00:59:04 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:41.565 00:59:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:41.565 00:59:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:41.565 00:59:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:41.565 00:59:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:41.565 00:59:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:41.565 00:59:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:41.824 00:59:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:41.824 "name": "Existed_Raid", 00:32:41.824 "uuid": "05d8122a-7b91-42f4-89c1-9a5dcde934f5", 00:32:41.824 "strip_size_kb": 64, 00:32:41.824 "state": "online", 00:32:41.824 "raid_level": "raid5f", 00:32:41.824 "superblock": true, 00:32:41.824 "num_base_bdevs": 3, 00:32:41.824 "num_base_bdevs_discovered": 3, 00:32:41.824 "num_base_bdevs_operational": 3, 00:32:41.824 "base_bdevs_list": [ 00:32:41.824 { 00:32:41.824 "name": "BaseBdev1", 00:32:41.824 "uuid": "9d4fb577-5d61-4c6a-a642-d7d91d75769c", 00:32:41.824 "is_configured": true, 00:32:41.824 "data_offset": 2048, 00:32:41.824 "data_size": 63488 00:32:41.824 }, 00:32:41.824 { 00:32:41.824 "name": "BaseBdev2", 00:32:41.824 "uuid": "470d03ee-210f-40f8-a01b-9c33981db5b3", 00:32:41.824 "is_configured": true, 00:32:41.824 "data_offset": 2048, 00:32:41.824 "data_size": 63488 00:32:41.824 }, 00:32:41.824 { 00:32:41.824 "name": "BaseBdev3", 00:32:41.824 "uuid": "101de2fb-b3ff-44f1-aac9-1c99c0561cb5", 00:32:41.824 "is_configured": true, 00:32:41.824 "data_offset": 2048, 00:32:41.824 "data_size": 63488 00:32:41.824 } 00:32:41.824 ] 00:32:41.824 }' 00:32:41.824 00:59:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:41.824 00:59:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:42.391 00:59:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:32:42.391 00:59:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:32:42.391 00:59:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:42.391 00:59:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:42.391 00:59:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:42.391 00:59:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:32:42.391 00:59:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:42.391 00:59:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:32:42.650 [2024-07-25 00:59:05.196316] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:42.650 00:59:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:42.650 
"name": "Existed_Raid", 00:32:42.650 "aliases": [ 00:32:42.650 "05d8122a-7b91-42f4-89c1-9a5dcde934f5" 00:32:42.650 ], 00:32:42.650 "product_name": "Raid Volume", 00:32:42.650 "block_size": 512, 00:32:42.650 "num_blocks": 126976, 00:32:42.650 "uuid": "05d8122a-7b91-42f4-89c1-9a5dcde934f5", 00:32:42.650 "assigned_rate_limits": { 00:32:42.650 "rw_ios_per_sec": 0, 00:32:42.650 "rw_mbytes_per_sec": 0, 00:32:42.650 "r_mbytes_per_sec": 0, 00:32:42.650 "w_mbytes_per_sec": 0 00:32:42.650 }, 00:32:42.650 "claimed": false, 00:32:42.650 "zoned": false, 00:32:42.650 "supported_io_types": { 00:32:42.650 "read": true, 00:32:42.650 "write": true, 00:32:42.650 "unmap": false, 00:32:42.650 "flush": false, 00:32:42.650 "reset": true, 00:32:42.650 "nvme_admin": false, 00:32:42.650 "nvme_io": false, 00:32:42.650 "nvme_io_md": false, 00:32:42.650 "write_zeroes": true, 00:32:42.650 "zcopy": false, 00:32:42.650 "get_zone_info": false, 00:32:42.650 "zone_management": false, 00:32:42.650 "zone_append": false, 00:32:42.650 "compare": false, 00:32:42.650 "compare_and_write": false, 00:32:42.650 "abort": false, 00:32:42.650 "seek_hole": false, 00:32:42.650 "seek_data": false, 00:32:42.650 "copy": false, 00:32:42.650 "nvme_iov_md": false 00:32:42.650 }, 00:32:42.650 "driver_specific": { 00:32:42.650 "raid": { 00:32:42.650 "uuid": "05d8122a-7b91-42f4-89c1-9a5dcde934f5", 00:32:42.650 "strip_size_kb": 64, 00:32:42.650 "state": "online", 00:32:42.650 "raid_level": "raid5f", 00:32:42.650 "superblock": true, 00:32:42.650 "num_base_bdevs": 3, 00:32:42.650 "num_base_bdevs_discovered": 3, 00:32:42.650 "num_base_bdevs_operational": 3, 00:32:42.650 "base_bdevs_list": [ 00:32:42.650 { 00:32:42.650 "name": "BaseBdev1", 00:32:42.650 "uuid": "9d4fb577-5d61-4c6a-a642-d7d91d75769c", 00:32:42.650 "is_configured": true, 00:32:42.650 "data_offset": 2048, 00:32:42.650 "data_size": 63488 00:32:42.650 }, 00:32:42.650 { 00:32:42.650 "name": "BaseBdev2", 00:32:42.650 "uuid": "470d03ee-210f-40f8-a01b-9c33981db5b3", 00:32:42.650 "is_configured": true, 00:32:42.650 "data_offset": 2048, 00:32:42.650 "data_size": 63488 00:32:42.650 }, 00:32:42.650 { 00:32:42.650 "name": "BaseBdev3", 00:32:42.650 "uuid": "101de2fb-b3ff-44f1-aac9-1c99c0561cb5", 00:32:42.650 "is_configured": true, 00:32:42.650 "data_offset": 2048, 00:32:42.650 "data_size": 63488 00:32:42.650 } 00:32:42.650 ] 00:32:42.650 } 00:32:42.650 } 00:32:42.650 }' 00:32:42.650 00:59:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:42.650 00:59:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:32:42.650 BaseBdev2 00:32:42.650 BaseBdev3' 00:32:42.650 00:59:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:42.650 00:59:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:42.650 00:59:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:32:42.909 00:59:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:42.909 "name": "BaseBdev1", 00:32:42.909 "aliases": [ 00:32:42.909 "9d4fb577-5d61-4c6a-a642-d7d91d75769c" 00:32:42.909 ], 00:32:42.909 "product_name": "Malloc disk", 00:32:42.909 "block_size": 512, 00:32:42.909 "num_blocks": 65536, 00:32:42.909 "uuid": 
"9d4fb577-5d61-4c6a-a642-d7d91d75769c", 00:32:42.909 "assigned_rate_limits": { 00:32:42.909 "rw_ios_per_sec": 0, 00:32:42.909 "rw_mbytes_per_sec": 0, 00:32:42.909 "r_mbytes_per_sec": 0, 00:32:42.909 "w_mbytes_per_sec": 0 00:32:42.909 }, 00:32:42.909 "claimed": true, 00:32:42.909 "claim_type": "exclusive_write", 00:32:42.909 "zoned": false, 00:32:42.909 "supported_io_types": { 00:32:42.909 "read": true, 00:32:42.909 "write": true, 00:32:42.909 "unmap": true, 00:32:42.909 "flush": true, 00:32:42.909 "reset": true, 00:32:42.909 "nvme_admin": false, 00:32:42.909 "nvme_io": false, 00:32:42.909 "nvme_io_md": false, 00:32:42.909 "write_zeroes": true, 00:32:42.909 "zcopy": true, 00:32:42.909 "get_zone_info": false, 00:32:42.909 "zone_management": false, 00:32:42.909 "zone_append": false, 00:32:42.909 "compare": false, 00:32:42.909 "compare_and_write": false, 00:32:42.909 "abort": true, 00:32:42.909 "seek_hole": false, 00:32:42.909 "seek_data": false, 00:32:42.909 "copy": true, 00:32:42.909 "nvme_iov_md": false 00:32:42.909 }, 00:32:42.909 "memory_domains": [ 00:32:42.909 { 00:32:42.909 "dma_device_id": "system", 00:32:42.910 "dma_device_type": 1 00:32:42.910 }, 00:32:42.910 { 00:32:42.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:42.910 "dma_device_type": 2 00:32:42.910 } 00:32:42.910 ], 00:32:42.910 "driver_specific": {} 00:32:42.910 }' 00:32:42.910 00:59:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:42.910 00:59:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:42.910 00:59:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:42.910 00:59:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:43.167 00:59:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:43.167 00:59:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:43.167 00:59:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:43.168 00:59:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:43.168 00:59:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:43.168 00:59:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:43.168 00:59:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:43.425 00:59:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:43.425 00:59:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:43.425 00:59:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:32:43.425 00:59:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:43.684 00:59:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:43.684 "name": "BaseBdev2", 00:32:43.684 "aliases": [ 00:32:43.684 "470d03ee-210f-40f8-a01b-9c33981db5b3" 00:32:43.684 ], 00:32:43.684 "product_name": "Malloc disk", 00:32:43.684 "block_size": 512, 00:32:43.684 "num_blocks": 65536, 00:32:43.684 "uuid": "470d03ee-210f-40f8-a01b-9c33981db5b3", 00:32:43.684 "assigned_rate_limits": { 00:32:43.684 "rw_ios_per_sec": 0, 
00:32:43.684 "rw_mbytes_per_sec": 0, 00:32:43.684 "r_mbytes_per_sec": 0, 00:32:43.684 "w_mbytes_per_sec": 0 00:32:43.684 }, 00:32:43.684 "claimed": true, 00:32:43.684 "claim_type": "exclusive_write", 00:32:43.684 "zoned": false, 00:32:43.684 "supported_io_types": { 00:32:43.684 "read": true, 00:32:43.684 "write": true, 00:32:43.684 "unmap": true, 00:32:43.684 "flush": true, 00:32:43.684 "reset": true, 00:32:43.684 "nvme_admin": false, 00:32:43.684 "nvme_io": false, 00:32:43.684 "nvme_io_md": false, 00:32:43.684 "write_zeroes": true, 00:32:43.684 "zcopy": true, 00:32:43.684 "get_zone_info": false, 00:32:43.684 "zone_management": false, 00:32:43.684 "zone_append": false, 00:32:43.684 "compare": false, 00:32:43.684 "compare_and_write": false, 00:32:43.684 "abort": true, 00:32:43.684 "seek_hole": false, 00:32:43.684 "seek_data": false, 00:32:43.684 "copy": true, 00:32:43.684 "nvme_iov_md": false 00:32:43.684 }, 00:32:43.684 "memory_domains": [ 00:32:43.684 { 00:32:43.684 "dma_device_id": "system", 00:32:43.684 "dma_device_type": 1 00:32:43.684 }, 00:32:43.684 { 00:32:43.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:43.684 "dma_device_type": 2 00:32:43.684 } 00:32:43.684 ], 00:32:43.684 "driver_specific": {} 00:32:43.684 }' 00:32:43.684 00:59:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:43.684 00:59:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:43.684 00:59:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:43.684 00:59:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:43.684 00:59:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:43.684 00:59:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:43.684 00:59:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:43.684 00:59:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:43.943 00:59:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:43.943 00:59:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:43.943 00:59:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:43.943 00:59:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:43.943 00:59:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:43.943 00:59:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:32:43.943 00:59:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:44.201 00:59:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:44.201 "name": "BaseBdev3", 00:32:44.201 "aliases": [ 00:32:44.201 "101de2fb-b3ff-44f1-aac9-1c99c0561cb5" 00:32:44.201 ], 00:32:44.201 "product_name": "Malloc disk", 00:32:44.201 "block_size": 512, 00:32:44.201 "num_blocks": 65536, 00:32:44.201 "uuid": "101de2fb-b3ff-44f1-aac9-1c99c0561cb5", 00:32:44.201 "assigned_rate_limits": { 00:32:44.201 "rw_ios_per_sec": 0, 00:32:44.201 "rw_mbytes_per_sec": 0, 00:32:44.201 "r_mbytes_per_sec": 0, 00:32:44.201 "w_mbytes_per_sec": 0 00:32:44.201 
}, 00:32:44.202 "claimed": true, 00:32:44.202 "claim_type": "exclusive_write", 00:32:44.202 "zoned": false, 00:32:44.202 "supported_io_types": { 00:32:44.202 "read": true, 00:32:44.202 "write": true, 00:32:44.202 "unmap": true, 00:32:44.202 "flush": true, 00:32:44.202 "reset": true, 00:32:44.202 "nvme_admin": false, 00:32:44.202 "nvme_io": false, 00:32:44.202 "nvme_io_md": false, 00:32:44.202 "write_zeroes": true, 00:32:44.202 "zcopy": true, 00:32:44.202 "get_zone_info": false, 00:32:44.202 "zone_management": false, 00:32:44.202 "zone_append": false, 00:32:44.202 "compare": false, 00:32:44.202 "compare_and_write": false, 00:32:44.202 "abort": true, 00:32:44.202 "seek_hole": false, 00:32:44.202 "seek_data": false, 00:32:44.202 "copy": true, 00:32:44.202 "nvme_iov_md": false 00:32:44.202 }, 00:32:44.202 "memory_domains": [ 00:32:44.202 { 00:32:44.202 "dma_device_id": "system", 00:32:44.202 "dma_device_type": 1 00:32:44.202 }, 00:32:44.202 { 00:32:44.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:44.202 "dma_device_type": 2 00:32:44.202 } 00:32:44.202 ], 00:32:44.202 "driver_specific": {} 00:32:44.202 }' 00:32:44.202 00:59:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:44.202 00:59:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:44.202 00:59:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:44.202 00:59:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:44.202 00:59:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:44.202 00:59:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:44.202 00:59:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:44.202 00:59:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:44.486 00:59:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:44.486 00:59:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:44.486 00:59:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:44.486 00:59:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:44.486 00:59:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:32:44.758 [2024-07-25 00:59:07.124725] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:44.758 00:59:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:32:44.758 00:59:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:32:44.758 00:59:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:32:44.758 00:59:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:32:44.758 00:59:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:32:44.758 00:59:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:32:44.759 00:59:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:44.759 
00:59:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:44.759 00:59:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:44.759 00:59:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:44.759 00:59:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:44.759 00:59:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:44.759 00:59:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:44.759 00:59:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:44.759 00:59:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:44.759 00:59:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:44.759 00:59:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:45.018 00:59:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:45.018 "name": "Existed_Raid", 00:32:45.018 "uuid": "05d8122a-7b91-42f4-89c1-9a5dcde934f5", 00:32:45.018 "strip_size_kb": 64, 00:32:45.018 "state": "online", 00:32:45.018 "raid_level": "raid5f", 00:32:45.018 "superblock": true, 00:32:45.018 "num_base_bdevs": 3, 00:32:45.018 "num_base_bdevs_discovered": 2, 00:32:45.018 "num_base_bdevs_operational": 2, 00:32:45.018 "base_bdevs_list": [ 00:32:45.018 { 00:32:45.018 "name": null, 00:32:45.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:45.018 "is_configured": false, 00:32:45.018 "data_offset": 2048, 00:32:45.018 "data_size": 63488 00:32:45.018 }, 00:32:45.018 { 00:32:45.018 "name": "BaseBdev2", 00:32:45.018 "uuid": "470d03ee-210f-40f8-a01b-9c33981db5b3", 00:32:45.018 "is_configured": true, 00:32:45.018 "data_offset": 2048, 00:32:45.018 "data_size": 63488 00:32:45.018 }, 00:32:45.018 { 00:32:45.018 "name": "BaseBdev3", 00:32:45.018 "uuid": "101de2fb-b3ff-44f1-aac9-1c99c0561cb5", 00:32:45.018 "is_configured": true, 00:32:45.018 "data_offset": 2048, 00:32:45.018 "data_size": 63488 00:32:45.018 } 00:32:45.018 ] 00:32:45.018 }' 00:32:45.018 00:59:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:45.018 00:59:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.586 00:59:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:32:45.586 00:59:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:45.586 00:59:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:45.586 00:59:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:32:45.845 00:59:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:32:45.845 00:59:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:45.845 00:59:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:32:45.845 [2024-07-25 00:59:08.464991] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:45.845 [2024-07-25 00:59:08.465293] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:46.104 [2024-07-25 00:59:08.565126] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:46.104 00:59:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:32:46.104 00:59:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:46.104 00:59:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:46.104 00:59:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:32:46.363 00:59:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:32:46.363 00:59:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:46.363 00:59:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:32:46.622 [2024-07-25 00:59:09.041236] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:46.622 [2024-07-25 00:59:09.041455] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:32:46.622 00:59:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:32:46.622 00:59:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:46.622 00:59:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:46.622 00:59:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:32:46.881 00:59:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:32:46.881 00:59:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:32:46.881 00:59:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:32:46.881 00:59:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:32:46.881 00:59:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:32:46.881 00:59:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:32:47.138 BaseBdev2 00:32:47.138 00:59:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:32:47.138 00:59:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:32:47.138 00:59:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:47.138 00:59:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:32:47.138 00:59:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 
00:32:47.138 00:59:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:47.138 00:59:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:47.396 00:59:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:47.654 [ 00:32:47.654 { 00:32:47.654 "name": "BaseBdev2", 00:32:47.654 "aliases": [ 00:32:47.654 "a5685c07-e024-4493-a5d5-f2d53ecd89a2" 00:32:47.654 ], 00:32:47.654 "product_name": "Malloc disk", 00:32:47.654 "block_size": 512, 00:32:47.654 "num_blocks": 65536, 00:32:47.654 "uuid": "a5685c07-e024-4493-a5d5-f2d53ecd89a2", 00:32:47.654 "assigned_rate_limits": { 00:32:47.654 "rw_ios_per_sec": 0, 00:32:47.654 "rw_mbytes_per_sec": 0, 00:32:47.654 "r_mbytes_per_sec": 0, 00:32:47.654 "w_mbytes_per_sec": 0 00:32:47.654 }, 00:32:47.654 "claimed": false, 00:32:47.654 "zoned": false, 00:32:47.654 "supported_io_types": { 00:32:47.654 "read": true, 00:32:47.654 "write": true, 00:32:47.654 "unmap": true, 00:32:47.654 "flush": true, 00:32:47.654 "reset": true, 00:32:47.654 "nvme_admin": false, 00:32:47.654 "nvme_io": false, 00:32:47.654 "nvme_io_md": false, 00:32:47.654 "write_zeroes": true, 00:32:47.654 "zcopy": true, 00:32:47.654 "get_zone_info": false, 00:32:47.654 "zone_management": false, 00:32:47.654 "zone_append": false, 00:32:47.654 "compare": false, 00:32:47.654 "compare_and_write": false, 00:32:47.654 "abort": true, 00:32:47.654 "seek_hole": false, 00:32:47.654 "seek_data": false, 00:32:47.654 "copy": true, 00:32:47.654 "nvme_iov_md": false 00:32:47.654 }, 00:32:47.654 "memory_domains": [ 00:32:47.654 { 00:32:47.654 "dma_device_id": "system", 00:32:47.654 "dma_device_type": 1 00:32:47.654 }, 00:32:47.654 { 00:32:47.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:47.654 "dma_device_type": 2 00:32:47.654 } 00:32:47.654 ], 00:32:47.654 "driver_specific": {} 00:32:47.654 } 00:32:47.654 ] 00:32:47.654 00:59:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:32:47.654 00:59:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:32:47.654 00:59:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:32:47.654 00:59:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:32:47.654 BaseBdev3 00:32:47.654 00:59:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:32:47.654 00:59:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:32:47.654 00:59:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:47.654 00:59:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:32:47.654 00:59:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:47.654 00:59:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:47.654 00:59:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:47.912 00:59:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:48.169 [ 00:32:48.169 { 00:32:48.169 "name": "BaseBdev3", 00:32:48.169 "aliases": [ 00:32:48.169 "49772a74-cf50-4f0f-8413-c80a7934a793" 00:32:48.169 ], 00:32:48.169 "product_name": "Malloc disk", 00:32:48.169 "block_size": 512, 00:32:48.169 "num_blocks": 65536, 00:32:48.169 "uuid": "49772a74-cf50-4f0f-8413-c80a7934a793", 00:32:48.169 "assigned_rate_limits": { 00:32:48.169 "rw_ios_per_sec": 0, 00:32:48.169 "rw_mbytes_per_sec": 0, 00:32:48.169 "r_mbytes_per_sec": 0, 00:32:48.169 "w_mbytes_per_sec": 0 00:32:48.169 }, 00:32:48.169 "claimed": false, 00:32:48.169 "zoned": false, 00:32:48.169 "supported_io_types": { 00:32:48.169 "read": true, 00:32:48.169 "write": true, 00:32:48.169 "unmap": true, 00:32:48.169 "flush": true, 00:32:48.169 "reset": true, 00:32:48.169 "nvme_admin": false, 00:32:48.169 "nvme_io": false, 00:32:48.169 "nvme_io_md": false, 00:32:48.169 "write_zeroes": true, 00:32:48.169 "zcopy": true, 00:32:48.169 "get_zone_info": false, 00:32:48.169 "zone_management": false, 00:32:48.169 "zone_append": false, 00:32:48.169 "compare": false, 00:32:48.169 "compare_and_write": false, 00:32:48.169 "abort": true, 00:32:48.169 "seek_hole": false, 00:32:48.169 "seek_data": false, 00:32:48.169 "copy": true, 00:32:48.169 "nvme_iov_md": false 00:32:48.169 }, 00:32:48.169 "memory_domains": [ 00:32:48.169 { 00:32:48.169 "dma_device_id": "system", 00:32:48.169 "dma_device_type": 1 00:32:48.169 }, 00:32:48.169 { 00:32:48.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:48.169 "dma_device_type": 2 00:32:48.169 } 00:32:48.169 ], 00:32:48.169 "driver_specific": {} 00:32:48.169 } 00:32:48.169 ] 00:32:48.169 00:59:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:32:48.169 00:59:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:32:48.169 00:59:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:32:48.169 00:59:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:32:48.428 [2024-07-25 00:59:10.878950] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:48.428 [2024-07-25 00:59:10.879151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:48.428 [2024-07-25 00:59:10.879274] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:48.428 [2024-07-25 00:59:10.881261] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:48.428 00:59:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:48.428 00:59:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:48.428 00:59:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:48.428 00:59:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:48.428 00:59:10 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:48.428 00:59:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:48.428 00:59:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:48.428 00:59:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:48.428 00:59:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:48.428 00:59:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:48.428 00:59:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:48.428 00:59:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:48.687 00:59:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:48.687 "name": "Existed_Raid", 00:32:48.687 "uuid": "c63b1bfc-da3f-4237-a4de-c7ea8751e5ab", 00:32:48.687 "strip_size_kb": 64, 00:32:48.687 "state": "configuring", 00:32:48.687 "raid_level": "raid5f", 00:32:48.687 "superblock": true, 00:32:48.687 "num_base_bdevs": 3, 00:32:48.687 "num_base_bdevs_discovered": 2, 00:32:48.687 "num_base_bdevs_operational": 3, 00:32:48.687 "base_bdevs_list": [ 00:32:48.687 { 00:32:48.687 "name": "BaseBdev1", 00:32:48.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:48.687 "is_configured": false, 00:32:48.687 "data_offset": 0, 00:32:48.687 "data_size": 0 00:32:48.687 }, 00:32:48.687 { 00:32:48.687 "name": "BaseBdev2", 00:32:48.687 "uuid": "a5685c07-e024-4493-a5d5-f2d53ecd89a2", 00:32:48.687 "is_configured": true, 00:32:48.687 "data_offset": 2048, 00:32:48.687 "data_size": 63488 00:32:48.687 }, 00:32:48.687 { 00:32:48.687 "name": "BaseBdev3", 00:32:48.687 "uuid": "49772a74-cf50-4f0f-8413-c80a7934a793", 00:32:48.687 "is_configured": true, 00:32:48.687 "data_offset": 2048, 00:32:48.687 "data_size": 63488 00:32:48.687 } 00:32:48.687 ] 00:32:48.687 }' 00:32:48.687 00:59:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:48.687 00:59:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:49.254 00:59:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:32:49.513 [2024-07-25 00:59:11.915099] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:49.513 00:59:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:49.513 00:59:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:49.513 00:59:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:49.513 00:59:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:49.513 00:59:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:49.513 00:59:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:49.513 00:59:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:32:49.513 00:59:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:49.513 00:59:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:49.513 00:59:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:49.513 00:59:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:49.513 00:59:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:49.513 00:59:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:49.513 "name": "Existed_Raid", 00:32:49.513 "uuid": "c63b1bfc-da3f-4237-a4de-c7ea8751e5ab", 00:32:49.513 "strip_size_kb": 64, 00:32:49.513 "state": "configuring", 00:32:49.513 "raid_level": "raid5f", 00:32:49.513 "superblock": true, 00:32:49.513 "num_base_bdevs": 3, 00:32:49.513 "num_base_bdevs_discovered": 1, 00:32:49.513 "num_base_bdevs_operational": 3, 00:32:49.513 "base_bdevs_list": [ 00:32:49.513 { 00:32:49.513 "name": "BaseBdev1", 00:32:49.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:49.513 "is_configured": false, 00:32:49.513 "data_offset": 0, 00:32:49.513 "data_size": 0 00:32:49.513 }, 00:32:49.513 { 00:32:49.513 "name": null, 00:32:49.513 "uuid": "a5685c07-e024-4493-a5d5-f2d53ecd89a2", 00:32:49.513 "is_configured": false, 00:32:49.513 "data_offset": 2048, 00:32:49.513 "data_size": 63488 00:32:49.513 }, 00:32:49.513 { 00:32:49.513 "name": "BaseBdev3", 00:32:49.513 "uuid": "49772a74-cf50-4f0f-8413-c80a7934a793", 00:32:49.513 "is_configured": true, 00:32:49.513 "data_offset": 2048, 00:32:49.513 "data_size": 63488 00:32:49.513 } 00:32:49.513 ] 00:32:49.513 }' 00:32:49.513 00:59:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:49.513 00:59:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:50.079 00:59:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:50.079 00:59:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:50.338 00:59:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:32:50.338 00:59:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:32:50.596 [2024-07-25 00:59:13.232478] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:50.596 BaseBdev1 00:32:50.855 00:59:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:32:50.855 00:59:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:32:50.855 00:59:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:50.855 00:59:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:32:50.855 00:59:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:50.855 00:59:13 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:50.855 00:59:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:50.855 00:59:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:51.113 [ 00:32:51.113 { 00:32:51.113 "name": "BaseBdev1", 00:32:51.113 "aliases": [ 00:32:51.113 "6a3f0c8e-125c-4b3f-8948-835ec3e79303" 00:32:51.113 ], 00:32:51.113 "product_name": "Malloc disk", 00:32:51.113 "block_size": 512, 00:32:51.113 "num_blocks": 65536, 00:32:51.113 "uuid": "6a3f0c8e-125c-4b3f-8948-835ec3e79303", 00:32:51.113 "assigned_rate_limits": { 00:32:51.113 "rw_ios_per_sec": 0, 00:32:51.113 "rw_mbytes_per_sec": 0, 00:32:51.113 "r_mbytes_per_sec": 0, 00:32:51.113 "w_mbytes_per_sec": 0 00:32:51.113 }, 00:32:51.113 "claimed": true, 00:32:51.113 "claim_type": "exclusive_write", 00:32:51.113 "zoned": false, 00:32:51.113 "supported_io_types": { 00:32:51.113 "read": true, 00:32:51.113 "write": true, 00:32:51.113 "unmap": true, 00:32:51.113 "flush": true, 00:32:51.113 "reset": true, 00:32:51.113 "nvme_admin": false, 00:32:51.113 "nvme_io": false, 00:32:51.113 "nvme_io_md": false, 00:32:51.113 "write_zeroes": true, 00:32:51.113 "zcopy": true, 00:32:51.113 "get_zone_info": false, 00:32:51.113 "zone_management": false, 00:32:51.113 "zone_append": false, 00:32:51.113 "compare": false, 00:32:51.113 "compare_and_write": false, 00:32:51.113 "abort": true, 00:32:51.113 "seek_hole": false, 00:32:51.113 "seek_data": false, 00:32:51.113 "copy": true, 00:32:51.113 "nvme_iov_md": false 00:32:51.113 }, 00:32:51.113 "memory_domains": [ 00:32:51.113 { 00:32:51.113 "dma_device_id": "system", 00:32:51.113 "dma_device_type": 1 00:32:51.113 }, 00:32:51.113 { 00:32:51.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:51.113 "dma_device_type": 2 00:32:51.113 } 00:32:51.113 ], 00:32:51.113 "driver_specific": {} 00:32:51.113 } 00:32:51.113 ] 00:32:51.113 00:59:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:32:51.113 00:59:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:51.113 00:59:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:51.113 00:59:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:51.113 00:59:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:51.113 00:59:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:51.113 00:59:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:51.113 00:59:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:51.113 00:59:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:51.113 00:59:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:51.113 00:59:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:51.370 00:59:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:51.370 00:59:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:51.370 00:59:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:51.370 "name": "Existed_Raid", 00:32:51.370 "uuid": "c63b1bfc-da3f-4237-a4de-c7ea8751e5ab", 00:32:51.370 "strip_size_kb": 64, 00:32:51.370 "state": "configuring", 00:32:51.370 "raid_level": "raid5f", 00:32:51.370 "superblock": true, 00:32:51.370 "num_base_bdevs": 3, 00:32:51.370 "num_base_bdevs_discovered": 2, 00:32:51.370 "num_base_bdevs_operational": 3, 00:32:51.370 "base_bdevs_list": [ 00:32:51.370 { 00:32:51.370 "name": "BaseBdev1", 00:32:51.370 "uuid": "6a3f0c8e-125c-4b3f-8948-835ec3e79303", 00:32:51.370 "is_configured": true, 00:32:51.370 "data_offset": 2048, 00:32:51.370 "data_size": 63488 00:32:51.370 }, 00:32:51.370 { 00:32:51.370 "name": null, 00:32:51.370 "uuid": "a5685c07-e024-4493-a5d5-f2d53ecd89a2", 00:32:51.370 "is_configured": false, 00:32:51.370 "data_offset": 2048, 00:32:51.370 "data_size": 63488 00:32:51.370 }, 00:32:51.370 { 00:32:51.370 "name": "BaseBdev3", 00:32:51.370 "uuid": "49772a74-cf50-4f0f-8413-c80a7934a793", 00:32:51.370 "is_configured": true, 00:32:51.370 "data_offset": 2048, 00:32:51.370 "data_size": 63488 00:32:51.370 } 00:32:51.370 ] 00:32:51.370 }' 00:32:51.370 00:59:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:51.370 00:59:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:51.935 00:59:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:51.935 00:59:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:52.193 00:59:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:32:52.193 00:59:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:32:52.451 [2024-07-25 00:59:14.972857] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:52.451 00:59:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:52.451 00:59:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:52.451 00:59:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:52.451 00:59:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:52.451 00:59:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:52.451 00:59:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:52.451 00:59:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:52.451 00:59:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:52.451 00:59:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:52.451 00:59:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:52.451 00:59:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:52.451 00:59:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:52.709 00:59:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:52.709 "name": "Existed_Raid", 00:32:52.709 "uuid": "c63b1bfc-da3f-4237-a4de-c7ea8751e5ab", 00:32:52.709 "strip_size_kb": 64, 00:32:52.709 "state": "configuring", 00:32:52.709 "raid_level": "raid5f", 00:32:52.709 "superblock": true, 00:32:52.709 "num_base_bdevs": 3, 00:32:52.709 "num_base_bdevs_discovered": 1, 00:32:52.709 "num_base_bdevs_operational": 3, 00:32:52.709 "base_bdevs_list": [ 00:32:52.709 { 00:32:52.709 "name": "BaseBdev1", 00:32:52.709 "uuid": "6a3f0c8e-125c-4b3f-8948-835ec3e79303", 00:32:52.709 "is_configured": true, 00:32:52.709 "data_offset": 2048, 00:32:52.709 "data_size": 63488 00:32:52.709 }, 00:32:52.709 { 00:32:52.709 "name": null, 00:32:52.709 "uuid": "a5685c07-e024-4493-a5d5-f2d53ecd89a2", 00:32:52.709 "is_configured": false, 00:32:52.709 "data_offset": 2048, 00:32:52.709 "data_size": 63488 00:32:52.709 }, 00:32:52.709 { 00:32:52.709 "name": null, 00:32:52.709 "uuid": "49772a74-cf50-4f0f-8413-c80a7934a793", 00:32:52.709 "is_configured": false, 00:32:52.709 "data_offset": 2048, 00:32:52.709 "data_size": 63488 00:32:52.709 } 00:32:52.709 ] 00:32:52.709 }' 00:32:52.709 00:59:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:52.709 00:59:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:53.313 00:59:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:53.313 00:59:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:53.571 00:59:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:32:53.571 00:59:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:32:53.828 [2024-07-25 00:59:16.385160] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:53.828 00:59:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:53.829 00:59:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:53.829 00:59:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:53.829 00:59:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:53.829 00:59:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:53.829 00:59:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:53.829 00:59:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:53.829 00:59:16 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:53.829 00:59:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:53.829 00:59:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:53.829 00:59:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:53.829 00:59:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:54.087 00:59:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:54.087 "name": "Existed_Raid", 00:32:54.087 "uuid": "c63b1bfc-da3f-4237-a4de-c7ea8751e5ab", 00:32:54.087 "strip_size_kb": 64, 00:32:54.087 "state": "configuring", 00:32:54.087 "raid_level": "raid5f", 00:32:54.087 "superblock": true, 00:32:54.087 "num_base_bdevs": 3, 00:32:54.087 "num_base_bdevs_discovered": 2, 00:32:54.087 "num_base_bdevs_operational": 3, 00:32:54.087 "base_bdevs_list": [ 00:32:54.087 { 00:32:54.087 "name": "BaseBdev1", 00:32:54.087 "uuid": "6a3f0c8e-125c-4b3f-8948-835ec3e79303", 00:32:54.087 "is_configured": true, 00:32:54.087 "data_offset": 2048, 00:32:54.087 "data_size": 63488 00:32:54.087 }, 00:32:54.087 { 00:32:54.087 "name": null, 00:32:54.087 "uuid": "a5685c07-e024-4493-a5d5-f2d53ecd89a2", 00:32:54.087 "is_configured": false, 00:32:54.087 "data_offset": 2048, 00:32:54.087 "data_size": 63488 00:32:54.087 }, 00:32:54.087 { 00:32:54.087 "name": "BaseBdev3", 00:32:54.087 "uuid": "49772a74-cf50-4f0f-8413-c80a7934a793", 00:32:54.087 "is_configured": true, 00:32:54.087 "data_offset": 2048, 00:32:54.087 "data_size": 63488 00:32:54.087 } 00:32:54.087 ] 00:32:54.087 }' 00:32:54.087 00:59:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:54.087 00:59:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:54.652 00:59:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:54.652 00:59:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:54.910 00:59:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:32:54.910 00:59:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:32:55.168 [2024-07-25 00:59:17.589457] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:55.168 00:59:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:55.168 00:59:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:55.168 00:59:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:55.168 00:59:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:55.169 00:59:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:55.169 00:59:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:55.169 00:59:17 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:55.169 00:59:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:55.169 00:59:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:55.169 00:59:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:55.169 00:59:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:55.169 00:59:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:55.427 00:59:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:55.427 "name": "Existed_Raid", 00:32:55.427 "uuid": "c63b1bfc-da3f-4237-a4de-c7ea8751e5ab", 00:32:55.427 "strip_size_kb": 64, 00:32:55.427 "state": "configuring", 00:32:55.427 "raid_level": "raid5f", 00:32:55.427 "superblock": true, 00:32:55.427 "num_base_bdevs": 3, 00:32:55.427 "num_base_bdevs_discovered": 1, 00:32:55.427 "num_base_bdevs_operational": 3, 00:32:55.427 "base_bdevs_list": [ 00:32:55.427 { 00:32:55.427 "name": null, 00:32:55.427 "uuid": "6a3f0c8e-125c-4b3f-8948-835ec3e79303", 00:32:55.427 "is_configured": false, 00:32:55.427 "data_offset": 2048, 00:32:55.427 "data_size": 63488 00:32:55.427 }, 00:32:55.427 { 00:32:55.427 "name": null, 00:32:55.427 "uuid": "a5685c07-e024-4493-a5d5-f2d53ecd89a2", 00:32:55.427 "is_configured": false, 00:32:55.427 "data_offset": 2048, 00:32:55.427 "data_size": 63488 00:32:55.427 }, 00:32:55.427 { 00:32:55.427 "name": "BaseBdev3", 00:32:55.427 "uuid": "49772a74-cf50-4f0f-8413-c80a7934a793", 00:32:55.427 "is_configured": true, 00:32:55.427 "data_offset": 2048, 00:32:55.427 "data_size": 63488 00:32:55.427 } 00:32:55.427 ] 00:32:55.427 }' 00:32:55.427 00:59:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:55.427 00:59:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:55.994 00:59:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:55.994 00:59:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:56.253 00:59:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:32:56.253 00:59:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:32:56.512 [2024-07-25 00:59:19.015032] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:56.512 00:59:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:56.512 00:59:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:56.512 00:59:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:56.512 00:59:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:56.512 00:59:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:56.512 00:59:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:56.512 00:59:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:56.512 00:59:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:56.512 00:59:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:56.512 00:59:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:56.512 00:59:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:56.512 00:59:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:56.771 00:59:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:56.771 "name": "Existed_Raid", 00:32:56.771 "uuid": "c63b1bfc-da3f-4237-a4de-c7ea8751e5ab", 00:32:56.771 "strip_size_kb": 64, 00:32:56.771 "state": "configuring", 00:32:56.771 "raid_level": "raid5f", 00:32:56.771 "superblock": true, 00:32:56.771 "num_base_bdevs": 3, 00:32:56.771 "num_base_bdevs_discovered": 2, 00:32:56.771 "num_base_bdevs_operational": 3, 00:32:56.771 "base_bdevs_list": [ 00:32:56.771 { 00:32:56.771 "name": null, 00:32:56.771 "uuid": "6a3f0c8e-125c-4b3f-8948-835ec3e79303", 00:32:56.771 "is_configured": false, 00:32:56.771 "data_offset": 2048, 00:32:56.771 "data_size": 63488 00:32:56.771 }, 00:32:56.771 { 00:32:56.771 "name": "BaseBdev2", 00:32:56.771 "uuid": "a5685c07-e024-4493-a5d5-f2d53ecd89a2", 00:32:56.771 "is_configured": true, 00:32:56.771 "data_offset": 2048, 00:32:56.771 "data_size": 63488 00:32:56.771 }, 00:32:56.771 { 00:32:56.771 "name": "BaseBdev3", 00:32:56.771 "uuid": "49772a74-cf50-4f0f-8413-c80a7934a793", 00:32:56.771 "is_configured": true, 00:32:56.771 "data_offset": 2048, 00:32:56.771 "data_size": 63488 00:32:56.771 } 00:32:56.771 ] 00:32:56.771 }' 00:32:56.771 00:59:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:56.771 00:59:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:57.338 00:59:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:57.338 00:59:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:57.597 00:59:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:32:57.597 00:59:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:57.597 00:59:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:32:57.856 00:59:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 6a3f0c8e-125c-4b3f-8948-835ec3e79303 00:32:58.114 [2024-07-25 00:59:20.619154] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:32:58.114 [2024-07-25 00:59:20.619568] 
bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:32:58.115 [2024-07-25 00:59:20.619698] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:58.115 [2024-07-25 00:59:20.619852] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:58.115 NewBaseBdev 00:32:58.115 [2024-07-25 00:59:20.625886] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:32:58.115 [2024-07-25 00:59:20.626017] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008a80 00:32:58.115 [2024-07-25 00:59:20.626370] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:58.115 00:59:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:32:58.115 00:59:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:32:58.115 00:59:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:58.115 00:59:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:32:58.115 00:59:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:58.115 00:59:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:58.115 00:59:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:58.373 00:59:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:32:58.631 [ 00:32:58.631 { 00:32:58.631 "name": "NewBaseBdev", 00:32:58.631 "aliases": [ 00:32:58.631 "6a3f0c8e-125c-4b3f-8948-835ec3e79303" 00:32:58.631 ], 00:32:58.631 "product_name": "Malloc disk", 00:32:58.631 "block_size": 512, 00:32:58.631 "num_blocks": 65536, 00:32:58.631 "uuid": "6a3f0c8e-125c-4b3f-8948-835ec3e79303", 00:32:58.631 "assigned_rate_limits": { 00:32:58.631 "rw_ios_per_sec": 0, 00:32:58.631 "rw_mbytes_per_sec": 0, 00:32:58.631 "r_mbytes_per_sec": 0, 00:32:58.631 "w_mbytes_per_sec": 0 00:32:58.631 }, 00:32:58.631 "claimed": true, 00:32:58.631 "claim_type": "exclusive_write", 00:32:58.631 "zoned": false, 00:32:58.631 "supported_io_types": { 00:32:58.631 "read": true, 00:32:58.631 "write": true, 00:32:58.631 "unmap": true, 00:32:58.631 "flush": true, 00:32:58.631 "reset": true, 00:32:58.631 "nvme_admin": false, 00:32:58.631 "nvme_io": false, 00:32:58.631 "nvme_io_md": false, 00:32:58.631 "write_zeroes": true, 00:32:58.631 "zcopy": true, 00:32:58.631 "get_zone_info": false, 00:32:58.631 "zone_management": false, 00:32:58.631 "zone_append": false, 00:32:58.631 "compare": false, 00:32:58.631 "compare_and_write": false, 00:32:58.631 "abort": true, 00:32:58.631 "seek_hole": false, 00:32:58.631 "seek_data": false, 00:32:58.631 "copy": true, 00:32:58.631 "nvme_iov_md": false 00:32:58.631 }, 00:32:58.631 "memory_domains": [ 00:32:58.631 { 00:32:58.631 "dma_device_id": "system", 00:32:58.631 "dma_device_type": 1 00:32:58.631 }, 00:32:58.631 { 00:32:58.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:58.631 "dma_device_type": 2 00:32:58.631 } 00:32:58.631 ], 00:32:58.631 "driver_specific": {} 00:32:58.631 } 00:32:58.631 ] 00:32:58.631 00:59:21 
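The NewBaseBdev step above is the recovery path of this superblock variant: BaseBdev1 was deleted outright with bdev_malloc_delete, yet creating a fresh malloc bdev under a new name but with the UUID the raid still lists for the missing slot is enough for the raid to claim it on examine and move from configuring back to online, as the claim and configure_cont messages show. The two RPCs involved, exactly as traced (the UUID is the one from this run):

  # Recreate the missing member with the UUID the raid still expects, then wait
  # for examine to complete so the raid can claim it and resume.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_malloc_create 32 512 -b NewBaseBdev -u 6a3f0c8e-125c-4b3f-8948-835ec3e79303
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine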
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:32:58.631 00:59:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:32:58.631 00:59:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:58.631 00:59:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:58.631 00:59:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:58.631 00:59:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:58.631 00:59:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:58.631 00:59:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:58.631 00:59:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:58.631 00:59:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:58.631 00:59:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:58.631 00:59:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:58.631 00:59:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:58.890 00:59:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:58.890 "name": "Existed_Raid", 00:32:58.890 "uuid": "c63b1bfc-da3f-4237-a4de-c7ea8751e5ab", 00:32:58.890 "strip_size_kb": 64, 00:32:58.890 "state": "online", 00:32:58.890 "raid_level": "raid5f", 00:32:58.890 "superblock": true, 00:32:58.890 "num_base_bdevs": 3, 00:32:58.890 "num_base_bdevs_discovered": 3, 00:32:58.890 "num_base_bdevs_operational": 3, 00:32:58.890 "base_bdevs_list": [ 00:32:58.890 { 00:32:58.890 "name": "NewBaseBdev", 00:32:58.890 "uuid": "6a3f0c8e-125c-4b3f-8948-835ec3e79303", 00:32:58.890 "is_configured": true, 00:32:58.890 "data_offset": 2048, 00:32:58.890 "data_size": 63488 00:32:58.890 }, 00:32:58.890 { 00:32:58.890 "name": "BaseBdev2", 00:32:58.890 "uuid": "a5685c07-e024-4493-a5d5-f2d53ecd89a2", 00:32:58.890 "is_configured": true, 00:32:58.890 "data_offset": 2048, 00:32:58.890 "data_size": 63488 00:32:58.890 }, 00:32:58.890 { 00:32:58.890 "name": "BaseBdev3", 00:32:58.890 "uuid": "49772a74-cf50-4f0f-8413-c80a7934a793", 00:32:58.890 "is_configured": true, 00:32:58.890 "data_offset": 2048, 00:32:58.890 "data_size": 63488 00:32:58.890 } 00:32:58.890 ] 00:32:58.890 }' 00:32:58.890 00:59:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:58.890 00:59:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:59.457 00:59:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:32:59.457 00:59:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:32:59.457 00:59:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:59.457 00:59:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:59.457 00:59:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:59.457 00:59:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:32:59.457 00:59:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:32:59.457 00:59:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:59.457 [2024-07-25 00:59:22.071497] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:59.457 00:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:59.457 "name": "Existed_Raid", 00:32:59.457 "aliases": [ 00:32:59.457 "c63b1bfc-da3f-4237-a4de-c7ea8751e5ab" 00:32:59.457 ], 00:32:59.457 "product_name": "Raid Volume", 00:32:59.457 "block_size": 512, 00:32:59.457 "num_blocks": 126976, 00:32:59.457 "uuid": "c63b1bfc-da3f-4237-a4de-c7ea8751e5ab", 00:32:59.457 "assigned_rate_limits": { 00:32:59.457 "rw_ios_per_sec": 0, 00:32:59.457 "rw_mbytes_per_sec": 0, 00:32:59.457 "r_mbytes_per_sec": 0, 00:32:59.457 "w_mbytes_per_sec": 0 00:32:59.457 }, 00:32:59.457 "claimed": false, 00:32:59.457 "zoned": false, 00:32:59.457 "supported_io_types": { 00:32:59.457 "read": true, 00:32:59.457 "write": true, 00:32:59.457 "unmap": false, 00:32:59.457 "flush": false, 00:32:59.457 "reset": true, 00:32:59.457 "nvme_admin": false, 00:32:59.457 "nvme_io": false, 00:32:59.457 "nvme_io_md": false, 00:32:59.457 "write_zeroes": true, 00:32:59.457 "zcopy": false, 00:32:59.457 "get_zone_info": false, 00:32:59.457 "zone_management": false, 00:32:59.457 "zone_append": false, 00:32:59.457 "compare": false, 00:32:59.457 "compare_and_write": false, 00:32:59.457 "abort": false, 00:32:59.457 "seek_hole": false, 00:32:59.457 "seek_data": false, 00:32:59.457 "copy": false, 00:32:59.457 "nvme_iov_md": false 00:32:59.457 }, 00:32:59.457 "driver_specific": { 00:32:59.457 "raid": { 00:32:59.457 "uuid": "c63b1bfc-da3f-4237-a4de-c7ea8751e5ab", 00:32:59.457 "strip_size_kb": 64, 00:32:59.457 "state": "online", 00:32:59.457 "raid_level": "raid5f", 00:32:59.457 "superblock": true, 00:32:59.457 "num_base_bdevs": 3, 00:32:59.457 "num_base_bdevs_discovered": 3, 00:32:59.457 "num_base_bdevs_operational": 3, 00:32:59.457 "base_bdevs_list": [ 00:32:59.457 { 00:32:59.457 "name": "NewBaseBdev", 00:32:59.457 "uuid": "6a3f0c8e-125c-4b3f-8948-835ec3e79303", 00:32:59.457 "is_configured": true, 00:32:59.457 "data_offset": 2048, 00:32:59.457 "data_size": 63488 00:32:59.457 }, 00:32:59.457 { 00:32:59.457 "name": "BaseBdev2", 00:32:59.457 "uuid": "a5685c07-e024-4493-a5d5-f2d53ecd89a2", 00:32:59.457 "is_configured": true, 00:32:59.457 "data_offset": 2048, 00:32:59.457 "data_size": 63488 00:32:59.457 }, 00:32:59.457 { 00:32:59.457 "name": "BaseBdev3", 00:32:59.457 "uuid": "49772a74-cf50-4f0f-8413-c80a7934a793", 00:32:59.457 "is_configured": true, 00:32:59.457 "data_offset": 2048, 00:32:59.457 "data_size": 63488 00:32:59.457 } 00:32:59.457 ] 00:32:59.457 } 00:32:59.457 } 00:32:59.457 }' 00:32:59.457 00:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:59.715 00:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:32:59.715 BaseBdev2 00:32:59.715 BaseBdev3' 00:32:59.715 00:59:22 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:59.715 00:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:32:59.715 00:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:59.974 00:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:59.974 "name": "NewBaseBdev", 00:32:59.974 "aliases": [ 00:32:59.974 "6a3f0c8e-125c-4b3f-8948-835ec3e79303" 00:32:59.974 ], 00:32:59.974 "product_name": "Malloc disk", 00:32:59.974 "block_size": 512, 00:32:59.974 "num_blocks": 65536, 00:32:59.974 "uuid": "6a3f0c8e-125c-4b3f-8948-835ec3e79303", 00:32:59.974 "assigned_rate_limits": { 00:32:59.974 "rw_ios_per_sec": 0, 00:32:59.974 "rw_mbytes_per_sec": 0, 00:32:59.974 "r_mbytes_per_sec": 0, 00:32:59.974 "w_mbytes_per_sec": 0 00:32:59.974 }, 00:32:59.974 "claimed": true, 00:32:59.974 "claim_type": "exclusive_write", 00:32:59.974 "zoned": false, 00:32:59.974 "supported_io_types": { 00:32:59.974 "read": true, 00:32:59.974 "write": true, 00:32:59.974 "unmap": true, 00:32:59.974 "flush": true, 00:32:59.974 "reset": true, 00:32:59.974 "nvme_admin": false, 00:32:59.974 "nvme_io": false, 00:32:59.974 "nvme_io_md": false, 00:32:59.974 "write_zeroes": true, 00:32:59.974 "zcopy": true, 00:32:59.974 "get_zone_info": false, 00:32:59.974 "zone_management": false, 00:32:59.974 "zone_append": false, 00:32:59.974 "compare": false, 00:32:59.974 "compare_and_write": false, 00:32:59.974 "abort": true, 00:32:59.974 "seek_hole": false, 00:32:59.974 "seek_data": false, 00:32:59.974 "copy": true, 00:32:59.974 "nvme_iov_md": false 00:32:59.974 }, 00:32:59.974 "memory_domains": [ 00:32:59.974 { 00:32:59.974 "dma_device_id": "system", 00:32:59.974 "dma_device_type": 1 00:32:59.974 }, 00:32:59.974 { 00:32:59.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:59.974 "dma_device_type": 2 00:32:59.974 } 00:32:59.974 ], 00:32:59.974 "driver_specific": {} 00:32:59.974 }' 00:32:59.974 00:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:59.974 00:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:59.974 00:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:59.974 00:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:59.974 00:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:59.974 00:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:59.974 00:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:00.249 00:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:00.249 00:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:00.249 00:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:00.249 00:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:00.249 00:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:00.249 00:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:00.249 00:59:22 
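The property verification above repeats one block per configured base bdev: fetch its descriptor with bdev_get_bdevs and require that block_size, md_size, md_interleave and dif_type all match the raid volume's values. A condensed sketch of that loop, assuming the same RPC socket and jq filters as the trace (the rpc wrapper function is illustrative):

  # Sketch: every configured member must expose the same layout fields as the volume.
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  raid_info=$(rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')
  for name in $(jq -r '.driver_specific.raid.base_bdevs_list[]
                       | select(.is_configured == true).name' <<< "$raid_info"); do
      base_info=$(rpc bdev_get_bdevs -b "$name" | jq '.[]')
      for field in .block_size .md_size .md_interleave .dif_type; do
          [[ $(jq "$field" <<< "$raid_info") == $(jq "$field" <<< "$base_info") ]] || exit 1
      done
  done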
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:33:00.249 00:59:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:00.506 00:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:00.506 "name": "BaseBdev2", 00:33:00.506 "aliases": [ 00:33:00.506 "a5685c07-e024-4493-a5d5-f2d53ecd89a2" 00:33:00.506 ], 00:33:00.506 "product_name": "Malloc disk", 00:33:00.506 "block_size": 512, 00:33:00.506 "num_blocks": 65536, 00:33:00.506 "uuid": "a5685c07-e024-4493-a5d5-f2d53ecd89a2", 00:33:00.506 "assigned_rate_limits": { 00:33:00.506 "rw_ios_per_sec": 0, 00:33:00.506 "rw_mbytes_per_sec": 0, 00:33:00.506 "r_mbytes_per_sec": 0, 00:33:00.506 "w_mbytes_per_sec": 0 00:33:00.506 }, 00:33:00.506 "claimed": true, 00:33:00.506 "claim_type": "exclusive_write", 00:33:00.506 "zoned": false, 00:33:00.506 "supported_io_types": { 00:33:00.506 "read": true, 00:33:00.506 "write": true, 00:33:00.506 "unmap": true, 00:33:00.506 "flush": true, 00:33:00.506 "reset": true, 00:33:00.506 "nvme_admin": false, 00:33:00.506 "nvme_io": false, 00:33:00.506 "nvme_io_md": false, 00:33:00.506 "write_zeroes": true, 00:33:00.506 "zcopy": true, 00:33:00.506 "get_zone_info": false, 00:33:00.506 "zone_management": false, 00:33:00.506 "zone_append": false, 00:33:00.506 "compare": false, 00:33:00.506 "compare_and_write": false, 00:33:00.506 "abort": true, 00:33:00.506 "seek_hole": false, 00:33:00.506 "seek_data": false, 00:33:00.506 "copy": true, 00:33:00.506 "nvme_iov_md": false 00:33:00.506 }, 00:33:00.506 "memory_domains": [ 00:33:00.506 { 00:33:00.506 "dma_device_id": "system", 00:33:00.506 "dma_device_type": 1 00:33:00.506 }, 00:33:00.506 { 00:33:00.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:00.506 "dma_device_type": 2 00:33:00.506 } 00:33:00.506 ], 00:33:00.506 "driver_specific": {} 00:33:00.506 }' 00:33:00.506 00:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:00.506 00:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:00.506 00:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:00.506 00:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:00.764 00:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:00.764 00:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:00.764 00:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:00.764 00:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:00.764 00:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:00.764 00:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:00.764 00:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:01.021 00:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:01.021 00:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:01.021 00:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:33:01.021 00:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:01.279 00:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:01.279 "name": "BaseBdev3", 00:33:01.279 "aliases": [ 00:33:01.279 "49772a74-cf50-4f0f-8413-c80a7934a793" 00:33:01.279 ], 00:33:01.279 "product_name": "Malloc disk", 00:33:01.279 "block_size": 512, 00:33:01.279 "num_blocks": 65536, 00:33:01.279 "uuid": "49772a74-cf50-4f0f-8413-c80a7934a793", 00:33:01.279 "assigned_rate_limits": { 00:33:01.279 "rw_ios_per_sec": 0, 00:33:01.279 "rw_mbytes_per_sec": 0, 00:33:01.279 "r_mbytes_per_sec": 0, 00:33:01.279 "w_mbytes_per_sec": 0 00:33:01.279 }, 00:33:01.279 "claimed": true, 00:33:01.279 "claim_type": "exclusive_write", 00:33:01.279 "zoned": false, 00:33:01.279 "supported_io_types": { 00:33:01.279 "read": true, 00:33:01.279 "write": true, 00:33:01.279 "unmap": true, 00:33:01.279 "flush": true, 00:33:01.279 "reset": true, 00:33:01.279 "nvme_admin": false, 00:33:01.279 "nvme_io": false, 00:33:01.279 "nvme_io_md": false, 00:33:01.279 "write_zeroes": true, 00:33:01.279 "zcopy": true, 00:33:01.279 "get_zone_info": false, 00:33:01.279 "zone_management": false, 00:33:01.279 "zone_append": false, 00:33:01.279 "compare": false, 00:33:01.279 "compare_and_write": false, 00:33:01.279 "abort": true, 00:33:01.279 "seek_hole": false, 00:33:01.279 "seek_data": false, 00:33:01.279 "copy": true, 00:33:01.279 "nvme_iov_md": false 00:33:01.279 }, 00:33:01.279 "memory_domains": [ 00:33:01.279 { 00:33:01.279 "dma_device_id": "system", 00:33:01.279 "dma_device_type": 1 00:33:01.279 }, 00:33:01.279 { 00:33:01.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:01.279 "dma_device_type": 2 00:33:01.279 } 00:33:01.279 ], 00:33:01.279 "driver_specific": {} 00:33:01.279 }' 00:33:01.279 00:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:01.279 00:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:01.279 00:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:01.279 00:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:01.279 00:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:01.279 00:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:01.279 00:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:01.279 00:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:01.537 00:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:01.537 00:59:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:01.537 00:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:01.537 00:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:01.537 00:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:33:01.795 [2024-07-25 00:59:24.315890] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:01.795 [2024-07-25 00:59:24.316103] 
bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:01.795 [2024-07-25 00:59:24.316276] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:01.795 [2024-07-25 00:59:24.316610] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:01.795 [2024-07-25 00:59:24.316718] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name Existed_Raid, state offline 00:33:01.795 00:59:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 151343 00:33:01.795 00:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 151343 ']' 00:33:01.795 00:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 151343 00:33:01.795 00:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:33:01.795 00:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:01.795 00:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 151343 00:33:01.795 00:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:01.795 00:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:01.795 00:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 151343' 00:33:01.795 killing process with pid 151343 00:33:01.795 00:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 151343 00:33:01.795 [2024-07-25 00:59:24.368821] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:01.795 00:59:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 151343 00:33:02.361 [2024-07-25 00:59:24.721497] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:03.734 ************************************ 00:33:03.734 END TEST raid5f_state_function_test_sb 00:33:03.734 ************************************ 00:33:03.734 00:59:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:33:03.734 00:33:03.734 real 0m29.381s 00:33:03.734 user 0m52.995s 00:33:03.734 sys 0m4.168s 00:33:03.734 00:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:03.734 00:59:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:03.734 00:59:26 bdev_raid -- bdev/bdev_raid.sh@888 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:33:03.734 00:59:26 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:33:03.734 00:59:26 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:03.734 00:59:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:03.734 ************************************ 00:33:03.734 START TEST raid5f_superblock_test 00:33:03.734 ************************************ 00:33:03.734 00:59:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid5f 3 00:33:03.734 00:59:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid5f 00:33:03.734 00:59:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:33:03.734 00:59:26 
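The teardown above is the stock killprocess sequence: verify the app is still alive with kill -0, check what the pid actually is (the trace guards against signalling a sudo wrapper), then terminate it and wait so the next test starts from a clean slate. A hedged sketch of that sequence, using the pid from this run:

  # Sketch of the killprocess teardown traced above (pid 151343 is from this run).
  pid=151343
  kill -0 "$pid"                    # still running?
  ps --no-headers -o comm= "$pid"   # reactor_0 here, i.e. not a sudo wrapper
  echo "killing process with pid $pid"
  kill "$pid"                       # default SIGTERM
  wait "$pid"                       # reap it; works because the shell started the app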
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:33:03.734 00:59:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:33:03.734 00:59:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:33:03.734 00:59:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:33:03.734 00:59:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:33:03.734 00:59:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:33:03.734 00:59:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:33:03.734 00:59:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:33:03.734 00:59:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:33:03.734 00:59:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:33:03.734 00:59:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:33:03.734 00:59:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid5f '!=' raid1 ']' 00:33:03.734 00:59:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:33:03.734 00:59:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:33:03.735 00:59:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=152306 00:33:03.735 00:59:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 152306 /var/tmp/spdk-raid.sock 00:33:03.735 00:59:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 152306 ']' 00:33:03.735 00:59:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:33:03.735 00:59:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:03.735 00:59:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:03.735 00:59:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:03.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:03.735 00:59:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:03.735 00:59:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:03.992 [2024-07-25 00:59:26.423863] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:33:03.992 [2024-07-25 00:59:26.424308] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152306 ] 00:33:03.992 [2024-07-25 00:59:26.609093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:04.250 [2024-07-25 00:59:26.881354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:04.507 [2024-07-25 00:59:27.114115] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:04.765 00:59:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:04.765 00:59:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:33:04.765 00:59:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:33:04.765 00:59:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:33:04.765 00:59:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:33:04.765 00:59:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:33:04.765 00:59:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:33:04.765 00:59:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:04.765 00:59:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:33:04.765 00:59:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:04.765 00:59:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:33:05.024 malloc1 00:33:05.024 00:59:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:05.283 [2024-07-25 00:59:27.834236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:05.283 [2024-07-25 00:59:27.834549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:05.283 [2024-07-25 00:59:27.834707] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:33:05.283 [2024-07-25 00:59:27.834803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:05.283 [2024-07-25 00:59:27.837788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:05.283 [2024-07-25 00:59:27.838025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:05.283 pt1 00:33:05.283 00:59:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:33:05.283 00:59:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:33:05.283 00:59:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:33:05.283 00:59:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:33:05.283 00:59:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:33:05.283 00:59:27 
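Each member of the superblock test fixture built here is a 32 MiB malloc bdev (65536 blocks of 512 bytes) wrapped by a passthru bdev with a fixed UUID; three of these are then assembled into a raid5f volume with a superblock via the -s flag, as the bdev_raid_create call later in this trace shows. A condensed sketch of that setup using the same RPCs (the loop form is illustrative; the script itself builds the bdev lists one entry at a time):

  # Sketch: stack pt1..pt3 (passthru over malloc) and assemble raid_bdev1 with
  # a superblock, matching the RPC sequence in this trace.
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  for i in 1 2 3; do
      rpc bdev_malloc_create 32 512 -b "malloc$i"
      rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
          -u "00000000-0000-0000-0000-00000000000$i"
  done
  rpc bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s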
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:05.283 00:59:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:33:05.283 00:59:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:05.283 00:59:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:33:05.542 malloc2 00:33:05.542 00:59:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:05.800 [2024-07-25 00:59:28.379722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:05.800 [2024-07-25 00:59:28.380083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:05.800 [2024-07-25 00:59:28.380234] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:33:05.800 [2024-07-25 00:59:28.380337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:05.800 [2024-07-25 00:59:28.383043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:05.800 [2024-07-25 00:59:28.383225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:05.800 pt2 00:33:05.800 00:59:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:33:05.800 00:59:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:33:05.800 00:59:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:33:05.800 00:59:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:33:05.800 00:59:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:33:05.801 00:59:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:05.801 00:59:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:33:05.801 00:59:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:05.801 00:59:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:33:06.059 malloc3 00:33:06.059 00:59:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:33:06.317 [2024-07-25 00:59:28.859609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:33:06.317 [2024-07-25 00:59:28.859874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:06.317 [2024-07-25 00:59:28.860016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:33:06.317 [2024-07-25 00:59:28.860164] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:06.317 [2024-07-25 00:59:28.862864] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:06.317 [2024-07-25 00:59:28.863055] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:33:06.317 pt3 00:33:06.317 00:59:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:33:06.317 00:59:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:33:06.317 00:59:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:33:06.576 [2024-07-25 00:59:29.087768] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:06.576 [2024-07-25 00:59:29.090372] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:06.576 [2024-07-25 00:59:29.090606] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:33:06.576 [2024-07-25 00:59:29.090876] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:33:06.576 [2024-07-25 00:59:29.091004] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:33:06.576 [2024-07-25 00:59:29.091193] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:33:06.576 [2024-07-25 00:59:29.099115] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:33:06.576 [2024-07-25 00:59:29.099269] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:33:06.576 [2024-07-25 00:59:29.099563] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:06.576 00:59:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:06.576 00:59:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:06.576 00:59:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:06.576 00:59:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:06.576 00:59:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:06.576 00:59:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:06.576 00:59:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:06.576 00:59:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:06.576 00:59:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:06.576 00:59:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:06.576 00:59:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:06.576 00:59:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:06.833 00:59:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:06.833 "name": "raid_bdev1", 00:33:06.833 "uuid": "735c30c4-7906-4820-8956-580162289f1c", 00:33:06.833 "strip_size_kb": 64, 00:33:06.833 "state": "online", 00:33:06.833 "raid_level": "raid5f", 00:33:06.833 "superblock": true, 00:33:06.833 "num_base_bdevs": 3, 00:33:06.833 "num_base_bdevs_discovered": 3, 00:33:06.833 "num_base_bdevs_operational": 3, 00:33:06.833 
"base_bdevs_list": [ 00:33:06.833 { 00:33:06.833 "name": "pt1", 00:33:06.833 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:06.834 "is_configured": true, 00:33:06.834 "data_offset": 2048, 00:33:06.834 "data_size": 63488 00:33:06.834 }, 00:33:06.834 { 00:33:06.834 "name": "pt2", 00:33:06.834 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:06.834 "is_configured": true, 00:33:06.834 "data_offset": 2048, 00:33:06.834 "data_size": 63488 00:33:06.834 }, 00:33:06.834 { 00:33:06.834 "name": "pt3", 00:33:06.834 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:06.834 "is_configured": true, 00:33:06.834 "data_offset": 2048, 00:33:06.834 "data_size": 63488 00:33:06.834 } 00:33:06.834 ] 00:33:06.834 }' 00:33:06.834 00:59:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:06.834 00:59:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.401 00:59:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:33:07.401 00:59:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:33:07.401 00:59:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:07.401 00:59:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:07.401 00:59:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:07.401 00:59:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:33:07.401 00:59:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:07.401 00:59:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:07.660 [2024-07-25 00:59:30.141160] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:07.660 00:59:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:07.660 "name": "raid_bdev1", 00:33:07.660 "aliases": [ 00:33:07.660 "735c30c4-7906-4820-8956-580162289f1c" 00:33:07.660 ], 00:33:07.660 "product_name": "Raid Volume", 00:33:07.660 "block_size": 512, 00:33:07.660 "num_blocks": 126976, 00:33:07.660 "uuid": "735c30c4-7906-4820-8956-580162289f1c", 00:33:07.660 "assigned_rate_limits": { 00:33:07.660 "rw_ios_per_sec": 0, 00:33:07.660 "rw_mbytes_per_sec": 0, 00:33:07.660 "r_mbytes_per_sec": 0, 00:33:07.660 "w_mbytes_per_sec": 0 00:33:07.660 }, 00:33:07.660 "claimed": false, 00:33:07.660 "zoned": false, 00:33:07.660 "supported_io_types": { 00:33:07.660 "read": true, 00:33:07.660 "write": true, 00:33:07.660 "unmap": false, 00:33:07.660 "flush": false, 00:33:07.660 "reset": true, 00:33:07.660 "nvme_admin": false, 00:33:07.660 "nvme_io": false, 00:33:07.660 "nvme_io_md": false, 00:33:07.660 "write_zeroes": true, 00:33:07.660 "zcopy": false, 00:33:07.660 "get_zone_info": false, 00:33:07.660 "zone_management": false, 00:33:07.660 "zone_append": false, 00:33:07.660 "compare": false, 00:33:07.660 "compare_and_write": false, 00:33:07.660 "abort": false, 00:33:07.660 "seek_hole": false, 00:33:07.660 "seek_data": false, 00:33:07.660 "copy": false, 00:33:07.660 "nvme_iov_md": false 00:33:07.660 }, 00:33:07.660 "driver_specific": { 00:33:07.660 "raid": { 00:33:07.660 "uuid": "735c30c4-7906-4820-8956-580162289f1c", 00:33:07.660 "strip_size_kb": 64, 00:33:07.660 "state": "online", 00:33:07.660 "raid_level": "raid5f", 
00:33:07.660 "superblock": true, 00:33:07.660 "num_base_bdevs": 3, 00:33:07.660 "num_base_bdevs_discovered": 3, 00:33:07.660 "num_base_bdevs_operational": 3, 00:33:07.660 "base_bdevs_list": [ 00:33:07.660 { 00:33:07.660 "name": "pt1", 00:33:07.660 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:07.660 "is_configured": true, 00:33:07.660 "data_offset": 2048, 00:33:07.660 "data_size": 63488 00:33:07.660 }, 00:33:07.660 { 00:33:07.660 "name": "pt2", 00:33:07.660 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:07.660 "is_configured": true, 00:33:07.660 "data_offset": 2048, 00:33:07.660 "data_size": 63488 00:33:07.660 }, 00:33:07.660 { 00:33:07.660 "name": "pt3", 00:33:07.660 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:07.660 "is_configured": true, 00:33:07.660 "data_offset": 2048, 00:33:07.660 "data_size": 63488 00:33:07.660 } 00:33:07.660 ] 00:33:07.660 } 00:33:07.660 } 00:33:07.660 }' 00:33:07.660 00:59:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:07.660 00:59:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:33:07.660 pt2 00:33:07.660 pt3' 00:33:07.660 00:59:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:07.660 00:59:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:33:07.660 00:59:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:07.919 00:59:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:07.919 "name": "pt1", 00:33:07.919 "aliases": [ 00:33:07.919 "00000000-0000-0000-0000-000000000001" 00:33:07.919 ], 00:33:07.919 "product_name": "passthru", 00:33:07.919 "block_size": 512, 00:33:07.919 "num_blocks": 65536, 00:33:07.919 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:07.919 "assigned_rate_limits": { 00:33:07.919 "rw_ios_per_sec": 0, 00:33:07.919 "rw_mbytes_per_sec": 0, 00:33:07.919 "r_mbytes_per_sec": 0, 00:33:07.919 "w_mbytes_per_sec": 0 00:33:07.919 }, 00:33:07.919 "claimed": true, 00:33:07.919 "claim_type": "exclusive_write", 00:33:07.919 "zoned": false, 00:33:07.919 "supported_io_types": { 00:33:07.919 "read": true, 00:33:07.919 "write": true, 00:33:07.919 "unmap": true, 00:33:07.919 "flush": true, 00:33:07.919 "reset": true, 00:33:07.919 "nvme_admin": false, 00:33:07.919 "nvme_io": false, 00:33:07.919 "nvme_io_md": false, 00:33:07.919 "write_zeroes": true, 00:33:07.919 "zcopy": true, 00:33:07.919 "get_zone_info": false, 00:33:07.919 "zone_management": false, 00:33:07.919 "zone_append": false, 00:33:07.919 "compare": false, 00:33:07.919 "compare_and_write": false, 00:33:07.919 "abort": true, 00:33:07.919 "seek_hole": false, 00:33:07.919 "seek_data": false, 00:33:07.919 "copy": true, 00:33:07.919 "nvme_iov_md": false 00:33:07.919 }, 00:33:07.919 "memory_domains": [ 00:33:07.919 { 00:33:07.919 "dma_device_id": "system", 00:33:07.919 "dma_device_type": 1 00:33:07.919 }, 00:33:07.919 { 00:33:07.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:07.919 "dma_device_type": 2 00:33:07.919 } 00:33:07.919 ], 00:33:07.919 "driver_specific": { 00:33:07.919 "passthru": { 00:33:07.919 "name": "pt1", 00:33:07.919 "base_bdev_name": "malloc1" 00:33:07.919 } 00:33:07.919 } 00:33:07.919 }' 00:33:07.919 00:59:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
00:33:07.919 00:59:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:07.919 00:59:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:07.919 00:59:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:08.177 00:59:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:08.177 00:59:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:08.177 00:59:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:08.177 00:59:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:08.177 00:59:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:08.177 00:59:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:08.177 00:59:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:08.177 00:59:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:08.177 00:59:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:08.177 00:59:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:33:08.177 00:59:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:08.436 00:59:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:08.436 "name": "pt2", 00:33:08.436 "aliases": [ 00:33:08.436 "00000000-0000-0000-0000-000000000002" 00:33:08.436 ], 00:33:08.436 "product_name": "passthru", 00:33:08.436 "block_size": 512, 00:33:08.436 "num_blocks": 65536, 00:33:08.436 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:08.436 "assigned_rate_limits": { 00:33:08.436 "rw_ios_per_sec": 0, 00:33:08.436 "rw_mbytes_per_sec": 0, 00:33:08.436 "r_mbytes_per_sec": 0, 00:33:08.436 "w_mbytes_per_sec": 0 00:33:08.436 }, 00:33:08.436 "claimed": true, 00:33:08.436 "claim_type": "exclusive_write", 00:33:08.436 "zoned": false, 00:33:08.436 "supported_io_types": { 00:33:08.436 "read": true, 00:33:08.436 "write": true, 00:33:08.436 "unmap": true, 00:33:08.436 "flush": true, 00:33:08.436 "reset": true, 00:33:08.436 "nvme_admin": false, 00:33:08.436 "nvme_io": false, 00:33:08.436 "nvme_io_md": false, 00:33:08.436 "write_zeroes": true, 00:33:08.436 "zcopy": true, 00:33:08.436 "get_zone_info": false, 00:33:08.436 "zone_management": false, 00:33:08.436 "zone_append": false, 00:33:08.436 "compare": false, 00:33:08.436 "compare_and_write": false, 00:33:08.436 "abort": true, 00:33:08.436 "seek_hole": false, 00:33:08.436 "seek_data": false, 00:33:08.436 "copy": true, 00:33:08.436 "nvme_iov_md": false 00:33:08.436 }, 00:33:08.436 "memory_domains": [ 00:33:08.436 { 00:33:08.436 "dma_device_id": "system", 00:33:08.436 "dma_device_type": 1 00:33:08.436 }, 00:33:08.436 { 00:33:08.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:08.436 "dma_device_type": 2 00:33:08.436 } 00:33:08.436 ], 00:33:08.436 "driver_specific": { 00:33:08.436 "passthru": { 00:33:08.436 "name": "pt2", 00:33:08.436 "base_bdev_name": "malloc2" 00:33:08.436 } 00:33:08.436 } 00:33:08.436 }' 00:33:08.436 00:59:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:08.436 00:59:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:08.695 00:59:31 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:08.695 00:59:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:08.695 00:59:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:08.695 00:59:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:08.695 00:59:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:08.695 00:59:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:08.695 00:59:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:08.695 00:59:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:08.953 00:59:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:08.953 00:59:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:08.953 00:59:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:08.953 00:59:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:33:08.953 00:59:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:09.212 00:59:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:09.212 "name": "pt3", 00:33:09.212 "aliases": [ 00:33:09.212 "00000000-0000-0000-0000-000000000003" 00:33:09.212 ], 00:33:09.212 "product_name": "passthru", 00:33:09.212 "block_size": 512, 00:33:09.212 "num_blocks": 65536, 00:33:09.212 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:09.212 "assigned_rate_limits": { 00:33:09.212 "rw_ios_per_sec": 0, 00:33:09.212 "rw_mbytes_per_sec": 0, 00:33:09.212 "r_mbytes_per_sec": 0, 00:33:09.212 "w_mbytes_per_sec": 0 00:33:09.212 }, 00:33:09.212 "claimed": true, 00:33:09.212 "claim_type": "exclusive_write", 00:33:09.212 "zoned": false, 00:33:09.212 "supported_io_types": { 00:33:09.212 "read": true, 00:33:09.212 "write": true, 00:33:09.212 "unmap": true, 00:33:09.212 "flush": true, 00:33:09.212 "reset": true, 00:33:09.212 "nvme_admin": false, 00:33:09.212 "nvme_io": false, 00:33:09.212 "nvme_io_md": false, 00:33:09.212 "write_zeroes": true, 00:33:09.212 "zcopy": true, 00:33:09.212 "get_zone_info": false, 00:33:09.212 "zone_management": false, 00:33:09.212 "zone_append": false, 00:33:09.212 "compare": false, 00:33:09.212 "compare_and_write": false, 00:33:09.212 "abort": true, 00:33:09.212 "seek_hole": false, 00:33:09.212 "seek_data": false, 00:33:09.212 "copy": true, 00:33:09.212 "nvme_iov_md": false 00:33:09.212 }, 00:33:09.212 "memory_domains": [ 00:33:09.212 { 00:33:09.212 "dma_device_id": "system", 00:33:09.212 "dma_device_type": 1 00:33:09.212 }, 00:33:09.212 { 00:33:09.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:09.212 "dma_device_type": 2 00:33:09.212 } 00:33:09.212 ], 00:33:09.212 "driver_specific": { 00:33:09.212 "passthru": { 00:33:09.212 "name": "pt3", 00:33:09.212 "base_bdev_name": "malloc3" 00:33:09.212 } 00:33:09.212 } 00:33:09.212 }' 00:33:09.212 00:59:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:09.212 00:59:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:09.212 00:59:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:09.212 00:59:31 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:09.212 00:59:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:09.470 00:59:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:09.470 00:59:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:09.470 00:59:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:09.470 00:59:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:09.470 00:59:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:09.470 00:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:09.470 00:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:09.470 00:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:33:09.470 00:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:09.728 [2024-07-25 00:59:32.349707] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:09.728 00:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=735c30c4-7906-4820-8956-580162289f1c 00:33:09.728 00:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 735c30c4-7906-4820-8956-580162289f1c ']' 00:33:09.728 00:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:09.987 [2024-07-25 00:59:32.609588] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:09.987 [2024-07-25 00:59:32.609765] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:09.987 [2024-07-25 00:59:32.609980] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:09.987 [2024-07-25 00:59:32.610146] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:09.987 [2024-07-25 00:59:32.610241] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:33:09.987 00:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:09.987 00:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:33:10.245 00:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:33:10.245 00:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:33:10.245 00:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:33:10.245 00:59:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:33:10.503 00:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:33:10.503 00:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:33:10.762 00:59:33 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:33:10.762 00:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:33:11.020 00:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:33:11.020 00:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:33:11.278 00:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:33:11.278 00:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:33:11.278 00:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:33:11.278 00:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:33:11.278 00:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:11.278 00:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:11.278 00:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:11.278 00:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:11.278 00:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:11.278 00:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:11.278 00:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:11.278 00:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:33:11.278 00:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:33:11.278 [2024-07-25 00:59:33.917887] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:33:11.278 [2024-07-25 00:59:33.921719] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:33:11.278 [2024-07-25 00:59:33.921927] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:33:11.278 [2024-07-25 00:59:33.922015] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:33:11.278 [2024-07-25 00:59:33.922286] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:33:11.279 [2024-07-25 00:59:33.922432] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:33:11.279 [2024-07-25 00:59:33.922492] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete 
raid bdev: raid_bdev1 00:33:11.279 [2024-07-25 00:59:33.922582] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:33:11.279 request: 00:33:11.279 { 00:33:11.279 "name": "raid_bdev1", 00:33:11.279 "raid_level": "raid5f", 00:33:11.279 "base_bdevs": [ 00:33:11.279 "malloc1", 00:33:11.279 "malloc2", 00:33:11.279 "malloc3" 00:33:11.279 ], 00:33:11.279 "strip_size_kb": 64, 00:33:11.279 "superblock": false, 00:33:11.279 "method": "bdev_raid_create", 00:33:11.279 "req_id": 1 00:33:11.279 } 00:33:11.279 Got JSON-RPC error response 00:33:11.279 response: 00:33:11.279 { 00:33:11.279 "code": -17, 00:33:11.279 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:33:11.279 } 00:33:11.537 00:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:33:11.537 00:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:11.537 00:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:11.537 00:59:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:11.537 00:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:33:11.537 00:59:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:11.537 00:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:33:11.537 00:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:33:11.537 00:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:11.795 [2024-07-25 00:59:34.338205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:11.795 [2024-07-25 00:59:34.338448] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:11.795 [2024-07-25 00:59:34.338545] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:33:11.795 [2024-07-25 00:59:34.338652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:11.795 [2024-07-25 00:59:34.341199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:11.795 [2024-07-25 00:59:34.341361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:11.795 [2024-07-25 00:59:34.341552] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:11.795 [2024-07-25 00:59:34.341685] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:11.795 pt1 00:33:11.795 00:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:33:11.795 00:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:11.795 00:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:11.795 00:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:11.796 00:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:11.796 00:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:33:11.796 00:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:11.796 00:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:11.796 00:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:11.796 00:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:11.796 00:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:11.796 00:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:12.053 00:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:12.053 "name": "raid_bdev1", 00:33:12.053 "uuid": "735c30c4-7906-4820-8956-580162289f1c", 00:33:12.053 "strip_size_kb": 64, 00:33:12.053 "state": "configuring", 00:33:12.053 "raid_level": "raid5f", 00:33:12.053 "superblock": true, 00:33:12.053 "num_base_bdevs": 3, 00:33:12.054 "num_base_bdevs_discovered": 1, 00:33:12.054 "num_base_bdevs_operational": 3, 00:33:12.054 "base_bdevs_list": [ 00:33:12.054 { 00:33:12.054 "name": "pt1", 00:33:12.054 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:12.054 "is_configured": true, 00:33:12.054 "data_offset": 2048, 00:33:12.054 "data_size": 63488 00:33:12.054 }, 00:33:12.054 { 00:33:12.054 "name": null, 00:33:12.054 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:12.054 "is_configured": false, 00:33:12.054 "data_offset": 2048, 00:33:12.054 "data_size": 63488 00:33:12.054 }, 00:33:12.054 { 00:33:12.054 "name": null, 00:33:12.054 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:12.054 "is_configured": false, 00:33:12.054 "data_offset": 2048, 00:33:12.054 "data_size": 63488 00:33:12.054 } 00:33:12.054 ] 00:33:12.054 }' 00:33:12.054 00:59:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:12.054 00:59:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:12.619 00:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:33:12.619 00:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:12.877 [2024-07-25 00:59:35.390565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:12.877 [2024-07-25 00:59:35.390839] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:12.877 [2024-07-25 00:59:35.390912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:33:12.877 [2024-07-25 00:59:35.391009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:12.877 [2024-07-25 00:59:35.391529] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:12.877 [2024-07-25 00:59:35.391675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:12.877 [2024-07-25 00:59:35.392006] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:12.877 [2024-07-25 00:59:35.392064] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:12.877 pt2 00:33:12.877 00:59:35 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:33:13.135 [2024-07-25 00:59:35.598666] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:33:13.135 00:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:33:13.135 00:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:13.135 00:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:13.135 00:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:13.135 00:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:13.135 00:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:13.135 00:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:13.135 00:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:13.135 00:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:13.135 00:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:13.135 00:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:13.135 00:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:13.392 00:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:13.392 "name": "raid_bdev1", 00:33:13.392 "uuid": "735c30c4-7906-4820-8956-580162289f1c", 00:33:13.392 "strip_size_kb": 64, 00:33:13.392 "state": "configuring", 00:33:13.392 "raid_level": "raid5f", 00:33:13.392 "superblock": true, 00:33:13.392 "num_base_bdevs": 3, 00:33:13.392 "num_base_bdevs_discovered": 1, 00:33:13.392 "num_base_bdevs_operational": 3, 00:33:13.392 "base_bdevs_list": [ 00:33:13.392 { 00:33:13.392 "name": "pt1", 00:33:13.392 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:13.392 "is_configured": true, 00:33:13.392 "data_offset": 2048, 00:33:13.392 "data_size": 63488 00:33:13.392 }, 00:33:13.392 { 00:33:13.392 "name": null, 00:33:13.392 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:13.392 "is_configured": false, 00:33:13.392 "data_offset": 2048, 00:33:13.392 "data_size": 63488 00:33:13.392 }, 00:33:13.392 { 00:33:13.392 "name": null, 00:33:13.392 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:13.392 "is_configured": false, 00:33:13.392 "data_offset": 2048, 00:33:13.392 "data_size": 63488 00:33:13.392 } 00:33:13.392 ] 00:33:13.392 }' 00:33:13.392 00:59:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:13.392 00:59:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:13.993 00:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:33:13.993 00:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:33:13.993 00:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:14.252 [2024-07-25 00:59:36.670886] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:14.252 [2024-07-25 00:59:36.671168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:14.252 [2024-07-25 00:59:36.671238] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:33:14.252 [2024-07-25 00:59:36.671337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:14.252 [2024-07-25 00:59:36.671887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:14.252 [2024-07-25 00:59:36.672041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:14.252 [2024-07-25 00:59:36.672259] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:14.252 [2024-07-25 00:59:36.672382] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:14.252 pt2 00:33:14.252 00:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:33:14.252 00:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:33:14.252 00:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:33:14.510 [2024-07-25 00:59:36.935022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:33:14.510 [2024-07-25 00:59:36.935237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:14.510 [2024-07-25 00:59:36.935342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:33:14.510 [2024-07-25 00:59:36.935486] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:14.510 [2024-07-25 00:59:36.936036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:14.510 [2024-07-25 00:59:36.936176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:33:14.510 [2024-07-25 00:59:36.936432] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:33:14.510 [2024-07-25 00:59:36.936568] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:33:14.510 [2024-07-25 00:59:36.936790] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:33:14.510 [2024-07-25 00:59:36.936900] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:33:14.510 [2024-07-25 00:59:36.937032] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:33:14.510 [2024-07-25 00:59:36.942940] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:33:14.510 [2024-07-25 00:59:36.943084] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:33:14.510 [2024-07-25 00:59:36.943364] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:14.510 pt3 00:33:14.510 00:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:33:14.510 00:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:33:14.510 00:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:14.510 00:59:36 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:14.511 00:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:14.511 00:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:14.511 00:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:14.511 00:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:14.511 00:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:14.511 00:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:14.511 00:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:14.511 00:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:14.511 00:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:14.511 00:59:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:14.769 00:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:14.769 "name": "raid_bdev1", 00:33:14.769 "uuid": "735c30c4-7906-4820-8956-580162289f1c", 00:33:14.769 "strip_size_kb": 64, 00:33:14.769 "state": "online", 00:33:14.769 "raid_level": "raid5f", 00:33:14.769 "superblock": true, 00:33:14.769 "num_base_bdevs": 3, 00:33:14.769 "num_base_bdevs_discovered": 3, 00:33:14.769 "num_base_bdevs_operational": 3, 00:33:14.769 "base_bdevs_list": [ 00:33:14.769 { 00:33:14.769 "name": "pt1", 00:33:14.769 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:14.769 "is_configured": true, 00:33:14.769 "data_offset": 2048, 00:33:14.769 "data_size": 63488 00:33:14.769 }, 00:33:14.769 { 00:33:14.769 "name": "pt2", 00:33:14.769 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:14.769 "is_configured": true, 00:33:14.769 "data_offset": 2048, 00:33:14.769 "data_size": 63488 00:33:14.769 }, 00:33:14.769 { 00:33:14.769 "name": "pt3", 00:33:14.769 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:14.769 "is_configured": true, 00:33:14.769 "data_offset": 2048, 00:33:14.769 "data_size": 63488 00:33:14.769 } 00:33:14.769 ] 00:33:14.769 }' 00:33:14.769 00:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:14.769 00:59:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:15.337 00:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:33:15.337 00:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:33:15.337 00:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:15.337 00:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:15.337 00:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:15.337 00:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:33:15.337 00:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:15.337 00:59:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # 
jq '.[]' 00:33:15.596 [2024-07-25 00:59:38.011705] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:15.596 00:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:15.596 "name": "raid_bdev1", 00:33:15.596 "aliases": [ 00:33:15.596 "735c30c4-7906-4820-8956-580162289f1c" 00:33:15.596 ], 00:33:15.596 "product_name": "Raid Volume", 00:33:15.596 "block_size": 512, 00:33:15.596 "num_blocks": 126976, 00:33:15.596 "uuid": "735c30c4-7906-4820-8956-580162289f1c", 00:33:15.596 "assigned_rate_limits": { 00:33:15.596 "rw_ios_per_sec": 0, 00:33:15.596 "rw_mbytes_per_sec": 0, 00:33:15.596 "r_mbytes_per_sec": 0, 00:33:15.596 "w_mbytes_per_sec": 0 00:33:15.596 }, 00:33:15.596 "claimed": false, 00:33:15.596 "zoned": false, 00:33:15.596 "supported_io_types": { 00:33:15.596 "read": true, 00:33:15.596 "write": true, 00:33:15.596 "unmap": false, 00:33:15.596 "flush": false, 00:33:15.596 "reset": true, 00:33:15.596 "nvme_admin": false, 00:33:15.596 "nvme_io": false, 00:33:15.596 "nvme_io_md": false, 00:33:15.596 "write_zeroes": true, 00:33:15.596 "zcopy": false, 00:33:15.596 "get_zone_info": false, 00:33:15.596 "zone_management": false, 00:33:15.596 "zone_append": false, 00:33:15.596 "compare": false, 00:33:15.596 "compare_and_write": false, 00:33:15.596 "abort": false, 00:33:15.596 "seek_hole": false, 00:33:15.596 "seek_data": false, 00:33:15.596 "copy": false, 00:33:15.596 "nvme_iov_md": false 00:33:15.596 }, 00:33:15.596 "driver_specific": { 00:33:15.596 "raid": { 00:33:15.596 "uuid": "735c30c4-7906-4820-8956-580162289f1c", 00:33:15.596 "strip_size_kb": 64, 00:33:15.596 "state": "online", 00:33:15.596 "raid_level": "raid5f", 00:33:15.596 "superblock": true, 00:33:15.596 "num_base_bdevs": 3, 00:33:15.596 "num_base_bdevs_discovered": 3, 00:33:15.596 "num_base_bdevs_operational": 3, 00:33:15.596 "base_bdevs_list": [ 00:33:15.596 { 00:33:15.596 "name": "pt1", 00:33:15.596 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:15.596 "is_configured": true, 00:33:15.596 "data_offset": 2048, 00:33:15.596 "data_size": 63488 00:33:15.596 }, 00:33:15.596 { 00:33:15.596 "name": "pt2", 00:33:15.596 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:15.596 "is_configured": true, 00:33:15.596 "data_offset": 2048, 00:33:15.596 "data_size": 63488 00:33:15.596 }, 00:33:15.596 { 00:33:15.596 "name": "pt3", 00:33:15.596 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:15.596 "is_configured": true, 00:33:15.596 "data_offset": 2048, 00:33:15.596 "data_size": 63488 00:33:15.596 } 00:33:15.596 ] 00:33:15.596 } 00:33:15.596 } 00:33:15.596 }' 00:33:15.596 00:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:15.596 00:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:33:15.596 pt2 00:33:15.596 pt3' 00:33:15.596 00:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:15.596 00:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:33:15.596 00:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:15.855 00:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:15.855 "name": "pt1", 00:33:15.855 "aliases": [ 00:33:15.855 "00000000-0000-0000-0000-000000000001" 00:33:15.855 ], 
00:33:15.855 "product_name": "passthru", 00:33:15.855 "block_size": 512, 00:33:15.855 "num_blocks": 65536, 00:33:15.855 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:15.855 "assigned_rate_limits": { 00:33:15.855 "rw_ios_per_sec": 0, 00:33:15.855 "rw_mbytes_per_sec": 0, 00:33:15.855 "r_mbytes_per_sec": 0, 00:33:15.855 "w_mbytes_per_sec": 0 00:33:15.855 }, 00:33:15.855 "claimed": true, 00:33:15.855 "claim_type": "exclusive_write", 00:33:15.855 "zoned": false, 00:33:15.855 "supported_io_types": { 00:33:15.855 "read": true, 00:33:15.855 "write": true, 00:33:15.855 "unmap": true, 00:33:15.855 "flush": true, 00:33:15.855 "reset": true, 00:33:15.855 "nvme_admin": false, 00:33:15.855 "nvme_io": false, 00:33:15.855 "nvme_io_md": false, 00:33:15.855 "write_zeroes": true, 00:33:15.855 "zcopy": true, 00:33:15.855 "get_zone_info": false, 00:33:15.855 "zone_management": false, 00:33:15.855 "zone_append": false, 00:33:15.856 "compare": false, 00:33:15.856 "compare_and_write": false, 00:33:15.856 "abort": true, 00:33:15.856 "seek_hole": false, 00:33:15.856 "seek_data": false, 00:33:15.856 "copy": true, 00:33:15.856 "nvme_iov_md": false 00:33:15.856 }, 00:33:15.856 "memory_domains": [ 00:33:15.856 { 00:33:15.856 "dma_device_id": "system", 00:33:15.856 "dma_device_type": 1 00:33:15.856 }, 00:33:15.856 { 00:33:15.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:15.856 "dma_device_type": 2 00:33:15.856 } 00:33:15.856 ], 00:33:15.856 "driver_specific": { 00:33:15.856 "passthru": { 00:33:15.856 "name": "pt1", 00:33:15.856 "base_bdev_name": "malloc1" 00:33:15.856 } 00:33:15.856 } 00:33:15.856 }' 00:33:15.856 00:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:15.856 00:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:15.856 00:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:15.856 00:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:15.856 00:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:16.114 00:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:16.114 00:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:16.114 00:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:16.114 00:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:16.114 00:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:16.114 00:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:16.114 00:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:16.114 00:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:16.114 00:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:33:16.114 00:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:16.373 00:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:16.373 "name": "pt2", 00:33:16.373 "aliases": [ 00:33:16.373 "00000000-0000-0000-0000-000000000002" 00:33:16.373 ], 00:33:16.373 "product_name": "passthru", 00:33:16.373 "block_size": 512, 00:33:16.373 "num_blocks": 65536, 00:33:16.373 
"uuid": "00000000-0000-0000-0000-000000000002", 00:33:16.373 "assigned_rate_limits": { 00:33:16.373 "rw_ios_per_sec": 0, 00:33:16.373 "rw_mbytes_per_sec": 0, 00:33:16.373 "r_mbytes_per_sec": 0, 00:33:16.373 "w_mbytes_per_sec": 0 00:33:16.373 }, 00:33:16.373 "claimed": true, 00:33:16.373 "claim_type": "exclusive_write", 00:33:16.373 "zoned": false, 00:33:16.373 "supported_io_types": { 00:33:16.373 "read": true, 00:33:16.373 "write": true, 00:33:16.373 "unmap": true, 00:33:16.373 "flush": true, 00:33:16.373 "reset": true, 00:33:16.373 "nvme_admin": false, 00:33:16.373 "nvme_io": false, 00:33:16.373 "nvme_io_md": false, 00:33:16.373 "write_zeroes": true, 00:33:16.373 "zcopy": true, 00:33:16.373 "get_zone_info": false, 00:33:16.373 "zone_management": false, 00:33:16.373 "zone_append": false, 00:33:16.373 "compare": false, 00:33:16.373 "compare_and_write": false, 00:33:16.373 "abort": true, 00:33:16.373 "seek_hole": false, 00:33:16.373 "seek_data": false, 00:33:16.373 "copy": true, 00:33:16.373 "nvme_iov_md": false 00:33:16.373 }, 00:33:16.373 "memory_domains": [ 00:33:16.373 { 00:33:16.373 "dma_device_id": "system", 00:33:16.373 "dma_device_type": 1 00:33:16.373 }, 00:33:16.373 { 00:33:16.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:16.373 "dma_device_type": 2 00:33:16.373 } 00:33:16.373 ], 00:33:16.373 "driver_specific": { 00:33:16.373 "passthru": { 00:33:16.373 "name": "pt2", 00:33:16.373 "base_bdev_name": "malloc2" 00:33:16.373 } 00:33:16.373 } 00:33:16.373 }' 00:33:16.373 00:59:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:16.631 00:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:16.631 00:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:16.631 00:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:16.631 00:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:16.631 00:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:16.631 00:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:16.631 00:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:16.631 00:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:16.631 00:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:16.631 00:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:16.631 00:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:16.631 00:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:16.631 00:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:16.631 00:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:33:16.889 00:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:16.889 "name": "pt3", 00:33:16.889 "aliases": [ 00:33:16.889 "00000000-0000-0000-0000-000000000003" 00:33:16.889 ], 00:33:16.889 "product_name": "passthru", 00:33:16.889 "block_size": 512, 00:33:16.889 "num_blocks": 65536, 00:33:16.889 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:16.889 "assigned_rate_limits": { 00:33:16.890 "rw_ios_per_sec": 0, 
00:33:16.890 "rw_mbytes_per_sec": 0, 00:33:16.890 "r_mbytes_per_sec": 0, 00:33:16.890 "w_mbytes_per_sec": 0 00:33:16.890 }, 00:33:16.890 "claimed": true, 00:33:16.890 "claim_type": "exclusive_write", 00:33:16.890 "zoned": false, 00:33:16.890 "supported_io_types": { 00:33:16.890 "read": true, 00:33:16.890 "write": true, 00:33:16.890 "unmap": true, 00:33:16.890 "flush": true, 00:33:16.890 "reset": true, 00:33:16.890 "nvme_admin": false, 00:33:16.890 "nvme_io": false, 00:33:16.890 "nvme_io_md": false, 00:33:16.890 "write_zeroes": true, 00:33:16.890 "zcopy": true, 00:33:16.890 "get_zone_info": false, 00:33:16.890 "zone_management": false, 00:33:16.890 "zone_append": false, 00:33:16.890 "compare": false, 00:33:16.890 "compare_and_write": false, 00:33:16.890 "abort": true, 00:33:16.890 "seek_hole": false, 00:33:16.890 "seek_data": false, 00:33:16.890 "copy": true, 00:33:16.890 "nvme_iov_md": false 00:33:16.890 }, 00:33:16.890 "memory_domains": [ 00:33:16.890 { 00:33:16.890 "dma_device_id": "system", 00:33:16.890 "dma_device_type": 1 00:33:16.890 }, 00:33:16.890 { 00:33:16.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:16.890 "dma_device_type": 2 00:33:16.890 } 00:33:16.890 ], 00:33:16.890 "driver_specific": { 00:33:16.890 "passthru": { 00:33:16.890 "name": "pt3", 00:33:16.890 "base_bdev_name": "malloc3" 00:33:16.890 } 00:33:16.890 } 00:33:16.890 }' 00:33:16.890 00:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:17.149 00:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:17.149 00:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:17.149 00:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:17.149 00:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:17.149 00:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:17.149 00:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:17.149 00:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:17.149 00:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:17.149 00:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:17.407 00:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:17.407 00:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:17.407 00:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:17.407 00:59:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:33:17.407 [2024-07-25 00:59:40.059016] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:17.666 00:59:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 735c30c4-7906-4820-8956-580162289f1c '!=' 735c30c4-7906-4820-8956-580162289f1c ']' 00:33:17.666 00:59:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid5f 00:33:17.666 00:59:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:33:17.666 00:59:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:33:17.666 00:59:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@492 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:33:17.666 [2024-07-25 00:59:40.262923] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:33:17.666 00:59:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:33:17.666 00:59:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:17.666 00:59:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:17.666 00:59:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:17.666 00:59:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:17.666 00:59:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:17.666 00:59:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:17.666 00:59:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:17.666 00:59:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:17.666 00:59:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:17.666 00:59:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:17.666 00:59:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:17.924 00:59:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:17.924 "name": "raid_bdev1", 00:33:17.924 "uuid": "735c30c4-7906-4820-8956-580162289f1c", 00:33:17.924 "strip_size_kb": 64, 00:33:17.924 "state": "online", 00:33:17.924 "raid_level": "raid5f", 00:33:17.924 "superblock": true, 00:33:17.924 "num_base_bdevs": 3, 00:33:17.924 "num_base_bdevs_discovered": 2, 00:33:17.924 "num_base_bdevs_operational": 2, 00:33:17.924 "base_bdevs_list": [ 00:33:17.924 { 00:33:17.924 "name": null, 00:33:17.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:17.924 "is_configured": false, 00:33:17.924 "data_offset": 2048, 00:33:17.924 "data_size": 63488 00:33:17.924 }, 00:33:17.924 { 00:33:17.924 "name": "pt2", 00:33:17.924 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:17.924 "is_configured": true, 00:33:17.924 "data_offset": 2048, 00:33:17.924 "data_size": 63488 00:33:17.924 }, 00:33:17.924 { 00:33:17.924 "name": "pt3", 00:33:17.924 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:17.924 "is_configured": true, 00:33:17.924 "data_offset": 2048, 00:33:17.924 "data_size": 63488 00:33:17.924 } 00:33:17.924 ] 00:33:17.924 }' 00:33:17.924 00:59:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:17.924 00:59:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:18.491 00:59:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:18.749 [2024-07-25 00:59:41.283100] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:18.749 [2024-07-25 00:59:41.283315] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:18.749 [2024-07-25 00:59:41.283479] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:33:18.749 [2024-07-25 00:59:41.283575] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:18.749 [2024-07-25 00:59:41.283788] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:33:18.749 00:59:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:18.749 00:59:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:33:19.007 00:59:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:33:19.007 00:59:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:33:19.007 00:59:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:33:19.007 00:59:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:33:19.007 00:59:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:33:19.266 00:59:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:33:19.266 00:59:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:33:19.266 00:59:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:33:19.525 00:59:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:33:19.525 00:59:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:33:19.525 00:59:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:33:19.525 00:59:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:33:19.525 00:59:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:19.784 [2024-07-25 00:59:42.359320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:19.784 [2024-07-25 00:59:42.360075] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:19.784 [2024-07-25 00:59:42.360399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:33:19.784 [2024-07-25 00:59:42.360668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:19.784 [2024-07-25 00:59:42.363665] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:19.784 [2024-07-25 00:59:42.363970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:19.784 [2024-07-25 00:59:42.364364] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:19.784 [2024-07-25 00:59:42.364546] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:19.784 pt2 00:33:19.784 00:59:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:33:19.784 00:59:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:19.784 00:59:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 
-- # local expected_state=configuring 00:33:19.784 00:59:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:19.784 00:59:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:19.784 00:59:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:19.784 00:59:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:19.784 00:59:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:19.784 00:59:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:19.784 00:59:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:19.784 00:59:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:19.784 00:59:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:20.043 00:59:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:20.043 "name": "raid_bdev1", 00:33:20.043 "uuid": "735c30c4-7906-4820-8956-580162289f1c", 00:33:20.043 "strip_size_kb": 64, 00:33:20.043 "state": "configuring", 00:33:20.043 "raid_level": "raid5f", 00:33:20.043 "superblock": true, 00:33:20.043 "num_base_bdevs": 3, 00:33:20.043 "num_base_bdevs_discovered": 1, 00:33:20.043 "num_base_bdevs_operational": 2, 00:33:20.043 "base_bdevs_list": [ 00:33:20.043 { 00:33:20.043 "name": null, 00:33:20.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:20.043 "is_configured": false, 00:33:20.043 "data_offset": 2048, 00:33:20.043 "data_size": 63488 00:33:20.043 }, 00:33:20.043 { 00:33:20.043 "name": "pt2", 00:33:20.043 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:20.043 "is_configured": true, 00:33:20.043 "data_offset": 2048, 00:33:20.043 "data_size": 63488 00:33:20.043 }, 00:33:20.043 { 00:33:20.043 "name": null, 00:33:20.043 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:20.043 "is_configured": false, 00:33:20.043 "data_offset": 2048, 00:33:20.043 "data_size": 63488 00:33:20.043 } 00:33:20.043 ] 00:33:20.043 }' 00:33:20.043 00:59:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:20.043 00:59:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:20.611 00:59:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:33:20.611 00:59:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:33:20.611 00:59:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:33:20.611 00:59:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:33:20.870 [2024-07-25 00:59:43.396690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:33:20.870 [2024-07-25 00:59:43.397314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:20.870 [2024-07-25 00:59:43.397648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:33:20.870 [2024-07-25 00:59:43.397924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:20.870 [2024-07-25 
00:59:43.398659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:20.870 [2024-07-25 00:59:43.398938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:33:20.870 [2024-07-25 00:59:43.399302] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:33:20.870 [2024-07-25 00:59:43.399434] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:33:20.870 [2024-07-25 00:59:43.399637] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:33:20.870 [2024-07-25 00:59:43.399768] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:33:20.870 [2024-07-25 00:59:43.399939] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:33:20.870 [2024-07-25 00:59:43.405937] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:33:20.870 [2024-07-25 00:59:43.406089] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:33:20.870 [2024-07-25 00:59:43.406564] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:20.870 pt3 00:33:20.870 00:59:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:33:20.870 00:59:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:20.870 00:59:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:20.870 00:59:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:20.870 00:59:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:20.870 00:59:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:20.870 00:59:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:20.870 00:59:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:20.870 00:59:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:20.870 00:59:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:20.870 00:59:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:20.870 00:59:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:21.129 00:59:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:21.129 "name": "raid_bdev1", 00:33:21.129 "uuid": "735c30c4-7906-4820-8956-580162289f1c", 00:33:21.129 "strip_size_kb": 64, 00:33:21.129 "state": "online", 00:33:21.129 "raid_level": "raid5f", 00:33:21.129 "superblock": true, 00:33:21.129 "num_base_bdevs": 3, 00:33:21.129 "num_base_bdevs_discovered": 2, 00:33:21.129 "num_base_bdevs_operational": 2, 00:33:21.129 "base_bdevs_list": [ 00:33:21.129 { 00:33:21.129 "name": null, 00:33:21.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:21.129 "is_configured": false, 00:33:21.129 "data_offset": 2048, 00:33:21.129 "data_size": 63488 00:33:21.129 }, 00:33:21.129 { 00:33:21.129 "name": "pt2", 00:33:21.129 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:21.129 "is_configured": true, 
00:33:21.129 "data_offset": 2048, 00:33:21.129 "data_size": 63488 00:33:21.129 }, 00:33:21.129 { 00:33:21.129 "name": "pt3", 00:33:21.129 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:21.129 "is_configured": true, 00:33:21.129 "data_offset": 2048, 00:33:21.129 "data_size": 63488 00:33:21.129 } 00:33:21.129 ] 00:33:21.129 }' 00:33:21.129 00:59:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:21.129 00:59:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:21.696 00:59:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:21.955 [2024-07-25 00:59:44.443846] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:21.955 [2024-07-25 00:59:44.444064] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:21.955 [2024-07-25 00:59:44.444262] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:21.955 [2024-07-25 00:59:44.444402] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:21.955 [2024-07-25 00:59:44.444482] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:33:21.955 00:59:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:21.955 00:59:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:33:22.214 00:59:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:33:22.214 00:59:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:33:22.214 00:59:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:33:22.214 00:59:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:33:22.214 00:59:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:33:22.474 00:59:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:22.474 [2024-07-25 00:59:45.124010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:22.474 [2024-07-25 00:59:45.125072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:22.474 [2024-07-25 00:59:45.125575] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:33:22.733 [2024-07-25 00:59:45.125961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:22.733 [2024-07-25 00:59:45.129005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:22.733 [2024-07-25 00:59:45.129409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:22.733 [2024-07-25 00:59:45.129783] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:22.733 [2024-07-25 00:59:45.129968] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:22.733 [2024-07-25 00:59:45.130289] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number 
on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:33:22.733 [2024-07-25 00:59:45.130408] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:22.733 [2024-07-25 00:59:45.130466] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:33:22.733 [2024-07-25 00:59:45.130604] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:22.733 pt1 00:33:22.733 00:59:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:33:22.733 00:59:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:33:22.733 00:59:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:22.733 00:59:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:22.733 00:59:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:22.733 00:59:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:22.733 00:59:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:22.733 00:59:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:22.733 00:59:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:22.733 00:59:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:22.733 00:59:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:22.733 00:59:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:22.733 00:59:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:22.733 00:59:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:22.733 "name": "raid_bdev1", 00:33:22.733 "uuid": "735c30c4-7906-4820-8956-580162289f1c", 00:33:22.733 "strip_size_kb": 64, 00:33:22.733 "state": "configuring", 00:33:22.733 "raid_level": "raid5f", 00:33:22.733 "superblock": true, 00:33:22.733 "num_base_bdevs": 3, 00:33:22.733 "num_base_bdevs_discovered": 1, 00:33:22.733 "num_base_bdevs_operational": 2, 00:33:22.733 "base_bdevs_list": [ 00:33:22.733 { 00:33:22.733 "name": null, 00:33:22.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:22.733 "is_configured": false, 00:33:22.733 "data_offset": 2048, 00:33:22.733 "data_size": 63488 00:33:22.733 }, 00:33:22.733 { 00:33:22.733 "name": "pt2", 00:33:22.733 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:22.733 "is_configured": true, 00:33:22.733 "data_offset": 2048, 00:33:22.733 "data_size": 63488 00:33:22.733 }, 00:33:22.733 { 00:33:22.733 "name": null, 00:33:22.733 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:22.733 "is_configured": false, 00:33:22.733 "data_offset": 2048, 00:33:22.733 "data_size": 63488 00:33:22.733 } 00:33:22.733 ] 00:33:22.733 }' 00:33:22.733 00:59:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:22.733 00:59:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:23.300 00:59:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:33:23.300 00:59:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:33:23.559 00:59:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:33:23.559 00:59:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:33:23.818 [2024-07-25 00:59:46.398178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:33:23.818 [2024-07-25 00:59:46.398869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:23.818 [2024-07-25 00:59:46.399180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:33:23.818 [2024-07-25 00:59:46.399427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:23.818 [2024-07-25 00:59:46.400202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:23.818 [2024-07-25 00:59:46.400476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:33:23.818 [2024-07-25 00:59:46.400859] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:33:23.818 [2024-07-25 00:59:46.401018] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:33:23.818 [2024-07-25 00:59:46.401199] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:33:23.818 [2024-07-25 00:59:46.401350] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:33:23.818 [2024-07-25 00:59:46.401505] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:33:23.819 [2024-07-25 00:59:46.407589] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:33:23.819 [2024-07-25 00:59:46.407728] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:33:23.819 [2024-07-25 00:59:46.408110] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:23.819 pt3 00:33:23.819 00:59:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:33:23.819 00:59:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:23.819 00:59:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:23.819 00:59:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:23.819 00:59:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:23.819 00:59:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:23.819 00:59:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:23.819 00:59:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:23.819 00:59:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:23.819 00:59:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:23.819 00:59:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:23.819 00:59:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:24.078 00:59:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:24.078 "name": "raid_bdev1", 00:33:24.078 "uuid": "735c30c4-7906-4820-8956-580162289f1c", 00:33:24.078 "strip_size_kb": 64, 00:33:24.078 "state": "online", 00:33:24.078 "raid_level": "raid5f", 00:33:24.078 "superblock": true, 00:33:24.078 "num_base_bdevs": 3, 00:33:24.078 "num_base_bdevs_discovered": 2, 00:33:24.078 "num_base_bdevs_operational": 2, 00:33:24.078 "base_bdevs_list": [ 00:33:24.078 { 00:33:24.078 "name": null, 00:33:24.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:24.078 "is_configured": false, 00:33:24.078 "data_offset": 2048, 00:33:24.078 "data_size": 63488 00:33:24.078 }, 00:33:24.078 { 00:33:24.078 "name": "pt2", 00:33:24.078 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:24.078 "is_configured": true, 00:33:24.078 "data_offset": 2048, 00:33:24.078 "data_size": 63488 00:33:24.078 }, 00:33:24.078 { 00:33:24.078 "name": "pt3", 00:33:24.078 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:24.078 "is_configured": true, 00:33:24.078 "data_offset": 2048, 00:33:24.078 "data_size": 63488 00:33:24.078 } 00:33:24.078 ] 00:33:24.078 }' 00:33:24.078 00:59:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:24.078 00:59:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.644 00:59:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:33:24.644 00:59:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:33:24.902 00:59:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:33:24.903 00:59:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:24.903 00:59:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:33:25.161 [2024-07-25 00:59:47.580877] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:25.161 00:59:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 735c30c4-7906-4820-8956-580162289f1c '!=' 735c30c4-7906-4820-8956-580162289f1c ']' 00:33:25.161 00:59:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 152306 00:33:25.161 00:59:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 152306 ']' 00:33:25.161 00:59:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # kill -0 152306 00:33:25.161 00:59:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@953 -- # uname 00:33:25.161 00:59:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:25.161 00:59:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 152306 00:33:25.161 00:59:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:25.161 00:59:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:25.161 00:59:47 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 152306' 00:33:25.161 killing process with pid 152306 00:33:25.161 00:59:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@967 -- # kill 152306 00:33:25.161 [2024-07-25 00:59:47.632338] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:25.161 00:59:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # wait 152306 00:33:25.161 [2024-07-25 00:59:47.632516] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:25.161 [2024-07-25 00:59:47.632580] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:25.161 [2024-07-25 00:59:47.632589] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:33:25.419 [2024-07-25 00:59:47.950805] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:26.806 ************************************ 00:33:26.806 END TEST raid5f_superblock_test 00:33:26.806 ************************************ 00:33:26.806 00:59:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:33:26.806 00:33:26.806 real 0m22.987s 00:33:26.806 user 0m40.994s 00:33:26.806 sys 0m3.606s 00:33:26.806 00:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:26.806 00:59:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.806 00:59:49 bdev_raid -- bdev/bdev_raid.sh@889 -- # '[' true = true ']' 00:33:26.806 00:59:49 bdev_raid -- bdev/bdev_raid.sh@890 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:33:26.806 00:59:49 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:33:26.806 00:59:49 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:26.806 00:59:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:26.806 ************************************ 00:33:26.806 START TEST raid5f_rebuild_test 00:33:26.806 ************************************ 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid5f 3 false false true 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=3 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= 
num_base_bdevs )) 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=153048 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 153048 /var/tmp/spdk-raid.sock 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 153048 ']' 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:26.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:26.806 00:59:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:27.064 [2024-07-25 00:59:49.478545] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:33:27.064 [2024-07-25 00:59:49.478876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153048 ] 00:33:27.064 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:27.064 Zero copy mechanism will not be used. 
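(A condensed sketch — not the verbatim bdev_raid.sh helpers — of the RPC sequence this rebuild test drives over the bdevperf socket once it starts listening. The commands are paraphrased from the xtrace output that follows; here `rpc` stands in for /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock, and the comments are interpretive rather than quoted from the script.)

    # Build a 3-disk raid5f bdev out of malloc-backed passthru bdevs
    rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc
    rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    # ...repeated for BaseBdev2 and BaseBdev3...
    # The future rebuild target sits behind a delay bdev, presumably so the
    # rebuild does not finish before its progress can be observed
    rpc bdev_malloc_create 32 512 -b spare_malloc
    rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    rpc bdev_passthru_create -b spare_delay -p spare
    rpc bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1
    # State checks filter the raid bdev out of bdev_raid_get_bdevs with jq
    rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
    # Degrade the array, attach the spare, then poll until the rebuild completes
    rpc bdev_raid_remove_base_bdev BaseBdev1
    rpc bdev_raid_add_base_bdev raid_bdev1 spare
    # illustrative progress check; the script itself inspects .process.type / .process.target
    rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .process.progress.percent'
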
00:33:27.064 [2024-07-25 00:59:49.644238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:27.322 [2024-07-25 00:59:49.883841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:27.580 [2024-07-25 00:59:50.090563] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:27.838 00:59:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:27.838 00:59:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:33:27.838 00:59:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:27.838 00:59:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:33:28.097 BaseBdev1_malloc 00:33:28.097 00:59:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:28.097 [2024-07-25 00:59:50.739159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:28.097 [2024-07-25 00:59:50.739408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:28.097 [2024-07-25 00:59:50.739486] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:33:28.097 [2024-07-25 00:59:50.739590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:28.097 [2024-07-25 00:59:50.741881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:28.097 [2024-07-25 00:59:50.742036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:28.097 BaseBdev1 00:33:28.356 00:59:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:28.356 00:59:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:33:28.356 BaseBdev2_malloc 00:33:28.356 00:59:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:33:28.615 [2024-07-25 00:59:51.243992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:33:28.615 [2024-07-25 00:59:51.244247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:28.615 [2024-07-25 00:59:51.244321] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:33:28.615 [2024-07-25 00:59:51.244481] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:28.615 [2024-07-25 00:59:51.246735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:28.615 [2024-07-25 00:59:51.246891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:28.615 BaseBdev2 00:33:28.615 00:59:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:28.615 00:59:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:33:28.873 BaseBdev3_malloc 00:33:28.873 00:59:51 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:33:29.131 [2024-07-25 00:59:51.644224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:33:29.131 [2024-07-25 00:59:51.644464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:29.131 [2024-07-25 00:59:51.644578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:33:29.131 [2024-07-25 00:59:51.644671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:29.131 [2024-07-25 00:59:51.646967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:29.131 [2024-07-25 00:59:51.647119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:33:29.131 BaseBdev3 00:33:29.131 00:59:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:33:29.389 spare_malloc 00:33:29.389 00:59:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:33:29.647 spare_delay 00:33:29.647 00:59:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:29.647 [2024-07-25 00:59:52.224144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:29.647 [2024-07-25 00:59:52.224400] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:29.647 [2024-07-25 00:59:52.224472] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:33:29.647 [2024-07-25 00:59:52.224604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:29.647 [2024-07-25 00:59:52.226952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:29.647 [2024-07-25 00:59:52.227103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:29.647 spare 00:33:29.647 00:59:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:33:29.905 [2024-07-25 00:59:52.460271] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:29.905 [2024-07-25 00:59:52.462351] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:29.905 [2024-07-25 00:59:52.462523] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:29.905 [2024-07-25 00:59:52.462723] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:33:29.905 [2024-07-25 00:59:52.462764] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:33:29.905 [2024-07-25 00:59:52.463031] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:33:29.905 [2024-07-25 00:59:52.469174] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:33:29.905 [2024-07-25 00:59:52.469297] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:33:29.905 [2024-07-25 00:59:52.469633] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:29.905 00:59:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:29.905 00:59:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:29.905 00:59:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:29.905 00:59:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:29.905 00:59:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:29.905 00:59:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:29.905 00:59:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:29.905 00:59:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:29.905 00:59:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:29.905 00:59:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:29.905 00:59:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:29.905 00:59:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:30.163 00:59:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:30.163 "name": "raid_bdev1", 00:33:30.163 "uuid": "9561c3e9-cd40-4f39-8c76-20ed3a8ab279", 00:33:30.163 "strip_size_kb": 64, 00:33:30.163 "state": "online", 00:33:30.163 "raid_level": "raid5f", 00:33:30.163 "superblock": false, 00:33:30.163 "num_base_bdevs": 3, 00:33:30.163 "num_base_bdevs_discovered": 3, 00:33:30.163 "num_base_bdevs_operational": 3, 00:33:30.163 "base_bdevs_list": [ 00:33:30.163 { 00:33:30.163 "name": "BaseBdev1", 00:33:30.163 "uuid": "3ab61352-9b30-5ab9-9f27-6a0b970ea8ad", 00:33:30.163 "is_configured": true, 00:33:30.163 "data_offset": 0, 00:33:30.163 "data_size": 65536 00:33:30.163 }, 00:33:30.163 { 00:33:30.163 "name": "BaseBdev2", 00:33:30.163 "uuid": "3a9e5c00-bf0d-5e51-9294-f051d0f2bf61", 00:33:30.163 "is_configured": true, 00:33:30.163 "data_offset": 0, 00:33:30.163 "data_size": 65536 00:33:30.163 }, 00:33:30.163 { 00:33:30.163 "name": "BaseBdev3", 00:33:30.163 "uuid": "2e300972-4dbc-5456-8337-23b677cbfcf3", 00:33:30.163 "is_configured": true, 00:33:30.163 "data_offset": 0, 00:33:30.163 "data_size": 65536 00:33:30.163 } 00:33:30.163 ] 00:33:30.163 }' 00:33:30.163 00:59:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:30.163 00:59:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:30.730 00:59:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:30.730 00:59:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:33:30.730 [2024-07-25 00:59:53.356210] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:30.730 00:59:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=131072 00:33:30.730 00:59:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- 
# jq -r '.[].base_bdevs_list[0].data_offset' 00:33:30.730 00:59:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:31.294 00:59:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:33:31.294 00:59:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:33:31.294 00:59:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:33:31.294 00:59:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:33:31.294 00:59:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:33:31.294 00:59:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:31.294 00:59:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:33:31.294 00:59:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:31.294 00:59:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:33:31.294 00:59:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:31.294 00:59:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:33:31.294 00:59:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:31.294 00:59:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:31.294 00:59:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:33:31.294 [2024-07-25 00:59:53.816171] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:33:31.294 /dev/nbd0 00:33:31.294 00:59:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:31.294 00:59:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:31.294 00:59:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:33:31.294 00:59:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:33:31.294 00:59:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:31.295 00:59:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:31.295 00:59:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:33:31.295 00:59:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:33:31.295 00:59:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:31.295 00:59:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:31.295 00:59:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:31.295 1+0 records in 00:33:31.295 1+0 records out 00:33:31.295 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363296 s, 11.3 MB/s 00:33:31.295 00:59:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:31.295 00:59:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:33:31.295 00:59:53 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:31.295 00:59:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:31.295 00:59:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:33:31.295 00:59:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:31.295 00:59:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:31.295 00:59:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:33:31.295 00:59:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # write_unit_size=256 00:33:31.295 00:59:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # echo 128 00:33:31.295 00:59:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:33:31.861 512+0 records in 00:33:31.861 512+0 records out 00:33:31.861 67108864 bytes (67 MB, 64 MiB) copied, 0.412027 s, 163 MB/s 00:33:31.861 00:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:33:31.861 00:59:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:31.861 00:59:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:31.861 00:59:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:31.861 00:59:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:33:31.861 00:59:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:31.861 00:59:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:33:31.861 00:59:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:32.120 [2024-07-25 00:59:54.518515] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:32.120 00:59:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:32.120 00:59:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:32.120 00:59:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:32.120 00:59:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:32.120 00:59:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:32.120 00:59:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:33:32.120 00:59:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:33:32.120 00:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:33:32.120 [2024-07-25 00:59:54.694214] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:32.120 00:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:33:32.120 00:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:32.120 00:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:32.120 00:59:54 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:32.120 00:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:32.120 00:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:32.120 00:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:32.120 00:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:32.120 00:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:32.120 00:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:32.120 00:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:32.121 00:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:32.379 00:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:32.379 "name": "raid_bdev1", 00:33:32.379 "uuid": "9561c3e9-cd40-4f39-8c76-20ed3a8ab279", 00:33:32.379 "strip_size_kb": 64, 00:33:32.379 "state": "online", 00:33:32.379 "raid_level": "raid5f", 00:33:32.379 "superblock": false, 00:33:32.379 "num_base_bdevs": 3, 00:33:32.379 "num_base_bdevs_discovered": 2, 00:33:32.379 "num_base_bdevs_operational": 2, 00:33:32.379 "base_bdevs_list": [ 00:33:32.379 { 00:33:32.379 "name": null, 00:33:32.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:32.379 "is_configured": false, 00:33:32.379 "data_offset": 0, 00:33:32.379 "data_size": 65536 00:33:32.379 }, 00:33:32.379 { 00:33:32.379 "name": "BaseBdev2", 00:33:32.379 "uuid": "3a9e5c00-bf0d-5e51-9294-f051d0f2bf61", 00:33:32.379 "is_configured": true, 00:33:32.379 "data_offset": 0, 00:33:32.379 "data_size": 65536 00:33:32.379 }, 00:33:32.379 { 00:33:32.379 "name": "BaseBdev3", 00:33:32.379 "uuid": "2e300972-4dbc-5456-8337-23b677cbfcf3", 00:33:32.379 "is_configured": true, 00:33:32.379 "data_offset": 0, 00:33:32.379 "data_size": 65536 00:33:32.379 } 00:33:32.379 ] 00:33:32.379 }' 00:33:32.379 00:59:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:32.379 00:59:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:32.946 00:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:33.205 [2024-07-25 00:59:55.750486] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:33.205 [2024-07-25 00:59:55.765885] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b4e0 00:33:33.205 [2024-07-25 00:59:55.773749] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:33.205 00:59:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:33:34.141 00:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:34.141 00:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:34.141 00:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:34.141 00:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:34.141 00:59:56 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:34.141 00:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:34.141 00:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:34.400 00:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:34.400 "name": "raid_bdev1", 00:33:34.400 "uuid": "9561c3e9-cd40-4f39-8c76-20ed3a8ab279", 00:33:34.400 "strip_size_kb": 64, 00:33:34.400 "state": "online", 00:33:34.400 "raid_level": "raid5f", 00:33:34.400 "superblock": false, 00:33:34.400 "num_base_bdevs": 3, 00:33:34.400 "num_base_bdevs_discovered": 3, 00:33:34.400 "num_base_bdevs_operational": 3, 00:33:34.400 "process": { 00:33:34.400 "type": "rebuild", 00:33:34.400 "target": "spare", 00:33:34.400 "progress": { 00:33:34.400 "blocks": 22528, 00:33:34.400 "percent": 17 00:33:34.400 } 00:33:34.400 }, 00:33:34.400 "base_bdevs_list": [ 00:33:34.400 { 00:33:34.400 "name": "spare", 00:33:34.400 "uuid": "aab78f71-8fc9-5ecd-9a90-70c66b211d11", 00:33:34.400 "is_configured": true, 00:33:34.400 "data_offset": 0, 00:33:34.400 "data_size": 65536 00:33:34.400 }, 00:33:34.400 { 00:33:34.400 "name": "BaseBdev2", 00:33:34.400 "uuid": "3a9e5c00-bf0d-5e51-9294-f051d0f2bf61", 00:33:34.400 "is_configured": true, 00:33:34.400 "data_offset": 0, 00:33:34.400 "data_size": 65536 00:33:34.400 }, 00:33:34.400 { 00:33:34.400 "name": "BaseBdev3", 00:33:34.400 "uuid": "2e300972-4dbc-5456-8337-23b677cbfcf3", 00:33:34.400 "is_configured": true, 00:33:34.400 "data_offset": 0, 00:33:34.400 "data_size": 65536 00:33:34.400 } 00:33:34.400 ] 00:33:34.400 }' 00:33:34.400 00:59:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:34.400 00:59:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:34.400 00:59:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:34.659 00:59:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:34.659 00:59:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:33:34.659 [2024-07-25 00:59:57.247377] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:34.659 [2024-07-25 00:59:57.287577] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:34.659 [2024-07-25 00:59:57.287764] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:34.659 [2024-07-25 00:59:57.287812] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:34.659 [2024-07-25 00:59:57.287887] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:34.918 00:59:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:33:34.918 00:59:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:34.918 00:59:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:34.918 00:59:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:34.918 00:59:57 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:34.918 00:59:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:34.918 00:59:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:34.918 00:59:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:34.918 00:59:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:34.918 00:59:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:34.918 00:59:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:34.918 00:59:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:35.177 00:59:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:35.177 "name": "raid_bdev1", 00:33:35.177 "uuid": "9561c3e9-cd40-4f39-8c76-20ed3a8ab279", 00:33:35.177 "strip_size_kb": 64, 00:33:35.177 "state": "online", 00:33:35.177 "raid_level": "raid5f", 00:33:35.177 "superblock": false, 00:33:35.177 "num_base_bdevs": 3, 00:33:35.177 "num_base_bdevs_discovered": 2, 00:33:35.177 "num_base_bdevs_operational": 2, 00:33:35.177 "base_bdevs_list": [ 00:33:35.177 { 00:33:35.177 "name": null, 00:33:35.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:35.177 "is_configured": false, 00:33:35.177 "data_offset": 0, 00:33:35.177 "data_size": 65536 00:33:35.177 }, 00:33:35.177 { 00:33:35.177 "name": "BaseBdev2", 00:33:35.177 "uuid": "3a9e5c00-bf0d-5e51-9294-f051d0f2bf61", 00:33:35.177 "is_configured": true, 00:33:35.177 "data_offset": 0, 00:33:35.177 "data_size": 65536 00:33:35.177 }, 00:33:35.177 { 00:33:35.177 "name": "BaseBdev3", 00:33:35.177 "uuid": "2e300972-4dbc-5456-8337-23b677cbfcf3", 00:33:35.177 "is_configured": true, 00:33:35.177 "data_offset": 0, 00:33:35.177 "data_size": 65536 00:33:35.177 } 00:33:35.177 ] 00:33:35.177 }' 00:33:35.177 00:59:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:35.177 00:59:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:35.744 00:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:35.744 00:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:35.744 00:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:35.744 00:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:35.744 00:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:35.744 00:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:35.744 00:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:35.744 00:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:35.744 "name": "raid_bdev1", 00:33:35.745 "uuid": "9561c3e9-cd40-4f39-8c76-20ed3a8ab279", 00:33:35.745 "strip_size_kb": 64, 00:33:35.745 "state": "online", 00:33:35.745 "raid_level": "raid5f", 00:33:35.745 "superblock": false, 00:33:35.745 "num_base_bdevs": 3, 00:33:35.745 "num_base_bdevs_discovered": 2, 00:33:35.745 
"num_base_bdevs_operational": 2, 00:33:35.745 "base_bdevs_list": [ 00:33:35.745 { 00:33:35.745 "name": null, 00:33:35.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:35.745 "is_configured": false, 00:33:35.745 "data_offset": 0, 00:33:35.745 "data_size": 65536 00:33:35.745 }, 00:33:35.745 { 00:33:35.745 "name": "BaseBdev2", 00:33:35.745 "uuid": "3a9e5c00-bf0d-5e51-9294-f051d0f2bf61", 00:33:35.745 "is_configured": true, 00:33:35.745 "data_offset": 0, 00:33:35.745 "data_size": 65536 00:33:35.745 }, 00:33:35.745 { 00:33:35.745 "name": "BaseBdev3", 00:33:35.745 "uuid": "2e300972-4dbc-5456-8337-23b677cbfcf3", 00:33:35.745 "is_configured": true, 00:33:35.745 "data_offset": 0, 00:33:35.745 "data_size": 65536 00:33:35.745 } 00:33:35.745 ] 00:33:35.745 }' 00:33:35.745 00:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:36.039 00:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:36.039 00:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:36.039 00:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:36.039 00:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:36.298 [2024-07-25 00:59:58.741885] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:36.298 [2024-07-25 00:59:58.757858] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:33:36.298 [2024-07-25 00:59:58.766193] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:36.298 00:59:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:33:37.235 00:59:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:37.235 00:59:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:37.235 00:59:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:37.235 00:59:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:37.235 00:59:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:37.235 00:59:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:37.235 00:59:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:37.494 00:59:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:37.494 "name": "raid_bdev1", 00:33:37.494 "uuid": "9561c3e9-cd40-4f39-8c76-20ed3a8ab279", 00:33:37.494 "strip_size_kb": 64, 00:33:37.494 "state": "online", 00:33:37.494 "raid_level": "raid5f", 00:33:37.494 "superblock": false, 00:33:37.494 "num_base_bdevs": 3, 00:33:37.494 "num_base_bdevs_discovered": 3, 00:33:37.494 "num_base_bdevs_operational": 3, 00:33:37.494 "process": { 00:33:37.494 "type": "rebuild", 00:33:37.494 "target": "spare", 00:33:37.494 "progress": { 00:33:37.494 "blocks": 24576, 00:33:37.494 "percent": 18 00:33:37.494 } 00:33:37.494 }, 00:33:37.494 "base_bdevs_list": [ 00:33:37.494 { 00:33:37.494 "name": "spare", 00:33:37.494 "uuid": "aab78f71-8fc9-5ecd-9a90-70c66b211d11", 00:33:37.494 
"is_configured": true, 00:33:37.494 "data_offset": 0, 00:33:37.494 "data_size": 65536 00:33:37.494 }, 00:33:37.494 { 00:33:37.494 "name": "BaseBdev2", 00:33:37.494 "uuid": "3a9e5c00-bf0d-5e51-9294-f051d0f2bf61", 00:33:37.494 "is_configured": true, 00:33:37.494 "data_offset": 0, 00:33:37.494 "data_size": 65536 00:33:37.494 }, 00:33:37.494 { 00:33:37.494 "name": "BaseBdev3", 00:33:37.494 "uuid": "2e300972-4dbc-5456-8337-23b677cbfcf3", 00:33:37.494 "is_configured": true, 00:33:37.494 "data_offset": 0, 00:33:37.494 "data_size": 65536 00:33:37.494 } 00:33:37.494 ] 00:33:37.494 }' 00:33:37.494 00:59:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:37.494 01:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:37.494 01:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:37.494 01:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:37.494 01:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:33:37.494 01:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=3 00:33:37.494 01:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:33:37.494 01:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=1077 00:33:37.494 01:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:37.494 01:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:37.494 01:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:37.494 01:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:37.494 01:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:37.494 01:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:37.494 01:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:37.494 01:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:37.753 01:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:37.753 "name": "raid_bdev1", 00:33:37.753 "uuid": "9561c3e9-cd40-4f39-8c76-20ed3a8ab279", 00:33:37.753 "strip_size_kb": 64, 00:33:37.753 "state": "online", 00:33:37.753 "raid_level": "raid5f", 00:33:37.753 "superblock": false, 00:33:37.753 "num_base_bdevs": 3, 00:33:37.753 "num_base_bdevs_discovered": 3, 00:33:37.753 "num_base_bdevs_operational": 3, 00:33:37.753 "process": { 00:33:37.753 "type": "rebuild", 00:33:37.753 "target": "spare", 00:33:37.753 "progress": { 00:33:37.753 "blocks": 30720, 00:33:37.753 "percent": 23 00:33:37.753 } 00:33:37.753 }, 00:33:37.753 "base_bdevs_list": [ 00:33:37.753 { 00:33:37.753 "name": "spare", 00:33:37.753 "uuid": "aab78f71-8fc9-5ecd-9a90-70c66b211d11", 00:33:37.753 "is_configured": true, 00:33:37.753 "data_offset": 0, 00:33:37.753 "data_size": 65536 00:33:37.753 }, 00:33:37.753 { 00:33:37.753 "name": "BaseBdev2", 00:33:37.753 "uuid": "3a9e5c00-bf0d-5e51-9294-f051d0f2bf61", 00:33:37.753 "is_configured": true, 00:33:37.753 "data_offset": 0, 00:33:37.753 "data_size": 65536 
00:33:37.753 }, 00:33:37.753 { 00:33:37.753 "name": "BaseBdev3", 00:33:37.753 "uuid": "2e300972-4dbc-5456-8337-23b677cbfcf3", 00:33:37.753 "is_configured": true, 00:33:37.753 "data_offset": 0, 00:33:37.753 "data_size": 65536 00:33:37.753 } 00:33:37.753 ] 00:33:37.753 }' 00:33:37.753 01:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:37.753 01:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:37.753 01:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:38.011 01:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:38.011 01:00:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:38.947 01:00:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:38.947 01:00:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:38.947 01:00:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:38.947 01:00:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:38.947 01:00:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:38.947 01:00:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:38.947 01:00:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:38.947 01:00:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:39.205 01:00:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:39.205 "name": "raid_bdev1", 00:33:39.205 "uuid": "9561c3e9-cd40-4f39-8c76-20ed3a8ab279", 00:33:39.205 "strip_size_kb": 64, 00:33:39.205 "state": "online", 00:33:39.205 "raid_level": "raid5f", 00:33:39.205 "superblock": false, 00:33:39.205 "num_base_bdevs": 3, 00:33:39.205 "num_base_bdevs_discovered": 3, 00:33:39.205 "num_base_bdevs_operational": 3, 00:33:39.205 "process": { 00:33:39.205 "type": "rebuild", 00:33:39.205 "target": "spare", 00:33:39.205 "progress": { 00:33:39.205 "blocks": 57344, 00:33:39.205 "percent": 43 00:33:39.205 } 00:33:39.205 }, 00:33:39.205 "base_bdevs_list": [ 00:33:39.205 { 00:33:39.205 "name": "spare", 00:33:39.205 "uuid": "aab78f71-8fc9-5ecd-9a90-70c66b211d11", 00:33:39.205 "is_configured": true, 00:33:39.205 "data_offset": 0, 00:33:39.205 "data_size": 65536 00:33:39.205 }, 00:33:39.205 { 00:33:39.205 "name": "BaseBdev2", 00:33:39.205 "uuid": "3a9e5c00-bf0d-5e51-9294-f051d0f2bf61", 00:33:39.205 "is_configured": true, 00:33:39.205 "data_offset": 0, 00:33:39.205 "data_size": 65536 00:33:39.205 }, 00:33:39.205 { 00:33:39.205 "name": "BaseBdev3", 00:33:39.205 "uuid": "2e300972-4dbc-5456-8337-23b677cbfcf3", 00:33:39.205 "is_configured": true, 00:33:39.205 "data_offset": 0, 00:33:39.205 "data_size": 65536 00:33:39.205 } 00:33:39.205 ] 00:33:39.205 }' 00:33:39.205 01:00:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:39.205 01:00:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:39.205 01:00:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:39.205 01:00:01 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:39.205 01:00:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:40.137 01:00:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:40.137 01:00:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:40.137 01:00:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:40.137 01:00:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:40.137 01:00:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:40.137 01:00:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:40.137 01:00:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:40.137 01:00:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:40.394 01:00:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:40.394 "name": "raid_bdev1", 00:33:40.394 "uuid": "9561c3e9-cd40-4f39-8c76-20ed3a8ab279", 00:33:40.394 "strip_size_kb": 64, 00:33:40.394 "state": "online", 00:33:40.394 "raid_level": "raid5f", 00:33:40.394 "superblock": false, 00:33:40.394 "num_base_bdevs": 3, 00:33:40.394 "num_base_bdevs_discovered": 3, 00:33:40.394 "num_base_bdevs_operational": 3, 00:33:40.394 "process": { 00:33:40.394 "type": "rebuild", 00:33:40.394 "target": "spare", 00:33:40.394 "progress": { 00:33:40.394 "blocks": 83968, 00:33:40.394 "percent": 64 00:33:40.394 } 00:33:40.394 }, 00:33:40.394 "base_bdevs_list": [ 00:33:40.394 { 00:33:40.394 "name": "spare", 00:33:40.394 "uuid": "aab78f71-8fc9-5ecd-9a90-70c66b211d11", 00:33:40.394 "is_configured": true, 00:33:40.394 "data_offset": 0, 00:33:40.394 "data_size": 65536 00:33:40.394 }, 00:33:40.394 { 00:33:40.394 "name": "BaseBdev2", 00:33:40.394 "uuid": "3a9e5c00-bf0d-5e51-9294-f051d0f2bf61", 00:33:40.394 "is_configured": true, 00:33:40.394 "data_offset": 0, 00:33:40.394 "data_size": 65536 00:33:40.394 }, 00:33:40.394 { 00:33:40.394 "name": "BaseBdev3", 00:33:40.394 "uuid": "2e300972-4dbc-5456-8337-23b677cbfcf3", 00:33:40.394 "is_configured": true, 00:33:40.394 "data_offset": 0, 00:33:40.394 "data_size": 65536 00:33:40.394 } 00:33:40.394 ] 00:33:40.394 }' 00:33:40.394 01:00:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:40.394 01:00:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:40.394 01:00:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:40.653 01:00:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:40.653 01:00:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:41.599 01:00:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:41.599 01:00:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:41.599 01:00:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:41.599 01:00:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 
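
Note: the trace above is the rebuild-progress polling loop. On every iteration the test fetches the raid bdev info over the RPC socket, filters it with jq, checks that process.type is still "rebuild" and process.target is still "spare", and sleeps one second until either the rebuild finishes or the SECONDS budget runs out. The following is a minimal standalone sketch of that pattern, condensed from the commands traced above; it is an illustration only, not the actual bdev_raid.sh helpers, and it assumes rpc.py is reachable on /var/tmp/spdk-raid.sock and that the array is named raid_bdev1.

    # Sketch only -- condensed from the commands traced above, not the real script.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    timeout=$((SECONDS + 60))
    while (( SECONDS < timeout )); do
        info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
               jq -r '.[] | select(.name == "raid_bdev1")')
        type=$(jq -r '.process.type // "none"' <<< "$info")
        target=$(jq -r '.process.target // "none"' <<< "$info")
        # Once the rebuild completes, the process object disappears from the
        # RPC output and both fields fall back to "none", ending the loop.
        [[ $type == rebuild && $target == spare ]] || break
        sleep 1
    done
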
00:33:41.599 01:00:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:41.599 01:00:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:41.599 01:00:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:41.599 01:00:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:41.857 01:00:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:41.857 "name": "raid_bdev1", 00:33:41.857 "uuid": "9561c3e9-cd40-4f39-8c76-20ed3a8ab279", 00:33:41.857 "strip_size_kb": 64, 00:33:41.857 "state": "online", 00:33:41.857 "raid_level": "raid5f", 00:33:41.857 "superblock": false, 00:33:41.857 "num_base_bdevs": 3, 00:33:41.857 "num_base_bdevs_discovered": 3, 00:33:41.857 "num_base_bdevs_operational": 3, 00:33:41.857 "process": { 00:33:41.857 "type": "rebuild", 00:33:41.857 "target": "spare", 00:33:41.857 "progress": { 00:33:41.857 "blocks": 110592, 00:33:41.857 "percent": 84 00:33:41.857 } 00:33:41.857 }, 00:33:41.857 "base_bdevs_list": [ 00:33:41.857 { 00:33:41.857 "name": "spare", 00:33:41.857 "uuid": "aab78f71-8fc9-5ecd-9a90-70c66b211d11", 00:33:41.857 "is_configured": true, 00:33:41.857 "data_offset": 0, 00:33:41.857 "data_size": 65536 00:33:41.857 }, 00:33:41.857 { 00:33:41.857 "name": "BaseBdev2", 00:33:41.857 "uuid": "3a9e5c00-bf0d-5e51-9294-f051d0f2bf61", 00:33:41.857 "is_configured": true, 00:33:41.857 "data_offset": 0, 00:33:41.857 "data_size": 65536 00:33:41.857 }, 00:33:41.857 { 00:33:41.857 "name": "BaseBdev3", 00:33:41.857 "uuid": "2e300972-4dbc-5456-8337-23b677cbfcf3", 00:33:41.857 "is_configured": true, 00:33:41.857 "data_offset": 0, 00:33:41.857 "data_size": 65536 00:33:41.857 } 00:33:41.857 ] 00:33:41.857 }' 00:33:41.857 01:00:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:41.857 01:00:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:41.857 01:00:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:41.857 01:00:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:41.857 01:00:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:42.787 [2024-07-25 01:00:05.220995] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:33:42.787 [2024-07-25 01:00:05.221220] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:42.787 [2024-07-25 01:00:05.221364] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:42.787 01:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:42.787 01:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:42.787 01:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:42.787 01:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:42.787 01:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:42.787 01:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:42.787 01:00:05 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:42.787 01:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:43.044 01:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:43.044 "name": "raid_bdev1", 00:33:43.044 "uuid": "9561c3e9-cd40-4f39-8c76-20ed3a8ab279", 00:33:43.044 "strip_size_kb": 64, 00:33:43.044 "state": "online", 00:33:43.044 "raid_level": "raid5f", 00:33:43.044 "superblock": false, 00:33:43.044 "num_base_bdevs": 3, 00:33:43.044 "num_base_bdevs_discovered": 3, 00:33:43.044 "num_base_bdevs_operational": 3, 00:33:43.044 "base_bdevs_list": [ 00:33:43.044 { 00:33:43.044 "name": "spare", 00:33:43.044 "uuid": "aab78f71-8fc9-5ecd-9a90-70c66b211d11", 00:33:43.044 "is_configured": true, 00:33:43.044 "data_offset": 0, 00:33:43.044 "data_size": 65536 00:33:43.044 }, 00:33:43.044 { 00:33:43.044 "name": "BaseBdev2", 00:33:43.044 "uuid": "3a9e5c00-bf0d-5e51-9294-f051d0f2bf61", 00:33:43.044 "is_configured": true, 00:33:43.044 "data_offset": 0, 00:33:43.044 "data_size": 65536 00:33:43.044 }, 00:33:43.044 { 00:33:43.044 "name": "BaseBdev3", 00:33:43.044 "uuid": "2e300972-4dbc-5456-8337-23b677cbfcf3", 00:33:43.044 "is_configured": true, 00:33:43.044 "data_offset": 0, 00:33:43.044 "data_size": 65536 00:33:43.044 } 00:33:43.044 ] 00:33:43.044 }' 00:33:43.044 01:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:43.301 01:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:33:43.301 01:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:43.301 01:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:33:43.302 01:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:33:43.302 01:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:43.302 01:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:43.302 01:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:43.302 01:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:43.302 01:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:43.302 01:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:43.302 01:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:43.559 01:00:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:43.559 "name": "raid_bdev1", 00:33:43.559 "uuid": "9561c3e9-cd40-4f39-8c76-20ed3a8ab279", 00:33:43.559 "strip_size_kb": 64, 00:33:43.559 "state": "online", 00:33:43.559 "raid_level": "raid5f", 00:33:43.559 "superblock": false, 00:33:43.559 "num_base_bdevs": 3, 00:33:43.559 "num_base_bdevs_discovered": 3, 00:33:43.559 "num_base_bdevs_operational": 3, 00:33:43.559 "base_bdevs_list": [ 00:33:43.559 { 00:33:43.559 "name": "spare", 00:33:43.559 "uuid": "aab78f71-8fc9-5ecd-9a90-70c66b211d11", 00:33:43.559 "is_configured": true, 00:33:43.559 "data_offset": 0, 00:33:43.559 "data_size": 65536 00:33:43.559 }, 
00:33:43.559 { 00:33:43.559 "name": "BaseBdev2", 00:33:43.559 "uuid": "3a9e5c00-bf0d-5e51-9294-f051d0f2bf61", 00:33:43.559 "is_configured": true, 00:33:43.559 "data_offset": 0, 00:33:43.559 "data_size": 65536 00:33:43.559 }, 00:33:43.559 { 00:33:43.559 "name": "BaseBdev3", 00:33:43.559 "uuid": "2e300972-4dbc-5456-8337-23b677cbfcf3", 00:33:43.559 "is_configured": true, 00:33:43.559 "data_offset": 0, 00:33:43.559 "data_size": 65536 00:33:43.559 } 00:33:43.559 ] 00:33:43.559 }' 00:33:43.559 01:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:43.559 01:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:43.559 01:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:43.559 01:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:43.559 01:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:43.559 01:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:43.559 01:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:43.559 01:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:43.559 01:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:43.559 01:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:43.559 01:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:43.559 01:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:43.559 01:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:43.559 01:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:43.559 01:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:43.559 01:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:43.821 01:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:43.821 "name": "raid_bdev1", 00:33:43.821 "uuid": "9561c3e9-cd40-4f39-8c76-20ed3a8ab279", 00:33:43.821 "strip_size_kb": 64, 00:33:43.821 "state": "online", 00:33:43.821 "raid_level": "raid5f", 00:33:43.821 "superblock": false, 00:33:43.821 "num_base_bdevs": 3, 00:33:43.821 "num_base_bdevs_discovered": 3, 00:33:43.821 "num_base_bdevs_operational": 3, 00:33:43.821 "base_bdevs_list": [ 00:33:43.821 { 00:33:43.821 "name": "spare", 00:33:43.821 "uuid": "aab78f71-8fc9-5ecd-9a90-70c66b211d11", 00:33:43.821 "is_configured": true, 00:33:43.821 "data_offset": 0, 00:33:43.821 "data_size": 65536 00:33:43.821 }, 00:33:43.821 { 00:33:43.821 "name": "BaseBdev2", 00:33:43.821 "uuid": "3a9e5c00-bf0d-5e51-9294-f051d0f2bf61", 00:33:43.821 "is_configured": true, 00:33:43.821 "data_offset": 0, 00:33:43.821 "data_size": 65536 00:33:43.821 }, 00:33:43.821 { 00:33:43.821 "name": "BaseBdev3", 00:33:43.821 "uuid": "2e300972-4dbc-5456-8337-23b677cbfcf3", 00:33:43.821 "is_configured": true, 00:33:43.821 "data_offset": 0, 00:33:43.821 "data_size": 65536 00:33:43.821 } 00:33:43.821 ] 00:33:43.821 }' 00:33:43.821 01:00:06 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:43.821 01:00:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:44.389 01:00:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:44.645 [2024-07-25 01:00:07.117602] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:44.646 [2024-07-25 01:00:07.117795] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:44.646 [2024-07-25 01:00:07.117999] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:44.646 [2024-07-25 01:00:07.118172] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:44.646 [2024-07-25 01:00:07.118286] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:33:44.646 01:00:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:44.646 01:00:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:33:44.902 01:00:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:33:44.902 01:00:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:33:44.902 01:00:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:33:44.902 01:00:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:33:44.902 01:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:44.902 01:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:33:44.902 01:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:44.902 01:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:44.902 01:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:44.902 01:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:33:44.902 01:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:44.902 01:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:44.902 01:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:33:44.902 /dev/nbd0 00:33:44.902 01:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:44.902 01:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:44.902 01:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:33:44.902 01:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:33:44.902 01:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:44.902 01:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:44.902 01:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:33:44.902 01:00:07 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:33:44.902 01:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:44.902 01:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:44.902 01:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:44.902 1+0 records in 00:33:44.902 1+0 records out 00:33:44.902 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406247 s, 10.1 MB/s 00:33:44.902 01:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:44.902 01:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:33:44.902 01:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:44.902 01:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:44.902 01:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:33:44.902 01:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:44.902 01:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:44.902 01:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:33:45.467 /dev/nbd1 00:33:45.467 01:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:45.467 01:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:45.467 01:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:33:45.467 01:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:33:45.467 01:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:45.467 01:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:45.467 01:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:33:45.467 01:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:33:45.467 01:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:45.467 01:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:45.467 01:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:45.467 1+0 records in 00:33:45.467 1+0 records out 00:33:45.467 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000556943 s, 7.4 MB/s 00:33:45.467 01:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:45.467 01:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:33:45.467 01:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:45.467 01:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:45.467 01:00:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 
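
Note: the nbd_start_disk traces above export BaseBdev1 and the rebuilt spare as /dev/nbd0 and /dev/nbd1, and the waitfornbd checks poll /proc/partitions and then read a single 4 KiB block to confirm each device actually serves I/O before the test compares the two devices byte for byte with cmp. Below is a rough sketch of that readiness check; it is illustrative only (the retry delay and the scratch-file path are placeholders), not the autotest_common.sh implementation.

    # Sketch of a waitfornbd-style readiness check; placeholders, not the real helper.
    waitfornbd_sketch() {
        local nbd_name=$1 i scratch=/tmp/nbdtest
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed back-off between polls
        done
        # One direct 4 KiB read proves the NBD connection is live.
        dd if=/dev/"$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct || return 1
        [[ $(stat -c %s "$scratch") -ne 0 ]]
    }
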
00:33:45.467 01:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:45.467 01:00:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:45.467 01:00:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:33:45.467 01:00:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:33:45.467 01:00:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:45.467 01:00:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:45.467 01:00:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:45.467 01:00:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:33:45.467 01:00:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:45.467 01:00:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:33:45.724 01:00:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:45.724 01:00:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:45.724 01:00:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:45.724 01:00:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:45.724 01:00:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:45.724 01:00:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:45.724 01:00:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:33:45.724 01:00:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:33:45.724 01:00:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:45.724 01:00:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:33:45.983 01:00:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:45.983 01:00:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:45.983 01:00:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:45.983 01:00:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:45.983 01:00:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:45.983 01:00:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:45.983 01:00:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:33:45.983 01:00:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:33:45.983 01:00:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:33:45.983 01:00:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 153048 00:33:45.983 01:00:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 153048 ']' 00:33:45.983 01:00:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 153048 00:33:45.983 01:00:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:33:45.983 01:00:08 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:45.983 01:00:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 153048 00:33:45.983 01:00:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:45.983 01:00:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:45.983 01:00:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 153048' 00:33:45.983 killing process with pid 153048 00:33:45.983 01:00:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@967 -- # kill 153048 00:33:45.983 Received shutdown signal, test time was about 60.000000 seconds 00:33:45.983 00:33:45.983 Latency(us) 00:33:45.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:45.983 =================================================================================================================== 00:33:45.983 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:45.983 [2024-07-25 01:00:08.571136] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:45.983 01:00:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # wait 153048 00:33:46.550 [2024-07-25 01:00:08.942684] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:47.520 ************************************ 00:33:47.520 END TEST raid5f_rebuild_test 00:33:47.520 ************************************ 00:33:47.520 01:00:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:33:47.520 00:33:47.520 real 0m20.763s 00:33:47.520 user 0m30.193s 00:33:47.520 sys 0m2.980s 00:33:47.520 01:00:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:47.520 01:00:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:47.778 01:00:10 bdev_raid -- bdev/bdev_raid.sh@891 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:33:47.778 01:00:10 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:33:47.778 01:00:10 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:47.778 01:00:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:47.778 ************************************ 00:33:47.778 START TEST raid5f_rebuild_test_sb 00:33:47.778 ************************************ 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid5f 3 true false true 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=3 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=153591 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 153591 /var/tmp/spdk-raid.sock 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 153591 ']' 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:47.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
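
Note: at this point the superblock variant has launched a fresh bdevperf app on /var/tmp/spdk-raid.sock, and waitforlisten blocks until that app answers on the socket before the bdev_malloc_create / bdev_passthru_create RPCs that follow are sent. A hedged sketch of such a wait is below; the retry count and the rpc_get_methods probe are assumptions for illustration, not the actual waitforlisten helper.

    # Sketch: block until the freshly started SPDK app listens on its RPC socket.
    raid_pid=$!                                   # PID of the backgrounded bdevperf
    sock=/var/tmp/spdk-raid.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do
        if "$rpc" -s "$sock" rpc_get_methods &>/dev/null; then
            break                                 # socket is up, RPCs can flow
        fi
        # Bail out early if the app died instead of starting to listen.
        kill -0 "$raid_pid" 2>/dev/null || { echo "bdevperf exited early" >&2; exit 1; }
        sleep 0.5
    done
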
00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:47.778 01:00:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:47.778 [2024-07-25 01:00:10.321635] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:33:47.778 [2024-07-25 01:00:10.321933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153591 ] 00:33:47.778 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:47.778 Zero copy mechanism will not be used. 00:33:48.038 [2024-07-25 01:00:10.480686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:48.038 [2024-07-25 01:00:10.679322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:48.303 [2024-07-25 01:00:10.896094] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:48.877 01:00:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:48.877 01:00:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:33:48.877 01:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:48.877 01:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:33:48.877 BaseBdev1_malloc 00:33:49.135 01:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:49.135 [2024-07-25 01:00:11.754598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:49.135 [2024-07-25 01:00:11.754841] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:49.135 [2024-07-25 01:00:11.754916] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:33:49.135 [2024-07-25 01:00:11.755144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:49.135 [2024-07-25 01:00:11.757457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:49.135 [2024-07-25 01:00:11.757603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:49.135 BaseBdev1 00:33:49.135 01:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:49.135 01:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:33:49.701 BaseBdev2_malloc 00:33:49.701 01:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:33:49.701 [2024-07-25 01:00:12.306378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:33:49.701 [2024-07-25 01:00:12.306651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:49.701 [2024-07-25 01:00:12.306722] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007880 00:33:49.701 [2024-07-25 01:00:12.306817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:49.701 [2024-07-25 01:00:12.309080] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:49.701 [2024-07-25 01:00:12.309231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:49.701 BaseBdev2 00:33:49.701 01:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:49.701 01:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:33:49.959 BaseBdev3_malloc 00:33:49.959 01:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:33:50.218 [2024-07-25 01:00:12.702898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:33:50.218 [2024-07-25 01:00:12.703145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:50.218 [2024-07-25 01:00:12.703213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:33:50.218 [2024-07-25 01:00:12.703304] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:50.218 [2024-07-25 01:00:12.705555] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:50.218 [2024-07-25 01:00:12.705715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:33:50.218 BaseBdev3 00:33:50.218 01:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:33:50.475 spare_malloc 00:33:50.475 01:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:33:50.732 spare_delay 00:33:50.732 01:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:50.732 [2024-07-25 01:00:13.344712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:50.732 [2024-07-25 01:00:13.344947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:50.732 [2024-07-25 01:00:13.345017] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:33:50.732 [2024-07-25 01:00:13.345119] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:50.732 [2024-07-25 01:00:13.347401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:50.732 [2024-07-25 01:00:13.347555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:50.732 spare 00:33:50.732 01:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:33:50.989 [2024-07-25 01:00:13.580818] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:50.989 [2024-07-25 
01:00:13.582901] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:50.989 [2024-07-25 01:00:13.583090] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:50.989 [2024-07-25 01:00:13.583322] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:33:50.989 [2024-07-25 01:00:13.583533] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:33:50.989 [2024-07-25 01:00:13.583720] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:33:50.989 [2024-07-25 01:00:13.589634] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:33:50.989 [2024-07-25 01:00:13.589746] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:33:50.989 [2024-07-25 01:00:13.590010] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:50.989 01:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:50.989 01:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:50.989 01:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:50.989 01:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:50.989 01:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:50.989 01:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:50.989 01:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:50.989 01:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:50.989 01:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:50.989 01:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:50.989 01:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:50.989 01:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:51.247 01:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:51.247 "name": "raid_bdev1", 00:33:51.247 "uuid": "14f795d0-7307-40fb-921e-6ee7eb746130", 00:33:51.247 "strip_size_kb": 64, 00:33:51.247 "state": "online", 00:33:51.247 "raid_level": "raid5f", 00:33:51.247 "superblock": true, 00:33:51.247 "num_base_bdevs": 3, 00:33:51.247 "num_base_bdevs_discovered": 3, 00:33:51.247 "num_base_bdevs_operational": 3, 00:33:51.247 "base_bdevs_list": [ 00:33:51.247 { 00:33:51.247 "name": "BaseBdev1", 00:33:51.247 "uuid": "4da8b8e8-52ab-5294-a199-dfdf20137ed5", 00:33:51.247 "is_configured": true, 00:33:51.247 "data_offset": 2048, 00:33:51.247 "data_size": 63488 00:33:51.247 }, 00:33:51.247 { 00:33:51.247 "name": "BaseBdev2", 00:33:51.247 "uuid": "ac1f0dc7-4216-5f16-9872-7b2e1ee4fe67", 00:33:51.247 "is_configured": true, 00:33:51.247 "data_offset": 2048, 00:33:51.247 "data_size": 63488 00:33:51.247 }, 00:33:51.247 { 00:33:51.247 "name": "BaseBdev3", 00:33:51.247 "uuid": "c01a62f6-3ea5-554a-a091-33fb8c7e7655", 00:33:51.247 "is_configured": true, 00:33:51.247 
"data_offset": 2048, 00:33:51.247 "data_size": 63488 00:33:51.247 } 00:33:51.247 ] 00:33:51.247 }' 00:33:51.247 01:00:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:51.247 01:00:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:51.811 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:51.811 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:33:52.069 [2024-07-25 01:00:14.484121] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:52.069 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=126976 00:33:52.069 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:52.069 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:33:52.069 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:33:52.069 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:33:52.069 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:33:52.069 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:33:52.069 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:33:52.069 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:52.069 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:33:52.069 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:52.069 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:33:52.069 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:52.069 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:33:52.069 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:52.069 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:52.069 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:33:52.327 [2024-07-25 01:00:14.920117] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:33:52.327 /dev/nbd0 00:33:52.327 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:52.585 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:52.585 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:33:52.585 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:33:52.585 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:52.585 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:52.585 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:33:52.585 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:33:52.585 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:52.585 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:52.585 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:52.585 1+0 records in 00:33:52.585 1+0 records out 00:33:52.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328725 s, 12.5 MB/s 00:33:52.585 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:52.585 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:33:52.585 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:52.585 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:52.585 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:33:52.585 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:52.585 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:52.585 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:33:52.585 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # write_unit_size=256 00:33:52.585 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # echo 128 00:33:52.585 01:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:33:52.843 496+0 records in 00:33:52.843 496+0 records out 00:33:52.843 65011712 bytes (65 MB, 62 MiB) copied, 0.392658 s, 166 MB/s 00:33:52.843 01:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:33:52.843 01:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:52.843 01:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:52.843 01:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:52.843 01:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:33:52.843 01:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:52.843 01:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:33:53.101 01:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:53.101 [2024-07-25 01:00:15.600175] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:53.101 01:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:53.101 01:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:53.101 01:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:53.101 01:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:33:53.101 01:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:53.101 01:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:33:53.101 01:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:33:53.101 01:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:33:53.358 [2024-07-25 01:00:15.783362] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:53.358 01:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:33:53.358 01:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:53.358 01:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:53.358 01:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:53.358 01:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:53.358 01:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:53.358 01:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:53.358 01:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:53.358 01:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:53.358 01:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:53.358 01:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:53.358 01:00:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:53.615 01:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:53.615 "name": "raid_bdev1", 00:33:53.615 "uuid": "14f795d0-7307-40fb-921e-6ee7eb746130", 00:33:53.615 "strip_size_kb": 64, 00:33:53.615 "state": "online", 00:33:53.615 "raid_level": "raid5f", 00:33:53.615 "superblock": true, 00:33:53.615 "num_base_bdevs": 3, 00:33:53.615 "num_base_bdevs_discovered": 2, 00:33:53.615 "num_base_bdevs_operational": 2, 00:33:53.615 "base_bdevs_list": [ 00:33:53.615 { 00:33:53.615 "name": null, 00:33:53.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:53.615 "is_configured": false, 00:33:53.615 "data_offset": 2048, 00:33:53.615 "data_size": 63488 00:33:53.615 }, 00:33:53.615 { 00:33:53.615 "name": "BaseBdev2", 00:33:53.615 "uuid": "ac1f0dc7-4216-5f16-9872-7b2e1ee4fe67", 00:33:53.615 "is_configured": true, 00:33:53.615 "data_offset": 2048, 00:33:53.615 "data_size": 63488 00:33:53.615 }, 00:33:53.615 { 00:33:53.615 "name": "BaseBdev3", 00:33:53.615 "uuid": "c01a62f6-3ea5-554a-a091-33fb8c7e7655", 00:33:53.615 "is_configured": true, 00:33:53.615 "data_offset": 2048, 00:33:53.615 "data_size": 63488 00:33:53.615 } 00:33:53.615 ] 00:33:53.615 }' 00:33:53.615 01:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:53.615 01:00:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:54.179 01:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:54.437 [2024-07-25 01:00:16.859579] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:54.437 [2024-07-25 01:00:16.876028] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028de0 00:33:54.437 [2024-07-25 01:00:16.884078] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:54.437 01:00:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:33:55.367 01:00:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:55.367 01:00:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:55.367 01:00:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:55.367 01:00:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:55.367 01:00:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:55.367 01:00:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:55.367 01:00:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:55.625 01:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:55.625 "name": "raid_bdev1", 00:33:55.625 "uuid": "14f795d0-7307-40fb-921e-6ee7eb746130", 00:33:55.625 "strip_size_kb": 64, 00:33:55.625 "state": "online", 00:33:55.625 "raid_level": "raid5f", 00:33:55.625 "superblock": true, 00:33:55.625 "num_base_bdevs": 3, 00:33:55.625 "num_base_bdevs_discovered": 3, 00:33:55.625 "num_base_bdevs_operational": 3, 00:33:55.625 "process": { 00:33:55.625 "type": "rebuild", 00:33:55.625 "target": "spare", 00:33:55.625 "progress": { 00:33:55.625 "blocks": 24576, 00:33:55.625 "percent": 19 00:33:55.625 } 00:33:55.625 }, 00:33:55.625 "base_bdevs_list": [ 00:33:55.625 { 00:33:55.625 "name": "spare", 00:33:55.625 "uuid": "24a6c848-6d1f-52ca-820b-b91f6d0b8885", 00:33:55.625 "is_configured": true, 00:33:55.625 "data_offset": 2048, 00:33:55.625 "data_size": 63488 00:33:55.625 }, 00:33:55.625 { 00:33:55.625 "name": "BaseBdev2", 00:33:55.625 "uuid": "ac1f0dc7-4216-5f16-9872-7b2e1ee4fe67", 00:33:55.625 "is_configured": true, 00:33:55.625 "data_offset": 2048, 00:33:55.625 "data_size": 63488 00:33:55.625 }, 00:33:55.625 { 00:33:55.625 "name": "BaseBdev3", 00:33:55.625 "uuid": "c01a62f6-3ea5-554a-a091-33fb8c7e7655", 00:33:55.625 "is_configured": true, 00:33:55.625 "data_offset": 2048, 00:33:55.625 "data_size": 63488 00:33:55.625 } 00:33:55.625 ] 00:33:55.625 }' 00:33:55.625 01:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:55.625 01:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:55.625 01:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:55.625 01:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:55.625 01:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:33:55.882 
[2024-07-25 01:00:18.490209] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:55.882 [2024-07-25 01:00:18.499244] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:55.882 [2024-07-25 01:00:18.499812] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:55.882 [2024-07-25 01:00:18.499948] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:55.882 [2024-07-25 01:00:18.499992] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:56.140 01:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:33:56.140 01:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:56.140 01:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:56.140 01:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:56.140 01:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:56.140 01:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:56.140 01:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:56.140 01:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:56.140 01:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:56.140 01:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:56.140 01:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:56.140 01:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:56.399 01:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:56.399 "name": "raid_bdev1", 00:33:56.399 "uuid": "14f795d0-7307-40fb-921e-6ee7eb746130", 00:33:56.399 "strip_size_kb": 64, 00:33:56.399 "state": "online", 00:33:56.399 "raid_level": "raid5f", 00:33:56.399 "superblock": true, 00:33:56.399 "num_base_bdevs": 3, 00:33:56.399 "num_base_bdevs_discovered": 2, 00:33:56.399 "num_base_bdevs_operational": 2, 00:33:56.399 "base_bdevs_list": [ 00:33:56.399 { 00:33:56.399 "name": null, 00:33:56.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:56.399 "is_configured": false, 00:33:56.399 "data_offset": 2048, 00:33:56.399 "data_size": 63488 00:33:56.399 }, 00:33:56.399 { 00:33:56.399 "name": "BaseBdev2", 00:33:56.399 "uuid": "ac1f0dc7-4216-5f16-9872-7b2e1ee4fe67", 00:33:56.399 "is_configured": true, 00:33:56.399 "data_offset": 2048, 00:33:56.399 "data_size": 63488 00:33:56.399 }, 00:33:56.399 { 00:33:56.399 "name": "BaseBdev3", 00:33:56.399 "uuid": "c01a62f6-3ea5-554a-a091-33fb8c7e7655", 00:33:56.399 "is_configured": true, 00:33:56.399 "data_offset": 2048, 00:33:56.399 "data_size": 63488 00:33:56.399 } 00:33:56.399 ] 00:33:56.399 }' 00:33:56.399 01:00:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:56.399 01:00:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:56.965 01:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:33:56.965 01:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:56.965 01:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:56.965 01:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:56.965 01:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:56.965 01:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:56.965 01:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:56.965 01:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:56.965 "name": "raid_bdev1", 00:33:56.965 "uuid": "14f795d0-7307-40fb-921e-6ee7eb746130", 00:33:56.965 "strip_size_kb": 64, 00:33:56.965 "state": "online", 00:33:56.965 "raid_level": "raid5f", 00:33:56.965 "superblock": true, 00:33:56.965 "num_base_bdevs": 3, 00:33:56.965 "num_base_bdevs_discovered": 2, 00:33:56.965 "num_base_bdevs_operational": 2, 00:33:56.965 "base_bdevs_list": [ 00:33:56.965 { 00:33:56.965 "name": null, 00:33:56.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:56.965 "is_configured": false, 00:33:56.965 "data_offset": 2048, 00:33:56.965 "data_size": 63488 00:33:56.965 }, 00:33:56.965 { 00:33:56.965 "name": "BaseBdev2", 00:33:56.965 "uuid": "ac1f0dc7-4216-5f16-9872-7b2e1ee4fe67", 00:33:56.965 "is_configured": true, 00:33:56.965 "data_offset": 2048, 00:33:56.965 "data_size": 63488 00:33:56.965 }, 00:33:56.965 { 00:33:56.965 "name": "BaseBdev3", 00:33:56.965 "uuid": "c01a62f6-3ea5-554a-a091-33fb8c7e7655", 00:33:56.965 "is_configured": true, 00:33:56.965 "data_offset": 2048, 00:33:56.965 "data_size": 63488 00:33:56.965 } 00:33:56.965 ] 00:33:56.965 }' 00:33:56.965 01:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:56.965 01:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:56.965 01:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:56.965 01:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:56.966 01:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:57.228 [2024-07-25 01:00:19.828538] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:57.228 [2024-07-25 01:00:19.843837] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:33:57.228 [2024-07-25 01:00:19.851412] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:57.228 01:00:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:33:58.614 01:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:58.614 01:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:58.614 01:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:58.614 01:00:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@184 -- # local target=spare 00:33:58.614 01:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:58.614 01:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:58.614 01:00:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:58.614 01:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:58.614 "name": "raid_bdev1", 00:33:58.614 "uuid": "14f795d0-7307-40fb-921e-6ee7eb746130", 00:33:58.614 "strip_size_kb": 64, 00:33:58.614 "state": "online", 00:33:58.614 "raid_level": "raid5f", 00:33:58.614 "superblock": true, 00:33:58.614 "num_base_bdevs": 3, 00:33:58.614 "num_base_bdevs_discovered": 3, 00:33:58.614 "num_base_bdevs_operational": 3, 00:33:58.614 "process": { 00:33:58.614 "type": "rebuild", 00:33:58.614 "target": "spare", 00:33:58.614 "progress": { 00:33:58.614 "blocks": 24576, 00:33:58.614 "percent": 19 00:33:58.614 } 00:33:58.614 }, 00:33:58.614 "base_bdevs_list": [ 00:33:58.614 { 00:33:58.614 "name": "spare", 00:33:58.614 "uuid": "24a6c848-6d1f-52ca-820b-b91f6d0b8885", 00:33:58.614 "is_configured": true, 00:33:58.614 "data_offset": 2048, 00:33:58.614 "data_size": 63488 00:33:58.614 }, 00:33:58.614 { 00:33:58.614 "name": "BaseBdev2", 00:33:58.614 "uuid": "ac1f0dc7-4216-5f16-9872-7b2e1ee4fe67", 00:33:58.614 "is_configured": true, 00:33:58.614 "data_offset": 2048, 00:33:58.614 "data_size": 63488 00:33:58.614 }, 00:33:58.614 { 00:33:58.614 "name": "BaseBdev3", 00:33:58.614 "uuid": "c01a62f6-3ea5-554a-a091-33fb8c7e7655", 00:33:58.614 "is_configured": true, 00:33:58.614 "data_offset": 2048, 00:33:58.614 "data_size": 63488 00:33:58.614 } 00:33:58.614 ] 00:33:58.614 }' 00:33:58.614 01:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:58.614 01:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:58.614 01:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:58.614 01:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:58.614 01:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:33:58.614 01:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:33:58.614 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:33:58.614 01:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=3 00:33:58.614 01:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:33:58.614 01:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=1098 00:33:58.614 01:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:58.614 01:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:58.614 01:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:58.614 01:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:58.614 01:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 
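Note on the shell diagnostic recorded a few entries above: the message "bdev_raid.sh: line 665: [: =: unary operator expected" is the classic single-bracket pitfall. The variable compared at that line expanded to an empty string, so after word splitting the builtin saw '[ = false ]' with nothing on the left-hand side; the test just returns non-zero and the run continues on the fallback branch, which is why the rebuild keeps progressing. A minimal sketch of the pitfall and the usual guards, assuming the tested variable was empty (its name is not expanded in this trace):

  flag=
  [ $flag = false ] && echo "false branch"     # unquoted: collapses to '[ = false ]' and errors out
  [ "$flag" = false ] && echo "false branch"   # quoted: '[ "" = false ]' is a valid two-sided test
  [[ $flag = false ]] && echo "false branch"   # [[ ]] does no word splitting, so it tolerates the empty value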
00:33:58.614 01:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:58.614 01:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:58.614 01:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:58.873 01:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:58.873 "name": "raid_bdev1", 00:33:58.873 "uuid": "14f795d0-7307-40fb-921e-6ee7eb746130", 00:33:58.873 "strip_size_kb": 64, 00:33:58.873 "state": "online", 00:33:58.873 "raid_level": "raid5f", 00:33:58.873 "superblock": true, 00:33:58.873 "num_base_bdevs": 3, 00:33:58.873 "num_base_bdevs_discovered": 3, 00:33:58.873 "num_base_bdevs_operational": 3, 00:33:58.873 "process": { 00:33:58.873 "type": "rebuild", 00:33:58.873 "target": "spare", 00:33:58.873 "progress": { 00:33:58.873 "blocks": 30720, 00:33:58.873 "percent": 24 00:33:58.873 } 00:33:58.873 }, 00:33:58.873 "base_bdevs_list": [ 00:33:58.873 { 00:33:58.873 "name": "spare", 00:33:58.873 "uuid": "24a6c848-6d1f-52ca-820b-b91f6d0b8885", 00:33:58.873 "is_configured": true, 00:33:58.873 "data_offset": 2048, 00:33:58.873 "data_size": 63488 00:33:58.873 }, 00:33:58.873 { 00:33:58.873 "name": "BaseBdev2", 00:33:58.873 "uuid": "ac1f0dc7-4216-5f16-9872-7b2e1ee4fe67", 00:33:58.873 "is_configured": true, 00:33:58.873 "data_offset": 2048, 00:33:58.873 "data_size": 63488 00:33:58.873 }, 00:33:58.873 { 00:33:58.873 "name": "BaseBdev3", 00:33:58.873 "uuid": "c01a62f6-3ea5-554a-a091-33fb8c7e7655", 00:33:58.873 "is_configured": true, 00:33:58.873 "data_offset": 2048, 00:33:58.873 "data_size": 63488 00:33:58.873 } 00:33:58.873 ] 00:33:58.873 }' 00:33:58.873 01:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:58.873 01:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:58.873 01:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:59.132 01:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:59.132 01:00:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:00.070 01:00:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:00.070 01:00:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:00.070 01:00:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:00.070 01:00:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:00.070 01:00:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:00.070 01:00:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:00.070 01:00:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:00.070 01:00:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:00.329 01:00:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:00.329 "name": "raid_bdev1", 00:34:00.329 "uuid": 
"14f795d0-7307-40fb-921e-6ee7eb746130", 00:34:00.329 "strip_size_kb": 64, 00:34:00.329 "state": "online", 00:34:00.329 "raid_level": "raid5f", 00:34:00.329 "superblock": true, 00:34:00.329 "num_base_bdevs": 3, 00:34:00.329 "num_base_bdevs_discovered": 3, 00:34:00.329 "num_base_bdevs_operational": 3, 00:34:00.329 "process": { 00:34:00.329 "type": "rebuild", 00:34:00.329 "target": "spare", 00:34:00.329 "progress": { 00:34:00.329 "blocks": 57344, 00:34:00.329 "percent": 45 00:34:00.329 } 00:34:00.329 }, 00:34:00.329 "base_bdevs_list": [ 00:34:00.329 { 00:34:00.329 "name": "spare", 00:34:00.329 "uuid": "24a6c848-6d1f-52ca-820b-b91f6d0b8885", 00:34:00.329 "is_configured": true, 00:34:00.329 "data_offset": 2048, 00:34:00.329 "data_size": 63488 00:34:00.329 }, 00:34:00.329 { 00:34:00.329 "name": "BaseBdev2", 00:34:00.329 "uuid": "ac1f0dc7-4216-5f16-9872-7b2e1ee4fe67", 00:34:00.329 "is_configured": true, 00:34:00.329 "data_offset": 2048, 00:34:00.329 "data_size": 63488 00:34:00.329 }, 00:34:00.329 { 00:34:00.329 "name": "BaseBdev3", 00:34:00.329 "uuid": "c01a62f6-3ea5-554a-a091-33fb8c7e7655", 00:34:00.329 "is_configured": true, 00:34:00.329 "data_offset": 2048, 00:34:00.329 "data_size": 63488 00:34:00.329 } 00:34:00.329 ] 00:34:00.329 }' 00:34:00.329 01:00:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:00.329 01:00:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:00.329 01:00:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:00.329 01:00:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:00.329 01:00:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:01.264 01:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:01.264 01:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:01.264 01:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:01.264 01:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:01.264 01:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:01.264 01:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:01.264 01:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:01.264 01:00:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:01.523 01:00:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:01.524 "name": "raid_bdev1", 00:34:01.524 "uuid": "14f795d0-7307-40fb-921e-6ee7eb746130", 00:34:01.524 "strip_size_kb": 64, 00:34:01.524 "state": "online", 00:34:01.524 "raid_level": "raid5f", 00:34:01.524 "superblock": true, 00:34:01.524 "num_base_bdevs": 3, 00:34:01.524 "num_base_bdevs_discovered": 3, 00:34:01.524 "num_base_bdevs_operational": 3, 00:34:01.524 "process": { 00:34:01.524 "type": "rebuild", 00:34:01.524 "target": "spare", 00:34:01.524 "progress": { 00:34:01.524 "blocks": 86016, 00:34:01.524 "percent": 67 00:34:01.524 } 00:34:01.524 }, 00:34:01.524 "base_bdevs_list": [ 00:34:01.524 { 00:34:01.524 "name": "spare", 
00:34:01.524 "uuid": "24a6c848-6d1f-52ca-820b-b91f6d0b8885", 00:34:01.524 "is_configured": true, 00:34:01.524 "data_offset": 2048, 00:34:01.524 "data_size": 63488 00:34:01.524 }, 00:34:01.524 { 00:34:01.524 "name": "BaseBdev2", 00:34:01.524 "uuid": "ac1f0dc7-4216-5f16-9872-7b2e1ee4fe67", 00:34:01.524 "is_configured": true, 00:34:01.524 "data_offset": 2048, 00:34:01.524 "data_size": 63488 00:34:01.524 }, 00:34:01.524 { 00:34:01.524 "name": "BaseBdev3", 00:34:01.524 "uuid": "c01a62f6-3ea5-554a-a091-33fb8c7e7655", 00:34:01.524 "is_configured": true, 00:34:01.524 "data_offset": 2048, 00:34:01.524 "data_size": 63488 00:34:01.524 } 00:34:01.524 ] 00:34:01.524 }' 00:34:01.524 01:00:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:01.524 01:00:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:01.524 01:00:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:01.781 01:00:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:01.781 01:00:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:02.714 01:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:02.714 01:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:02.714 01:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:02.714 01:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:02.714 01:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:02.714 01:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:02.714 01:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:02.714 01:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:02.972 01:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:02.972 "name": "raid_bdev1", 00:34:02.972 "uuid": "14f795d0-7307-40fb-921e-6ee7eb746130", 00:34:02.972 "strip_size_kb": 64, 00:34:02.972 "state": "online", 00:34:02.972 "raid_level": "raid5f", 00:34:02.972 "superblock": true, 00:34:02.972 "num_base_bdevs": 3, 00:34:02.972 "num_base_bdevs_discovered": 3, 00:34:02.972 "num_base_bdevs_operational": 3, 00:34:02.972 "process": { 00:34:02.972 "type": "rebuild", 00:34:02.972 "target": "spare", 00:34:02.972 "progress": { 00:34:02.972 "blocks": 112640, 00:34:02.972 "percent": 88 00:34:02.972 } 00:34:02.972 }, 00:34:02.972 "base_bdevs_list": [ 00:34:02.972 { 00:34:02.972 "name": "spare", 00:34:02.972 "uuid": "24a6c848-6d1f-52ca-820b-b91f6d0b8885", 00:34:02.972 "is_configured": true, 00:34:02.972 "data_offset": 2048, 00:34:02.972 "data_size": 63488 00:34:02.972 }, 00:34:02.972 { 00:34:02.972 "name": "BaseBdev2", 00:34:02.972 "uuid": "ac1f0dc7-4216-5f16-9872-7b2e1ee4fe67", 00:34:02.972 "is_configured": true, 00:34:02.972 "data_offset": 2048, 00:34:02.972 "data_size": 63488 00:34:02.972 }, 00:34:02.972 { 00:34:02.972 "name": "BaseBdev3", 00:34:02.972 "uuid": "c01a62f6-3ea5-554a-a091-33fb8c7e7655", 00:34:02.972 "is_configured": true, 00:34:02.972 "data_offset": 2048, 
00:34:02.972 "data_size": 63488 00:34:02.972 } 00:34:02.972 ] 00:34:02.972 }' 00:34:02.972 01:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:02.972 01:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:02.972 01:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:02.972 01:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:02.972 01:00:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:03.538 [2024-07-25 01:00:26.105533] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:34:03.538 [2024-07-25 01:00:26.105774] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:34:03.538 [2024-07-25 01:00:26.106070] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:04.105 01:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:04.105 01:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:04.105 01:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:04.105 01:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:04.105 01:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:04.105 01:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:04.105 01:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:04.105 01:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:04.363 01:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:04.363 "name": "raid_bdev1", 00:34:04.363 "uuid": "14f795d0-7307-40fb-921e-6ee7eb746130", 00:34:04.363 "strip_size_kb": 64, 00:34:04.363 "state": "online", 00:34:04.363 "raid_level": "raid5f", 00:34:04.363 "superblock": true, 00:34:04.363 "num_base_bdevs": 3, 00:34:04.363 "num_base_bdevs_discovered": 3, 00:34:04.363 "num_base_bdevs_operational": 3, 00:34:04.363 "base_bdevs_list": [ 00:34:04.363 { 00:34:04.363 "name": "spare", 00:34:04.363 "uuid": "24a6c848-6d1f-52ca-820b-b91f6d0b8885", 00:34:04.363 "is_configured": true, 00:34:04.363 "data_offset": 2048, 00:34:04.363 "data_size": 63488 00:34:04.363 }, 00:34:04.363 { 00:34:04.363 "name": "BaseBdev2", 00:34:04.363 "uuid": "ac1f0dc7-4216-5f16-9872-7b2e1ee4fe67", 00:34:04.363 "is_configured": true, 00:34:04.363 "data_offset": 2048, 00:34:04.363 "data_size": 63488 00:34:04.363 }, 00:34:04.363 { 00:34:04.363 "name": "BaseBdev3", 00:34:04.363 "uuid": "c01a62f6-3ea5-554a-a091-33fb8c7e7655", 00:34:04.363 "is_configured": true, 00:34:04.363 "data_offset": 2048, 00:34:04.363 "data_size": 63488 00:34:04.363 } 00:34:04.363 ] 00:34:04.363 }' 00:34:04.363 01:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:04.363 01:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:34:04.363 01:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // 
"none"' 00:34:04.363 01:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:34:04.363 01:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:34:04.363 01:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:04.363 01:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:04.363 01:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:04.363 01:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:04.363 01:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:04.363 01:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:04.363 01:00:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:04.621 01:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:04.621 "name": "raid_bdev1", 00:34:04.621 "uuid": "14f795d0-7307-40fb-921e-6ee7eb746130", 00:34:04.621 "strip_size_kb": 64, 00:34:04.621 "state": "online", 00:34:04.621 "raid_level": "raid5f", 00:34:04.621 "superblock": true, 00:34:04.621 "num_base_bdevs": 3, 00:34:04.621 "num_base_bdevs_discovered": 3, 00:34:04.621 "num_base_bdevs_operational": 3, 00:34:04.621 "base_bdevs_list": [ 00:34:04.621 { 00:34:04.621 "name": "spare", 00:34:04.621 "uuid": "24a6c848-6d1f-52ca-820b-b91f6d0b8885", 00:34:04.621 "is_configured": true, 00:34:04.621 "data_offset": 2048, 00:34:04.621 "data_size": 63488 00:34:04.621 }, 00:34:04.621 { 00:34:04.621 "name": "BaseBdev2", 00:34:04.621 "uuid": "ac1f0dc7-4216-5f16-9872-7b2e1ee4fe67", 00:34:04.621 "is_configured": true, 00:34:04.621 "data_offset": 2048, 00:34:04.621 "data_size": 63488 00:34:04.621 }, 00:34:04.621 { 00:34:04.621 "name": "BaseBdev3", 00:34:04.621 "uuid": "c01a62f6-3ea5-554a-a091-33fb8c7e7655", 00:34:04.621 "is_configured": true, 00:34:04.621 "data_offset": 2048, 00:34:04.621 "data_size": 63488 00:34:04.621 } 00:34:04.621 ] 00:34:04.621 }' 00:34:04.621 01:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:04.621 01:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:04.621 01:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:04.621 01:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:04.621 01:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:04.621 01:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:04.621 01:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:04.621 01:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:04.621 01:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:04.621 01:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:04.621 01:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:34:04.621 01:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:04.621 01:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:04.621 01:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:04.621 01:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:04.621 01:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:04.884 01:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:04.884 "name": "raid_bdev1", 00:34:04.884 "uuid": "14f795d0-7307-40fb-921e-6ee7eb746130", 00:34:04.884 "strip_size_kb": 64, 00:34:04.884 "state": "online", 00:34:04.884 "raid_level": "raid5f", 00:34:04.884 "superblock": true, 00:34:04.884 "num_base_bdevs": 3, 00:34:04.884 "num_base_bdevs_discovered": 3, 00:34:04.884 "num_base_bdevs_operational": 3, 00:34:04.884 "base_bdevs_list": [ 00:34:04.884 { 00:34:04.884 "name": "spare", 00:34:04.884 "uuid": "24a6c848-6d1f-52ca-820b-b91f6d0b8885", 00:34:04.884 "is_configured": true, 00:34:04.884 "data_offset": 2048, 00:34:04.884 "data_size": 63488 00:34:04.884 }, 00:34:04.884 { 00:34:04.884 "name": "BaseBdev2", 00:34:04.884 "uuid": "ac1f0dc7-4216-5f16-9872-7b2e1ee4fe67", 00:34:04.884 "is_configured": true, 00:34:04.884 "data_offset": 2048, 00:34:04.884 "data_size": 63488 00:34:04.884 }, 00:34:04.884 { 00:34:04.884 "name": "BaseBdev3", 00:34:04.884 "uuid": "c01a62f6-3ea5-554a-a091-33fb8c7e7655", 00:34:04.884 "is_configured": true, 00:34:04.884 "data_offset": 2048, 00:34:04.884 "data_size": 63488 00:34:04.884 } 00:34:04.884 ] 00:34:04.884 }' 00:34:04.884 01:00:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:04.884 01:00:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:05.451 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:05.709 [2024-07-25 01:00:28.201830] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:05.709 [2024-07-25 01:00:28.202013] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:05.709 [2024-07-25 01:00:28.202185] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:05.709 [2024-07-25 01:00:28.202396] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:05.709 [2024-07-25 01:00:28.202488] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:34:05.709 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:34:05.709 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:05.966 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:34:05.966 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:34:05.966 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:34:05.966 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- 
# nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:34:05.966 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:05.966 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:34:05.966 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:05.966 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:05.966 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:05.966 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:34:05.966 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:05.966 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:05.966 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:34:06.224 /dev/nbd0 00:34:06.224 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:06.224 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:06.224 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:34:06.224 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:34:06.224 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:34:06.224 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:34:06.224 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:34:06.224 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:34:06.224 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:34:06.224 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:34:06.224 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:06.224 1+0 records in 00:34:06.224 1+0 records out 00:34:06.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557651 s, 7.3 MB/s 00:34:06.224 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:06.224 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:34:06.224 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:06.224 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:34:06.224 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:34:06.224 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:06.224 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:06.224 01:00:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:34:06.482 /dev/nbd1 
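The nbd sequence above is the data-integrity half of the test: BaseBdev1 and the rebuilt spare are exported as /dev/nbd0 and /dev/nbd1, waitfornbd polls /proc/partitions and reads one 4 KiB block with O_DIRECT to confirm each device answers, and the cmp a few entries below compares the two devices while skipping the first 1048576 bytes, which matches the data_offset of 2048 512-byte blocks reported for the base bdevs, so only the data region is compared. A condensed sketch of that probe, assembled from the commands visible in the trace (the retry delay and the scratch-file path are assumptions):

  nbd=nbd0
  for ((i = 1; i <= 20; i++)); do
      grep -q -w "$nbd" /proc/partitions && break
      sleep 0.1                                  # retry delay is assumed
  done
  dd if=/dev/$nbd of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  [[ $(stat -c %s /tmp/nbdtest) != 0 ]]          # make sure the read produced data
  rm -f /tmp/nbdtest
  cmp -i 1048576 /dev/nbd0 /dev/nbd1             # skip the 1 MiB metadata area, compare user data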
00:34:06.482 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:34:06.482 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:34:06.482 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:34:06.482 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:34:06.482 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:34:06.482 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:34:06.482 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:34:06.482 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:34:06.482 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:34:06.482 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:34:06.482 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:06.482 1+0 records in 00:34:06.482 1+0 records out 00:34:06.482 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000977 s, 4.2 MB/s 00:34:06.482 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:06.482 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:34:06.482 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:06.482 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:34:06.482 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:34:06.482 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:06.482 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:06.482 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:34:06.740 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:34:06.740 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:06.740 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:06.740 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:06.740 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:34:06.740 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:06.740 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:34:06.997 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:06.997 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:06.997 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:06.997 01:00:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:06.997 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:06.997 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:06.997 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:34:06.997 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:34:06.998 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:06.998 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:34:07.296 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:07.296 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:07.296 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:07.296 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:07.296 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:07.296 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:07.296 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:34:07.296 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:34:07.296 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:34:07.296 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:34:07.296 01:00:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:34:07.568 [2024-07-25 01:00:30.102562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:07.568 [2024-07-25 01:00:30.102817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:07.568 [2024-07-25 01:00:30.103022] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:34:07.568 [2024-07-25 01:00:30.103178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:07.568 [2024-07-25 01:00:30.106192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:07.568 [2024-07-25 01:00:30.106402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:07.568 [2024-07-25 01:00:30.106637] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:07.568 [2024-07-25 01:00:30.106810] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:07.568 [2024-07-25 01:00:30.107116] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:07.569 [2024-07-25 01:00:30.107350] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:07.569 spare 00:34:07.569 01:00:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:07.569 01:00:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:34:07.569 01:00:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:07.569 01:00:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:07.569 01:00:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:07.569 01:00:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:07.569 01:00:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:07.569 01:00:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:07.569 01:00:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:07.569 01:00:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:07.569 01:00:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:07.569 01:00:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:07.569 [2024-07-25 01:00:30.207539] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b180 00:34:07.569 [2024-07-25 01:00:30.207688] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:34:07.569 [2024-07-25 01:00:30.207893] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:34:07.569 [2024-07-25 01:00:30.214717] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b180 00:34:07.569 [2024-07-25 01:00:30.214841] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b180 00:34:07.569 [2024-07-25 01:00:30.215168] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:07.826 01:00:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:07.826 "name": "raid_bdev1", 00:34:07.826 "uuid": "14f795d0-7307-40fb-921e-6ee7eb746130", 00:34:07.826 "strip_size_kb": 64, 00:34:07.826 "state": "online", 00:34:07.826 "raid_level": "raid5f", 00:34:07.826 "superblock": true, 00:34:07.826 "num_base_bdevs": 3, 00:34:07.826 "num_base_bdevs_discovered": 3, 00:34:07.826 "num_base_bdevs_operational": 3, 00:34:07.826 "base_bdevs_list": [ 00:34:07.826 { 00:34:07.826 "name": "spare", 00:34:07.826 "uuid": "24a6c848-6d1f-52ca-820b-b91f6d0b8885", 00:34:07.826 "is_configured": true, 00:34:07.826 "data_offset": 2048, 00:34:07.826 "data_size": 63488 00:34:07.826 }, 00:34:07.826 { 00:34:07.826 "name": "BaseBdev2", 00:34:07.826 "uuid": "ac1f0dc7-4216-5f16-9872-7b2e1ee4fe67", 00:34:07.826 "is_configured": true, 00:34:07.826 "data_offset": 2048, 00:34:07.826 "data_size": 63488 00:34:07.826 }, 00:34:07.826 { 00:34:07.826 "name": "BaseBdev3", 00:34:07.826 "uuid": "c01a62f6-3ea5-554a-a091-33fb8c7e7655", 00:34:07.826 "is_configured": true, 00:34:07.826 "data_offset": 2048, 00:34:07.826 "data_size": 63488 00:34:07.826 } 00:34:07.826 ] 00:34:07.826 }' 00:34:07.826 01:00:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:07.826 01:00:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:08.760 01:00:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:08.760 01:00:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:08.760 01:00:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:08.760 01:00:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:08.760 01:00:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:08.760 01:00:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:08.760 01:00:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:08.760 01:00:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:08.760 "name": "raid_bdev1", 00:34:08.760 "uuid": "14f795d0-7307-40fb-921e-6ee7eb746130", 00:34:08.760 "strip_size_kb": 64, 00:34:08.760 "state": "online", 00:34:08.760 "raid_level": "raid5f", 00:34:08.760 "superblock": true, 00:34:08.760 "num_base_bdevs": 3, 00:34:08.760 "num_base_bdevs_discovered": 3, 00:34:08.760 "num_base_bdevs_operational": 3, 00:34:08.760 "base_bdevs_list": [ 00:34:08.760 { 00:34:08.760 "name": "spare", 00:34:08.760 "uuid": "24a6c848-6d1f-52ca-820b-b91f6d0b8885", 00:34:08.760 "is_configured": true, 00:34:08.760 "data_offset": 2048, 00:34:08.760 "data_size": 63488 00:34:08.760 }, 00:34:08.760 { 00:34:08.760 "name": "BaseBdev2", 00:34:08.760 "uuid": "ac1f0dc7-4216-5f16-9872-7b2e1ee4fe67", 00:34:08.760 "is_configured": true, 00:34:08.760 "data_offset": 2048, 00:34:08.760 "data_size": 63488 00:34:08.760 }, 00:34:08.760 { 00:34:08.760 "name": "BaseBdev3", 00:34:08.760 "uuid": "c01a62f6-3ea5-554a-a091-33fb8c7e7655", 00:34:08.760 "is_configured": true, 00:34:08.760 "data_offset": 2048, 00:34:08.760 "data_size": 63488 00:34:08.760 } 00:34:08.760 ] 00:34:08.760 }' 00:34:08.760 01:00:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:08.760 01:00:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:08.760 01:00:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:08.760 01:00:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:08.760 01:00:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:34:08.760 01:00:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:09.019 01:00:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:34:09.019 01:00:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:34:09.288 [2024-07-25 01:00:31.887098] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:09.288 01:00:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:34:09.288 01:00:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:09.288 01:00:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:09.288 01:00:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid5f 00:34:09.288 01:00:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:09.288 01:00:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:34:09.288 01:00:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:09.288 01:00:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:09.288 01:00:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:09.288 01:00:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:09.288 01:00:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:09.288 01:00:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:09.549 01:00:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:09.549 "name": "raid_bdev1", 00:34:09.549 "uuid": "14f795d0-7307-40fb-921e-6ee7eb746130", 00:34:09.549 "strip_size_kb": 64, 00:34:09.549 "state": "online", 00:34:09.549 "raid_level": "raid5f", 00:34:09.549 "superblock": true, 00:34:09.549 "num_base_bdevs": 3, 00:34:09.549 "num_base_bdevs_discovered": 2, 00:34:09.549 "num_base_bdevs_operational": 2, 00:34:09.549 "base_bdevs_list": [ 00:34:09.549 { 00:34:09.549 "name": null, 00:34:09.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:09.549 "is_configured": false, 00:34:09.549 "data_offset": 2048, 00:34:09.549 "data_size": 63488 00:34:09.549 }, 00:34:09.549 { 00:34:09.549 "name": "BaseBdev2", 00:34:09.549 "uuid": "ac1f0dc7-4216-5f16-9872-7b2e1ee4fe67", 00:34:09.549 "is_configured": true, 00:34:09.549 "data_offset": 2048, 00:34:09.549 "data_size": 63488 00:34:09.549 }, 00:34:09.549 { 00:34:09.549 "name": "BaseBdev3", 00:34:09.549 "uuid": "c01a62f6-3ea5-554a-a091-33fb8c7e7655", 00:34:09.549 "is_configured": true, 00:34:09.549 "data_offset": 2048, 00:34:09.549 "data_size": 63488 00:34:09.549 } 00:34:09.549 ] 00:34:09.549 }' 00:34:09.549 01:00:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:09.549 01:00:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:10.115 01:00:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:34:10.374 [2024-07-25 01:00:32.915074] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:10.374 [2024-07-25 01:00:32.915383] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:34:10.374 [2024-07-25 01:00:32.915489] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
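The pair of notices just above explains why a rebuild starts rather than a plain hot-add: the superblock found on the returning spare carries sequence number 4 while the live raid_bdev1 superblock is at 5, so the device is recognized as a stale member and re-added through the rebuild path. The driving RPC sequence is the one traced in the surrounding entries (remove the spare, verify the two-member state, add it back, then wait out the rebuild); a condensed sketch with socket and bdev names taken verbatim from the trace (the polling loop is an assumption, the script itself sleeps and re-checks the process fields):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  "$rpc_py" -s "$sock" bdev_raid_remove_base_bdev spare
  "$rpc_py" -s "$sock" bdev_raid_add_base_bdev raid_bdev1 spare
  sleep 1                                        # give the rebuild a chance to start
  until [[ $("$rpc_py" -s "$sock" bdev_raid_get_bdevs all |
             jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"') == none ]]; do
      sleep 1                                    # wait for the rebuild process to finish
  done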
00:34:10.374 [2024-07-25 01:00:32.915577] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:10.374 [2024-07-25 01:00:32.930895] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047a40 00:34:10.374 [2024-07-25 01:00:32.938172] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:10.374 01:00:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:34:11.307 01:00:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:11.307 01:00:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:11.307 01:00:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:11.307 01:00:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:11.307 01:00:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:11.565 01:00:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:11.565 01:00:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:11.565 01:00:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:11.565 "name": "raid_bdev1", 00:34:11.565 "uuid": "14f795d0-7307-40fb-921e-6ee7eb746130", 00:34:11.565 "strip_size_kb": 64, 00:34:11.565 "state": "online", 00:34:11.565 "raid_level": "raid5f", 00:34:11.565 "superblock": true, 00:34:11.565 "num_base_bdevs": 3, 00:34:11.565 "num_base_bdevs_discovered": 3, 00:34:11.565 "num_base_bdevs_operational": 3, 00:34:11.565 "process": { 00:34:11.565 "type": "rebuild", 00:34:11.565 "target": "spare", 00:34:11.565 "progress": { 00:34:11.565 "blocks": 24576, 00:34:11.565 "percent": 19 00:34:11.565 } 00:34:11.565 }, 00:34:11.565 "base_bdevs_list": [ 00:34:11.565 { 00:34:11.565 "name": "spare", 00:34:11.565 "uuid": "24a6c848-6d1f-52ca-820b-b91f6d0b8885", 00:34:11.565 "is_configured": true, 00:34:11.565 "data_offset": 2048, 00:34:11.565 "data_size": 63488 00:34:11.565 }, 00:34:11.565 { 00:34:11.565 "name": "BaseBdev2", 00:34:11.565 "uuid": "ac1f0dc7-4216-5f16-9872-7b2e1ee4fe67", 00:34:11.565 "is_configured": true, 00:34:11.565 "data_offset": 2048, 00:34:11.565 "data_size": 63488 00:34:11.565 }, 00:34:11.565 { 00:34:11.565 "name": "BaseBdev3", 00:34:11.565 "uuid": "c01a62f6-3ea5-554a-a091-33fb8c7e7655", 00:34:11.565 "is_configured": true, 00:34:11.565 "data_offset": 2048, 00:34:11.565 "data_size": 63488 00:34:11.565 } 00:34:11.565 ] 00:34:11.565 }' 00:34:11.822 01:00:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:11.822 01:00:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:11.822 01:00:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:11.822 01:00:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:11.822 01:00:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:34:12.089 [2024-07-25 01:00:34.536634] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:12.089 [2024-07-25 
01:00:34.552783] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:12.089 [2024-07-25 01:00:34.552969] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:12.089 [2024-07-25 01:00:34.553022] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:12.089 [2024-07-25 01:00:34.553101] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:12.089 01:00:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:34:12.089 01:00:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:12.089 01:00:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:12.089 01:00:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:12.089 01:00:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:12.089 01:00:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:34:12.089 01:00:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:12.089 01:00:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:12.089 01:00:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:12.089 01:00:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:12.089 01:00:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:12.089 01:00:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:12.347 01:00:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:12.347 "name": "raid_bdev1", 00:34:12.347 "uuid": "14f795d0-7307-40fb-921e-6ee7eb746130", 00:34:12.347 "strip_size_kb": 64, 00:34:12.347 "state": "online", 00:34:12.347 "raid_level": "raid5f", 00:34:12.347 "superblock": true, 00:34:12.347 "num_base_bdevs": 3, 00:34:12.347 "num_base_bdevs_discovered": 2, 00:34:12.347 "num_base_bdevs_operational": 2, 00:34:12.347 "base_bdevs_list": [ 00:34:12.347 { 00:34:12.347 "name": null, 00:34:12.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:12.347 "is_configured": false, 00:34:12.347 "data_offset": 2048, 00:34:12.347 "data_size": 63488 00:34:12.347 }, 00:34:12.347 { 00:34:12.347 "name": "BaseBdev2", 00:34:12.348 "uuid": "ac1f0dc7-4216-5f16-9872-7b2e1ee4fe67", 00:34:12.348 "is_configured": true, 00:34:12.348 "data_offset": 2048, 00:34:12.348 "data_size": 63488 00:34:12.348 }, 00:34:12.348 { 00:34:12.348 "name": "BaseBdev3", 00:34:12.348 "uuid": "c01a62f6-3ea5-554a-a091-33fb8c7e7655", 00:34:12.348 "is_configured": true, 00:34:12.348 "data_offset": 2048, 00:34:12.348 "data_size": 63488 00:34:12.348 } 00:34:12.348 ] 00:34:12.348 }' 00:34:12.348 01:00:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:12.348 01:00:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:12.915 01:00:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:34:13.173 
[2024-07-25 01:00:35.573019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:13.173 [2024-07-25 01:00:35.573253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:13.173 [2024-07-25 01:00:35.573322] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:34:13.173 [2024-07-25 01:00:35.573423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:13.173 [2024-07-25 01:00:35.573981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:13.173 [2024-07-25 01:00:35.574134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:13.173 [2024-07-25 01:00:35.574380] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:13.173 [2024-07-25 01:00:35.574474] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:34:13.173 [2024-07-25 01:00:35.574551] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:34:13.173 [2024-07-25 01:00:35.574634] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:13.173 [2024-07-25 01:00:35.589751] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047d80 00:34:13.173 spare 00:34:13.173 [2024-07-25 01:00:35.597391] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:13.174 01:00:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:34:14.106 01:00:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:14.106 01:00:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:14.106 01:00:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:14.106 01:00:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:14.106 01:00:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:14.106 01:00:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:14.106 01:00:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:14.364 01:00:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:14.364 "name": "raid_bdev1", 00:34:14.364 "uuid": "14f795d0-7307-40fb-921e-6ee7eb746130", 00:34:14.364 "strip_size_kb": 64, 00:34:14.364 "state": "online", 00:34:14.364 "raid_level": "raid5f", 00:34:14.364 "superblock": true, 00:34:14.364 "num_base_bdevs": 3, 00:34:14.364 "num_base_bdevs_discovered": 3, 00:34:14.364 "num_base_bdevs_operational": 3, 00:34:14.364 "process": { 00:34:14.364 "type": "rebuild", 00:34:14.364 "target": "spare", 00:34:14.364 "progress": { 00:34:14.364 "blocks": 24576, 00:34:14.364 "percent": 19 00:34:14.364 } 00:34:14.364 }, 00:34:14.364 "base_bdevs_list": [ 00:34:14.364 { 00:34:14.364 "name": "spare", 00:34:14.364 "uuid": "24a6c848-6d1f-52ca-820b-b91f6d0b8885", 00:34:14.364 "is_configured": true, 00:34:14.364 "data_offset": 2048, 00:34:14.364 "data_size": 63488 00:34:14.364 }, 00:34:14.364 { 00:34:14.364 "name": "BaseBdev2", 00:34:14.364 "uuid": 
"ac1f0dc7-4216-5f16-9872-7b2e1ee4fe67", 00:34:14.364 "is_configured": true, 00:34:14.364 "data_offset": 2048, 00:34:14.364 "data_size": 63488 00:34:14.364 }, 00:34:14.364 { 00:34:14.365 "name": "BaseBdev3", 00:34:14.365 "uuid": "c01a62f6-3ea5-554a-a091-33fb8c7e7655", 00:34:14.365 "is_configured": true, 00:34:14.365 "data_offset": 2048, 00:34:14.365 "data_size": 63488 00:34:14.365 } 00:34:14.365 ] 00:34:14.365 }' 00:34:14.365 01:00:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:14.365 01:00:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:14.365 01:00:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:14.365 01:00:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:14.365 01:00:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:34:14.623 [2024-07-25 01:00:37.187042] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:14.623 [2024-07-25 01:00:37.212049] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:14.623 [2024-07-25 01:00:37.212300] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:14.623 [2024-07-25 01:00:37.212353] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:14.623 [2024-07-25 01:00:37.212432] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:14.623 01:00:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:34:14.623 01:00:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:14.623 01:00:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:14.623 01:00:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:14.623 01:00:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:14.623 01:00:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:34:14.623 01:00:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:14.623 01:00:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:14.623 01:00:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:14.623 01:00:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:14.623 01:00:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:14.623 01:00:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:14.881 01:00:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:14.881 "name": "raid_bdev1", 00:34:14.881 "uuid": "14f795d0-7307-40fb-921e-6ee7eb746130", 00:34:14.881 "strip_size_kb": 64, 00:34:14.881 "state": "online", 00:34:14.881 "raid_level": "raid5f", 00:34:14.881 "superblock": true, 00:34:14.881 "num_base_bdevs": 3, 00:34:14.881 "num_base_bdevs_discovered": 2, 00:34:14.881 
"num_base_bdevs_operational": 2, 00:34:14.881 "base_bdevs_list": [ 00:34:14.881 { 00:34:14.881 "name": null, 00:34:14.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:14.881 "is_configured": false, 00:34:14.881 "data_offset": 2048, 00:34:14.881 "data_size": 63488 00:34:14.881 }, 00:34:14.881 { 00:34:14.881 "name": "BaseBdev2", 00:34:14.881 "uuid": "ac1f0dc7-4216-5f16-9872-7b2e1ee4fe67", 00:34:14.881 "is_configured": true, 00:34:14.881 "data_offset": 2048, 00:34:14.881 "data_size": 63488 00:34:14.881 }, 00:34:14.881 { 00:34:14.881 "name": "BaseBdev3", 00:34:14.881 "uuid": "c01a62f6-3ea5-554a-a091-33fb8c7e7655", 00:34:14.881 "is_configured": true, 00:34:14.881 "data_offset": 2048, 00:34:14.881 "data_size": 63488 00:34:14.881 } 00:34:14.881 ] 00:34:14.881 }' 00:34:14.881 01:00:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:14.881 01:00:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:15.448 01:00:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:15.448 01:00:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:15.448 01:00:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:15.448 01:00:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:15.448 01:00:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:15.448 01:00:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:15.448 01:00:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:15.706 01:00:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:15.706 "name": "raid_bdev1", 00:34:15.706 "uuid": "14f795d0-7307-40fb-921e-6ee7eb746130", 00:34:15.706 "strip_size_kb": 64, 00:34:15.706 "state": "online", 00:34:15.706 "raid_level": "raid5f", 00:34:15.706 "superblock": true, 00:34:15.706 "num_base_bdevs": 3, 00:34:15.706 "num_base_bdevs_discovered": 2, 00:34:15.706 "num_base_bdevs_operational": 2, 00:34:15.706 "base_bdevs_list": [ 00:34:15.706 { 00:34:15.706 "name": null, 00:34:15.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:15.706 "is_configured": false, 00:34:15.706 "data_offset": 2048, 00:34:15.706 "data_size": 63488 00:34:15.706 }, 00:34:15.706 { 00:34:15.706 "name": "BaseBdev2", 00:34:15.706 "uuid": "ac1f0dc7-4216-5f16-9872-7b2e1ee4fe67", 00:34:15.706 "is_configured": true, 00:34:15.706 "data_offset": 2048, 00:34:15.706 "data_size": 63488 00:34:15.706 }, 00:34:15.706 { 00:34:15.706 "name": "BaseBdev3", 00:34:15.706 "uuid": "c01a62f6-3ea5-554a-a091-33fb8c7e7655", 00:34:15.706 "is_configured": true, 00:34:15.706 "data_offset": 2048, 00:34:15.706 "data_size": 63488 00:34:15.706 } 00:34:15.706 ] 00:34:15.706 }' 00:34:15.706 01:00:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:15.706 01:00:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:15.706 01:00:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:15.706 01:00:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:15.706 01:00:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:34:15.964 01:00:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:16.222 [2024-07-25 01:00:38.748657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:16.222 [2024-07-25 01:00:38.748973] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:16.222 [2024-07-25 01:00:38.749057] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:34:16.222 [2024-07-25 01:00:38.749171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:16.222 [2024-07-25 01:00:38.749700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:16.222 [2024-07-25 01:00:38.749846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:16.222 [2024-07-25 01:00:38.750071] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:34:16.222 [2024-07-25 01:00:38.750184] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:16.222 [2024-07-25 01:00:38.750292] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:16.222 BaseBdev1 00:34:16.222 01:00:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:34:17.156 01:00:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:34:17.156 01:00:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:17.156 01:00:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:17.156 01:00:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:17.156 01:00:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:17.156 01:00:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:34:17.156 01:00:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:17.156 01:00:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:17.156 01:00:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:17.156 01:00:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:17.156 01:00:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:17.156 01:00:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:17.415 01:00:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:17.415 "name": "raid_bdev1", 00:34:17.415 "uuid": "14f795d0-7307-40fb-921e-6ee7eb746130", 00:34:17.415 "strip_size_kb": 64, 00:34:17.415 "state": "online", 00:34:17.415 "raid_level": "raid5f", 00:34:17.415 "superblock": true, 00:34:17.415 "num_base_bdevs": 3, 00:34:17.415 "num_base_bdevs_discovered": 2, 00:34:17.415 
"num_base_bdevs_operational": 2, 00:34:17.415 "base_bdevs_list": [ 00:34:17.415 { 00:34:17.415 "name": null, 00:34:17.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:17.415 "is_configured": false, 00:34:17.415 "data_offset": 2048, 00:34:17.415 "data_size": 63488 00:34:17.415 }, 00:34:17.415 { 00:34:17.415 "name": "BaseBdev2", 00:34:17.415 "uuid": "ac1f0dc7-4216-5f16-9872-7b2e1ee4fe67", 00:34:17.415 "is_configured": true, 00:34:17.415 "data_offset": 2048, 00:34:17.415 "data_size": 63488 00:34:17.415 }, 00:34:17.415 { 00:34:17.415 "name": "BaseBdev3", 00:34:17.415 "uuid": "c01a62f6-3ea5-554a-a091-33fb8c7e7655", 00:34:17.415 "is_configured": true, 00:34:17.415 "data_offset": 2048, 00:34:17.415 "data_size": 63488 00:34:17.415 } 00:34:17.415 ] 00:34:17.415 }' 00:34:17.415 01:00:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:17.415 01:00:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:17.982 01:00:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:17.982 01:00:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:17.982 01:00:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:17.982 01:00:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:17.982 01:00:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:17.982 01:00:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:17.982 01:00:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:18.241 01:00:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:18.241 "name": "raid_bdev1", 00:34:18.241 "uuid": "14f795d0-7307-40fb-921e-6ee7eb746130", 00:34:18.241 "strip_size_kb": 64, 00:34:18.241 "state": "online", 00:34:18.241 "raid_level": "raid5f", 00:34:18.241 "superblock": true, 00:34:18.241 "num_base_bdevs": 3, 00:34:18.241 "num_base_bdevs_discovered": 2, 00:34:18.241 "num_base_bdevs_operational": 2, 00:34:18.241 "base_bdevs_list": [ 00:34:18.241 { 00:34:18.241 "name": null, 00:34:18.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:18.241 "is_configured": false, 00:34:18.241 "data_offset": 2048, 00:34:18.241 "data_size": 63488 00:34:18.241 }, 00:34:18.241 { 00:34:18.241 "name": "BaseBdev2", 00:34:18.241 "uuid": "ac1f0dc7-4216-5f16-9872-7b2e1ee4fe67", 00:34:18.241 "is_configured": true, 00:34:18.241 "data_offset": 2048, 00:34:18.241 "data_size": 63488 00:34:18.241 }, 00:34:18.241 { 00:34:18.241 "name": "BaseBdev3", 00:34:18.241 "uuid": "c01a62f6-3ea5-554a-a091-33fb8c7e7655", 00:34:18.241 "is_configured": true, 00:34:18.241 "data_offset": 2048, 00:34:18.241 "data_size": 63488 00:34:18.241 } 00:34:18.241 ] 00:34:18.241 }' 00:34:18.241 01:00:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:18.241 01:00:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:18.241 01:00:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:18.500 01:00:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:18.500 01:00:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:18.500 01:00:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:34:18.500 01:00:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:18.500 01:00:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:18.500 01:00:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:18.500 01:00:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:18.500 01:00:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:18.500 01:00:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:18.500 01:00:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:18.500 01:00:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:18.500 01:00:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:34:18.500 01:00:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:18.759 [2024-07-25 01:00:41.185413] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:18.759 [2024-07-25 01:00:41.185723] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:18.759 [2024-07-25 01:00:41.185828] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:18.759 request: 00:34:18.759 { 00:34:18.759 "base_bdev": "BaseBdev1", 00:34:18.759 "raid_bdev": "raid_bdev1", 00:34:18.759 "method": "bdev_raid_add_base_bdev", 00:34:18.759 "req_id": 1 00:34:18.759 } 00:34:18.759 Got JSON-RPC error response 00:34:18.759 response: 00:34:18.759 { 00:34:18.759 "code": -22, 00:34:18.759 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:34:18.759 } 00:34:18.759 01:00:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:34:18.759 01:00:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:18.759 01:00:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:18.759 01:00:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:18.759 01:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:34:19.693 01:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:34:19.693 01:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:19.693 01:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
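The NOT wrapper traced above is a negative assertion from autotest_common.sh: it runs the RPC and counts a non-zero exit status as success, so this step passes only because re-adding BaseBdev1 to raid_bdev1 is rejected with JSON-RPC error -22 (Invalid argument). Roughly, with the same socket and script paths as this run:

  if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
         bdev_raid_add_base_bdev raid_bdev1 BaseBdev1; then
      echo "bdev_raid_add_base_bdev unexpectedly succeeded"; exit 1
  fi
  # the stale superblock (seq_number 1 vs 5, uuid no longer present) means the raid bdev
  # refuses this base bdev, which is exactly what the test expects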
00:34:19.693 01:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:19.693 01:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:19.693 01:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:34:19.693 01:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:19.693 01:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:19.693 01:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:19.693 01:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:19.693 01:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:19.693 01:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:19.951 01:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:19.951 "name": "raid_bdev1", 00:34:19.951 "uuid": "14f795d0-7307-40fb-921e-6ee7eb746130", 00:34:19.951 "strip_size_kb": 64, 00:34:19.951 "state": "online", 00:34:19.951 "raid_level": "raid5f", 00:34:19.951 "superblock": true, 00:34:19.951 "num_base_bdevs": 3, 00:34:19.951 "num_base_bdevs_discovered": 2, 00:34:19.951 "num_base_bdevs_operational": 2, 00:34:19.951 "base_bdevs_list": [ 00:34:19.951 { 00:34:19.951 "name": null, 00:34:19.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:19.951 "is_configured": false, 00:34:19.951 "data_offset": 2048, 00:34:19.951 "data_size": 63488 00:34:19.951 }, 00:34:19.951 { 00:34:19.951 "name": "BaseBdev2", 00:34:19.951 "uuid": "ac1f0dc7-4216-5f16-9872-7b2e1ee4fe67", 00:34:19.951 "is_configured": true, 00:34:19.951 "data_offset": 2048, 00:34:19.951 "data_size": 63488 00:34:19.951 }, 00:34:19.951 { 00:34:19.951 "name": "BaseBdev3", 00:34:19.951 "uuid": "c01a62f6-3ea5-554a-a091-33fb8c7e7655", 00:34:19.951 "is_configured": true, 00:34:19.951 "data_offset": 2048, 00:34:19.951 "data_size": 63488 00:34:19.951 } 00:34:19.951 ] 00:34:19.951 }' 00:34:19.951 01:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:19.951 01:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:20.518 01:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:20.518 01:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:20.518 01:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:20.518 01:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:20.518 01:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:20.518 01:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:20.518 01:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:20.518 01:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:20.518 "name": "raid_bdev1", 00:34:20.518 "uuid": "14f795d0-7307-40fb-921e-6ee7eb746130", 00:34:20.518 
"strip_size_kb": 64, 00:34:20.518 "state": "online", 00:34:20.518 "raid_level": "raid5f", 00:34:20.518 "superblock": true, 00:34:20.518 "num_base_bdevs": 3, 00:34:20.518 "num_base_bdevs_discovered": 2, 00:34:20.518 "num_base_bdevs_operational": 2, 00:34:20.518 "base_bdevs_list": [ 00:34:20.518 { 00:34:20.518 "name": null, 00:34:20.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:20.518 "is_configured": false, 00:34:20.518 "data_offset": 2048, 00:34:20.518 "data_size": 63488 00:34:20.518 }, 00:34:20.518 { 00:34:20.518 "name": "BaseBdev2", 00:34:20.518 "uuid": "ac1f0dc7-4216-5f16-9872-7b2e1ee4fe67", 00:34:20.518 "is_configured": true, 00:34:20.518 "data_offset": 2048, 00:34:20.518 "data_size": 63488 00:34:20.518 }, 00:34:20.518 { 00:34:20.518 "name": "BaseBdev3", 00:34:20.518 "uuid": "c01a62f6-3ea5-554a-a091-33fb8c7e7655", 00:34:20.518 "is_configured": true, 00:34:20.518 "data_offset": 2048, 00:34:20.518 "data_size": 63488 00:34:20.518 } 00:34:20.518 ] 00:34:20.518 }' 00:34:20.777 01:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:20.777 01:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:20.777 01:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:20.777 01:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:20.777 01:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 153591 00:34:20.777 01:00:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 153591 ']' 00:34:20.777 01:00:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 153591 00:34:20.777 01:00:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:34:20.777 01:00:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:20.777 01:00:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 153591 00:34:20.777 01:00:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:20.777 01:00:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:20.777 01:00:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 153591' 00:34:20.777 killing process with pid 153591 00:34:20.777 01:00:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 153591 00:34:20.777 Received shutdown signal, test time was about 60.000000 seconds 00:34:20.777 00:34:20.777 Latency(us) 00:34:20.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:20.777 =================================================================================================================== 00:34:20.777 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:20.777 01:00:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 153591 00:34:20.777 [2024-07-25 01:00:43.277435] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:20.777 [2024-07-25 01:00:43.277545] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:20.778 [2024-07-25 01:00:43.277607] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:20.778 [2024-07-25 01:00:43.277616] 
bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state offline 00:34:21.036 [2024-07-25 01:00:43.650598] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:22.411 ************************************ 00:34:22.411 END TEST raid5f_rebuild_test_sb 00:34:22.411 ************************************ 00:34:22.411 01:00:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:34:22.411 00:34:22.411 real 0m34.640s 00:34:22.411 user 0m53.122s 00:34:22.411 sys 0m4.369s 00:34:22.411 01:00:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:22.411 01:00:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:22.411 01:00:44 bdev_raid -- bdev/bdev_raid.sh@885 -- # for n in {3..4} 00:34:22.411 01:00:44 bdev_raid -- bdev/bdev_raid.sh@886 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:34:22.411 01:00:44 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:34:22.411 01:00:44 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:22.411 01:00:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:22.411 ************************************ 00:34:22.411 START TEST raid5f_state_function_test 00:34:22.411 ************************************ 00:34:22.411 01:00:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid5f 4 false 00:34:22.411 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:34:22.411 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:34:22.411 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:34:22.411 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:34:22.411 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:34:22.411 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:22.411 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:34:22.411 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:22.411 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:22.411 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:34:22.411 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:22.411 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:22.411 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:34:22.411 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:22.411 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:22.411 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:34:22.411 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:22.411 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:22.411 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 
'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:34:22.411 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:34:22.411 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:34:22.411 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:34:22.412 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:34:22.412 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:34:22.412 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:34:22.412 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:34:22.412 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:34:22.412 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:34:22.412 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:34:22.412 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=154521 00:34:22.412 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 154521' 00:34:22.412 Process raid pid: 154521 00:34:22.412 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:34:22.412 01:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 154521 /var/tmp/spdk-raid.sock 00:34:22.412 01:00:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 154521 ']' 00:34:22.412 01:00:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:22.412 01:00:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:22.412 01:00:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:22.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:22.412 01:00:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:22.412 01:00:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:22.670 [2024-07-25 01:00:45.064999] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
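raid_state_function_test drives a standalone bdev_svc application rather than a full SPDK target: the script above launches it on the raid-specific RPC socket, waits for that socket to come up, and only then issues bdev_raid_create for Existed_Raid. A rough sketch of that launch sequence, using the binaries and arguments visible in this run:

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock   # blocks until the UNIX-domain RPC socket accepts connections
  # the test then creates the array from base bdevs that do not exist yet, leaving it "configuring":
  #   rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f \
  #          -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid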
00:34:22.670 [2024-07-25 01:00:45.065433] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:22.670 [2024-07-25 01:00:45.247809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:22.928 [2024-07-25 01:00:45.440835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:23.187 [2024-07-25 01:00:45.646676] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:23.446 01:00:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:23.446 01:00:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:34:23.446 01:00:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:34:23.705 [2024-07-25 01:00:46.125140] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:23.705 [2024-07-25 01:00:46.125392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:23.705 [2024-07-25 01:00:46.125516] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:23.705 [2024-07-25 01:00:46.125574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:23.705 [2024-07-25 01:00:46.125648] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:23.705 [2024-07-25 01:00:46.125692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:23.705 [2024-07-25 01:00:46.125717] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:34:23.705 [2024-07-25 01:00:46.125798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:34:23.705 01:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:23.705 01:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:23.705 01:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:23.705 01:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:23.705 01:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:23.705 01:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:23.705 01:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:23.705 01:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:23.705 01:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:23.705 01:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:23.705 01:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:23.705 01:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:23.964 01:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:23.964 "name": "Existed_Raid", 00:34:23.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:23.964 "strip_size_kb": 64, 00:34:23.964 "state": "configuring", 00:34:23.964 "raid_level": "raid5f", 00:34:23.964 "superblock": false, 00:34:23.964 "num_base_bdevs": 4, 00:34:23.964 "num_base_bdevs_discovered": 0, 00:34:23.964 "num_base_bdevs_operational": 4, 00:34:23.964 "base_bdevs_list": [ 00:34:23.964 { 00:34:23.964 "name": "BaseBdev1", 00:34:23.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:23.964 "is_configured": false, 00:34:23.964 "data_offset": 0, 00:34:23.964 "data_size": 0 00:34:23.964 }, 00:34:23.964 { 00:34:23.964 "name": "BaseBdev2", 00:34:23.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:23.964 "is_configured": false, 00:34:23.964 "data_offset": 0, 00:34:23.964 "data_size": 0 00:34:23.964 }, 00:34:23.964 { 00:34:23.964 "name": "BaseBdev3", 00:34:23.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:23.964 "is_configured": false, 00:34:23.964 "data_offset": 0, 00:34:23.964 "data_size": 0 00:34:23.964 }, 00:34:23.964 { 00:34:23.964 "name": "BaseBdev4", 00:34:23.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:23.964 "is_configured": false, 00:34:23.964 "data_offset": 0, 00:34:23.964 "data_size": 0 00:34:23.964 } 00:34:23.964 ] 00:34:23.964 }' 00:34:23.964 01:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:23.964 01:00:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:24.532 01:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:24.532 [2024-07-25 01:00:47.097249] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:24.532 [2024-07-25 01:00:47.097400] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:34:24.532 01:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:34:24.790 [2024-07-25 01:00:47.281291] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:24.790 [2024-07-25 01:00:47.281351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:24.790 [2024-07-25 01:00:47.281360] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:24.790 [2024-07-25 01:00:47.281405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:24.790 [2024-07-25 01:00:47.281413] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:24.790 [2024-07-25 01:00:47.281442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:24.790 [2024-07-25 01:00:47.281449] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:34:24.790 [2024-07-25 01:00:47.281471] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:34:24.790 01:00:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:34:25.049 [2024-07-25 01:00:47.580526] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:25.049 BaseBdev1 00:34:25.049 01:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:34:25.049 01:00:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:34:25.049 01:00:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:25.049 01:00:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:34:25.049 01:00:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:25.049 01:00:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:25.049 01:00:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:25.307 01:00:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:25.307 [ 00:34:25.307 { 00:34:25.307 "name": "BaseBdev1", 00:34:25.307 "aliases": [ 00:34:25.307 "30eac619-b5a2-4078-a893-07c22131a50a" 00:34:25.307 ], 00:34:25.307 "product_name": "Malloc disk", 00:34:25.307 "block_size": 512, 00:34:25.307 "num_blocks": 65536, 00:34:25.307 "uuid": "30eac619-b5a2-4078-a893-07c22131a50a", 00:34:25.307 "assigned_rate_limits": { 00:34:25.307 "rw_ios_per_sec": 0, 00:34:25.307 "rw_mbytes_per_sec": 0, 00:34:25.307 "r_mbytes_per_sec": 0, 00:34:25.307 "w_mbytes_per_sec": 0 00:34:25.307 }, 00:34:25.307 "claimed": true, 00:34:25.307 "claim_type": "exclusive_write", 00:34:25.307 "zoned": false, 00:34:25.307 "supported_io_types": { 00:34:25.307 "read": true, 00:34:25.307 "write": true, 00:34:25.307 "unmap": true, 00:34:25.307 "flush": true, 00:34:25.307 "reset": true, 00:34:25.307 "nvme_admin": false, 00:34:25.307 "nvme_io": false, 00:34:25.307 "nvme_io_md": false, 00:34:25.307 "write_zeroes": true, 00:34:25.307 "zcopy": true, 00:34:25.307 "get_zone_info": false, 00:34:25.307 "zone_management": false, 00:34:25.307 "zone_append": false, 00:34:25.307 "compare": false, 00:34:25.307 "compare_and_write": false, 00:34:25.307 "abort": true, 00:34:25.307 "seek_hole": false, 00:34:25.307 "seek_data": false, 00:34:25.307 "copy": true, 00:34:25.307 "nvme_iov_md": false 00:34:25.307 }, 00:34:25.307 "memory_domains": [ 00:34:25.307 { 00:34:25.307 "dma_device_id": "system", 00:34:25.307 "dma_device_type": 1 00:34:25.307 }, 00:34:25.307 { 00:34:25.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:25.307 "dma_device_type": 2 00:34:25.307 } 00:34:25.307 ], 00:34:25.307 "driver_specific": {} 00:34:25.307 } 00:34:25.307 ] 00:34:25.566 01:00:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:34:25.566 01:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:25.566 01:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:25.566 01:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:25.566 01:00:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:25.566 01:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:25.566 01:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:25.566 01:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:25.566 01:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:25.566 01:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:25.566 01:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:25.566 01:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:25.566 01:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:25.566 01:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:25.566 "name": "Existed_Raid", 00:34:25.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:25.566 "strip_size_kb": 64, 00:34:25.566 "state": "configuring", 00:34:25.566 "raid_level": "raid5f", 00:34:25.566 "superblock": false, 00:34:25.566 "num_base_bdevs": 4, 00:34:25.566 "num_base_bdevs_discovered": 1, 00:34:25.566 "num_base_bdevs_operational": 4, 00:34:25.566 "base_bdevs_list": [ 00:34:25.566 { 00:34:25.566 "name": "BaseBdev1", 00:34:25.566 "uuid": "30eac619-b5a2-4078-a893-07c22131a50a", 00:34:25.566 "is_configured": true, 00:34:25.566 "data_offset": 0, 00:34:25.566 "data_size": 65536 00:34:25.566 }, 00:34:25.566 { 00:34:25.566 "name": "BaseBdev2", 00:34:25.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:25.566 "is_configured": false, 00:34:25.566 "data_offset": 0, 00:34:25.566 "data_size": 0 00:34:25.566 }, 00:34:25.566 { 00:34:25.566 "name": "BaseBdev3", 00:34:25.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:25.566 "is_configured": false, 00:34:25.566 "data_offset": 0, 00:34:25.566 "data_size": 0 00:34:25.566 }, 00:34:25.566 { 00:34:25.566 "name": "BaseBdev4", 00:34:25.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:25.566 "is_configured": false, 00:34:25.566 "data_offset": 0, 00:34:25.566 "data_size": 0 00:34:25.566 } 00:34:25.566 ] 00:34:25.566 }' 00:34:25.566 01:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:25.566 01:00:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:26.134 01:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:26.405 [2024-07-25 01:00:48.880798] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:26.405 [2024-07-25 01:00:48.880845] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:34:26.405 01:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:34:26.689 [2024-07-25 01:00:49.152884] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:34:26.689 [2024-07-25 01:00:49.154820] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:26.689 [2024-07-25 01:00:49.154875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:26.689 [2024-07-25 01:00:49.154885] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:26.689 [2024-07-25 01:00:49.154926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:26.689 [2024-07-25 01:00:49.154934] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:34:26.689 [2024-07-25 01:00:49.154950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:34:26.689 01:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:34:26.689 01:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:34:26.689 01:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:26.689 01:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:26.689 01:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:26.689 01:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:26.689 01:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:26.689 01:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:26.689 01:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:26.689 01:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:26.689 01:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:26.689 01:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:26.689 01:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:26.689 01:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:26.948 01:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:26.948 "name": "Existed_Raid", 00:34:26.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:26.948 "strip_size_kb": 64, 00:34:26.948 "state": "configuring", 00:34:26.948 "raid_level": "raid5f", 00:34:26.948 "superblock": false, 00:34:26.948 "num_base_bdevs": 4, 00:34:26.948 "num_base_bdevs_discovered": 1, 00:34:26.948 "num_base_bdevs_operational": 4, 00:34:26.948 "base_bdevs_list": [ 00:34:26.948 { 00:34:26.948 "name": "BaseBdev1", 00:34:26.948 "uuid": "30eac619-b5a2-4078-a893-07c22131a50a", 00:34:26.948 "is_configured": true, 00:34:26.948 "data_offset": 0, 00:34:26.948 "data_size": 65536 00:34:26.948 }, 00:34:26.948 { 00:34:26.948 "name": "BaseBdev2", 00:34:26.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:26.948 "is_configured": false, 00:34:26.948 "data_offset": 0, 00:34:26.948 "data_size": 0 00:34:26.948 }, 00:34:26.948 { 00:34:26.948 "name": "BaseBdev3", 00:34:26.948 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:34:26.948 "is_configured": false, 00:34:26.948 "data_offset": 0, 00:34:26.948 "data_size": 0 00:34:26.948 }, 00:34:26.948 { 00:34:26.948 "name": "BaseBdev4", 00:34:26.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:26.948 "is_configured": false, 00:34:26.948 "data_offset": 0, 00:34:26.948 "data_size": 0 00:34:26.948 } 00:34:26.948 ] 00:34:26.948 }' 00:34:26.948 01:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:26.949 01:00:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:27.516 01:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:34:27.516 [2024-07-25 01:00:50.137830] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:27.516 BaseBdev2 00:34:27.516 01:00:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:34:27.516 01:00:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:34:27.516 01:00:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:27.516 01:00:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:34:27.516 01:00:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:27.516 01:00:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:27.516 01:00:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:27.774 01:00:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:28.033 [ 00:34:28.033 { 00:34:28.033 "name": "BaseBdev2", 00:34:28.033 "aliases": [ 00:34:28.033 "eebf13ea-42cd-4b5f-8248-a511cebfb533" 00:34:28.033 ], 00:34:28.033 "product_name": "Malloc disk", 00:34:28.033 "block_size": 512, 00:34:28.033 "num_blocks": 65536, 00:34:28.033 "uuid": "eebf13ea-42cd-4b5f-8248-a511cebfb533", 00:34:28.033 "assigned_rate_limits": { 00:34:28.033 "rw_ios_per_sec": 0, 00:34:28.033 "rw_mbytes_per_sec": 0, 00:34:28.033 "r_mbytes_per_sec": 0, 00:34:28.033 "w_mbytes_per_sec": 0 00:34:28.033 }, 00:34:28.033 "claimed": true, 00:34:28.033 "claim_type": "exclusive_write", 00:34:28.033 "zoned": false, 00:34:28.033 "supported_io_types": { 00:34:28.033 "read": true, 00:34:28.033 "write": true, 00:34:28.033 "unmap": true, 00:34:28.033 "flush": true, 00:34:28.033 "reset": true, 00:34:28.033 "nvme_admin": false, 00:34:28.033 "nvme_io": false, 00:34:28.033 "nvme_io_md": false, 00:34:28.033 "write_zeroes": true, 00:34:28.033 "zcopy": true, 00:34:28.033 "get_zone_info": false, 00:34:28.033 "zone_management": false, 00:34:28.033 "zone_append": false, 00:34:28.033 "compare": false, 00:34:28.033 "compare_and_write": false, 00:34:28.033 "abort": true, 00:34:28.033 "seek_hole": false, 00:34:28.033 "seek_data": false, 00:34:28.033 "copy": true, 00:34:28.033 "nvme_iov_md": false 00:34:28.033 }, 00:34:28.033 "memory_domains": [ 00:34:28.033 { 00:34:28.033 "dma_device_id": "system", 00:34:28.033 "dma_device_type": 1 00:34:28.033 }, 00:34:28.033 { 00:34:28.033 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:34:28.033 "dma_device_type": 2 00:34:28.033 } 00:34:28.033 ], 00:34:28.033 "driver_specific": {} 00:34:28.033 } 00:34:28.033 ] 00:34:28.033 01:00:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:34:28.033 01:00:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:34:28.033 01:00:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:34:28.033 01:00:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:28.033 01:00:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:28.033 01:00:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:28.033 01:00:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:28.033 01:00:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:28.033 01:00:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:28.033 01:00:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:28.033 01:00:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:28.033 01:00:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:28.033 01:00:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:28.033 01:00:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:28.033 01:00:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:28.292 01:00:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:28.292 "name": "Existed_Raid", 00:34:28.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:28.292 "strip_size_kb": 64, 00:34:28.292 "state": "configuring", 00:34:28.293 "raid_level": "raid5f", 00:34:28.293 "superblock": false, 00:34:28.293 "num_base_bdevs": 4, 00:34:28.293 "num_base_bdevs_discovered": 2, 00:34:28.293 "num_base_bdevs_operational": 4, 00:34:28.293 "base_bdevs_list": [ 00:34:28.293 { 00:34:28.293 "name": "BaseBdev1", 00:34:28.293 "uuid": "30eac619-b5a2-4078-a893-07c22131a50a", 00:34:28.293 "is_configured": true, 00:34:28.293 "data_offset": 0, 00:34:28.293 "data_size": 65536 00:34:28.293 }, 00:34:28.293 { 00:34:28.293 "name": "BaseBdev2", 00:34:28.293 "uuid": "eebf13ea-42cd-4b5f-8248-a511cebfb533", 00:34:28.293 "is_configured": true, 00:34:28.293 "data_offset": 0, 00:34:28.293 "data_size": 65536 00:34:28.293 }, 00:34:28.293 { 00:34:28.293 "name": "BaseBdev3", 00:34:28.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:28.293 "is_configured": false, 00:34:28.293 "data_offset": 0, 00:34:28.293 "data_size": 0 00:34:28.293 }, 00:34:28.293 { 00:34:28.293 "name": "BaseBdev4", 00:34:28.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:28.293 "is_configured": false, 00:34:28.293 "data_offset": 0, 00:34:28.293 "data_size": 0 00:34:28.293 } 00:34:28.293 ] 00:34:28.293 }' 00:34:28.293 01:00:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:28.293 01:00:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:28.860 01:00:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:34:29.119 [2024-07-25 01:00:51.578987] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:29.119 BaseBdev3 00:34:29.119 01:00:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:34:29.119 01:00:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:34:29.119 01:00:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:29.119 01:00:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:34:29.119 01:00:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:29.119 01:00:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:29.119 01:00:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:29.378 01:00:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:29.378 [ 00:34:29.378 { 00:34:29.378 "name": "BaseBdev3", 00:34:29.378 "aliases": [ 00:34:29.378 "99a93295-195e-4210-bf36-9088d28a7816" 00:34:29.378 ], 00:34:29.378 "product_name": "Malloc disk", 00:34:29.378 "block_size": 512, 00:34:29.378 "num_blocks": 65536, 00:34:29.378 "uuid": "99a93295-195e-4210-bf36-9088d28a7816", 00:34:29.378 "assigned_rate_limits": { 00:34:29.378 "rw_ios_per_sec": 0, 00:34:29.378 "rw_mbytes_per_sec": 0, 00:34:29.378 "r_mbytes_per_sec": 0, 00:34:29.378 "w_mbytes_per_sec": 0 00:34:29.378 }, 00:34:29.378 "claimed": true, 00:34:29.378 "claim_type": "exclusive_write", 00:34:29.378 "zoned": false, 00:34:29.378 "supported_io_types": { 00:34:29.378 "read": true, 00:34:29.378 "write": true, 00:34:29.378 "unmap": true, 00:34:29.378 "flush": true, 00:34:29.378 "reset": true, 00:34:29.378 "nvme_admin": false, 00:34:29.378 "nvme_io": false, 00:34:29.378 "nvme_io_md": false, 00:34:29.378 "write_zeroes": true, 00:34:29.378 "zcopy": true, 00:34:29.378 "get_zone_info": false, 00:34:29.378 "zone_management": false, 00:34:29.378 "zone_append": false, 00:34:29.378 "compare": false, 00:34:29.378 "compare_and_write": false, 00:34:29.378 "abort": true, 00:34:29.378 "seek_hole": false, 00:34:29.378 "seek_data": false, 00:34:29.378 "copy": true, 00:34:29.378 "nvme_iov_md": false 00:34:29.378 }, 00:34:29.378 "memory_domains": [ 00:34:29.378 { 00:34:29.378 "dma_device_id": "system", 00:34:29.378 "dma_device_type": 1 00:34:29.378 }, 00:34:29.378 { 00:34:29.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:29.378 "dma_device_type": 2 00:34:29.378 } 00:34:29.378 ], 00:34:29.378 "driver_specific": {} 00:34:29.378 } 00:34:29.378 ] 00:34:29.638 01:00:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:34:29.638 01:00:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:34:29.638 01:00:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:34:29.638 01:00:52 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:29.638 01:00:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:29.638 01:00:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:29.638 01:00:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:29.638 01:00:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:29.638 01:00:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:29.638 01:00:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:29.638 01:00:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:29.638 01:00:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:29.638 01:00:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:29.638 01:00:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:29.638 01:00:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:29.638 01:00:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:29.638 "name": "Existed_Raid", 00:34:29.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:29.639 "strip_size_kb": 64, 00:34:29.639 "state": "configuring", 00:34:29.639 "raid_level": "raid5f", 00:34:29.639 "superblock": false, 00:34:29.639 "num_base_bdevs": 4, 00:34:29.639 "num_base_bdevs_discovered": 3, 00:34:29.639 "num_base_bdevs_operational": 4, 00:34:29.639 "base_bdevs_list": [ 00:34:29.639 { 00:34:29.639 "name": "BaseBdev1", 00:34:29.639 "uuid": "30eac619-b5a2-4078-a893-07c22131a50a", 00:34:29.639 "is_configured": true, 00:34:29.639 "data_offset": 0, 00:34:29.639 "data_size": 65536 00:34:29.639 }, 00:34:29.639 { 00:34:29.639 "name": "BaseBdev2", 00:34:29.639 "uuid": "eebf13ea-42cd-4b5f-8248-a511cebfb533", 00:34:29.639 "is_configured": true, 00:34:29.639 "data_offset": 0, 00:34:29.639 "data_size": 65536 00:34:29.639 }, 00:34:29.639 { 00:34:29.639 "name": "BaseBdev3", 00:34:29.639 "uuid": "99a93295-195e-4210-bf36-9088d28a7816", 00:34:29.639 "is_configured": true, 00:34:29.639 "data_offset": 0, 00:34:29.639 "data_size": 65536 00:34:29.639 }, 00:34:29.639 { 00:34:29.639 "name": "BaseBdev4", 00:34:29.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:29.639 "is_configured": false, 00:34:29.639 "data_offset": 0, 00:34:29.639 "data_size": 0 00:34:29.639 } 00:34:29.639 ] 00:34:29.639 }' 00:34:29.639 01:00:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:29.639 01:00:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:30.207 01:00:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:34:30.466 [2024-07-25 01:00:53.115258] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:34:30.466 [2024-07-25 01:00:53.115324] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:34:30.466 [2024-07-25 
01:00:53.115333] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:34:30.466 [2024-07-25 01:00:53.115454] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:34:30.724 [2024-07-25 01:00:53.122776] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:34:30.724 [2024-07-25 01:00:53.122803] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:34:30.724 [2024-07-25 01:00:53.123065] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:30.724 BaseBdev4 00:34:30.724 01:00:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:34:30.724 01:00:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:34:30.724 01:00:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:30.724 01:00:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:34:30.724 01:00:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:30.724 01:00:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:30.724 01:00:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:30.981 01:00:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:34:30.981 [ 00:34:30.981 { 00:34:30.981 "name": "BaseBdev4", 00:34:30.981 "aliases": [ 00:34:30.981 "5c4b3654-8506-45c2-b82f-a44a518a1558" 00:34:30.981 ], 00:34:30.981 "product_name": "Malloc disk", 00:34:30.981 "block_size": 512, 00:34:30.981 "num_blocks": 65536, 00:34:30.981 "uuid": "5c4b3654-8506-45c2-b82f-a44a518a1558", 00:34:30.981 "assigned_rate_limits": { 00:34:30.981 "rw_ios_per_sec": 0, 00:34:30.981 "rw_mbytes_per_sec": 0, 00:34:30.981 "r_mbytes_per_sec": 0, 00:34:30.981 "w_mbytes_per_sec": 0 00:34:30.981 }, 00:34:30.981 "claimed": true, 00:34:30.981 "claim_type": "exclusive_write", 00:34:30.981 "zoned": false, 00:34:30.981 "supported_io_types": { 00:34:30.981 "read": true, 00:34:30.981 "write": true, 00:34:30.981 "unmap": true, 00:34:30.981 "flush": true, 00:34:30.981 "reset": true, 00:34:30.981 "nvme_admin": false, 00:34:30.981 "nvme_io": false, 00:34:30.981 "nvme_io_md": false, 00:34:30.981 "write_zeroes": true, 00:34:30.981 "zcopy": true, 00:34:30.981 "get_zone_info": false, 00:34:30.981 "zone_management": false, 00:34:30.981 "zone_append": false, 00:34:30.981 "compare": false, 00:34:30.981 "compare_and_write": false, 00:34:30.981 "abort": true, 00:34:30.981 "seek_hole": false, 00:34:30.981 "seek_data": false, 00:34:30.981 "copy": true, 00:34:30.981 "nvme_iov_md": false 00:34:30.981 }, 00:34:30.981 "memory_domains": [ 00:34:30.981 { 00:34:30.981 "dma_device_id": "system", 00:34:30.981 "dma_device_type": 1 00:34:30.981 }, 00:34:30.981 { 00:34:30.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:30.981 "dma_device_type": 2 00:34:30.981 } 00:34:30.981 ], 00:34:30.981 "driver_specific": {} 00:34:30.981 } 00:34:30.981 ] 00:34:31.240 01:00:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:34:31.240 01:00:53 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@265 -- # (( i++ )) 00:34:31.240 01:00:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:34:31.240 01:00:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:34:31.240 01:00:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:31.240 01:00:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:31.240 01:00:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:31.240 01:00:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:31.240 01:00:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:31.240 01:00:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:31.240 01:00:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:31.240 01:00:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:31.240 01:00:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:31.240 01:00:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:31.240 01:00:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:31.499 01:00:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:31.499 "name": "Existed_Raid", 00:34:31.499 "uuid": "6fadd7b3-d0dd-4966-ad35-cadb62afbde9", 00:34:31.499 "strip_size_kb": 64, 00:34:31.499 "state": "online", 00:34:31.499 "raid_level": "raid5f", 00:34:31.499 "superblock": false, 00:34:31.499 "num_base_bdevs": 4, 00:34:31.499 "num_base_bdevs_discovered": 4, 00:34:31.499 "num_base_bdevs_operational": 4, 00:34:31.499 "base_bdevs_list": [ 00:34:31.499 { 00:34:31.499 "name": "BaseBdev1", 00:34:31.499 "uuid": "30eac619-b5a2-4078-a893-07c22131a50a", 00:34:31.499 "is_configured": true, 00:34:31.499 "data_offset": 0, 00:34:31.499 "data_size": 65536 00:34:31.499 }, 00:34:31.499 { 00:34:31.499 "name": "BaseBdev2", 00:34:31.499 "uuid": "eebf13ea-42cd-4b5f-8248-a511cebfb533", 00:34:31.499 "is_configured": true, 00:34:31.499 "data_offset": 0, 00:34:31.499 "data_size": 65536 00:34:31.499 }, 00:34:31.499 { 00:34:31.499 "name": "BaseBdev3", 00:34:31.499 "uuid": "99a93295-195e-4210-bf36-9088d28a7816", 00:34:31.499 "is_configured": true, 00:34:31.499 "data_offset": 0, 00:34:31.500 "data_size": 65536 00:34:31.500 }, 00:34:31.500 { 00:34:31.500 "name": "BaseBdev4", 00:34:31.500 "uuid": "5c4b3654-8506-45c2-b82f-a44a518a1558", 00:34:31.500 "is_configured": true, 00:34:31.500 "data_offset": 0, 00:34:31.500 "data_size": 65536 00:34:31.500 } 00:34:31.500 ] 00:34:31.500 }' 00:34:31.500 01:00:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:31.500 01:00:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:31.758 01:00:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:34:31.758 01:00:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:34:31.758 01:00:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:34:31.758 01:00:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:34:31.758 01:00:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:34:31.758 01:00:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:34:31.758 01:00:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:34:31.758 01:00:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:34:32.017 [2024-07-25 01:00:54.557126] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:32.017 01:00:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:34:32.017 "name": "Existed_Raid", 00:34:32.017 "aliases": [ 00:34:32.017 "6fadd7b3-d0dd-4966-ad35-cadb62afbde9" 00:34:32.017 ], 00:34:32.017 "product_name": "Raid Volume", 00:34:32.017 "block_size": 512, 00:34:32.017 "num_blocks": 196608, 00:34:32.017 "uuid": "6fadd7b3-d0dd-4966-ad35-cadb62afbde9", 00:34:32.017 "assigned_rate_limits": { 00:34:32.017 "rw_ios_per_sec": 0, 00:34:32.017 "rw_mbytes_per_sec": 0, 00:34:32.017 "r_mbytes_per_sec": 0, 00:34:32.017 "w_mbytes_per_sec": 0 00:34:32.017 }, 00:34:32.017 "claimed": false, 00:34:32.017 "zoned": false, 00:34:32.017 "supported_io_types": { 00:34:32.017 "read": true, 00:34:32.017 "write": true, 00:34:32.017 "unmap": false, 00:34:32.017 "flush": false, 00:34:32.017 "reset": true, 00:34:32.017 "nvme_admin": false, 00:34:32.017 "nvme_io": false, 00:34:32.017 "nvme_io_md": false, 00:34:32.017 "write_zeroes": true, 00:34:32.017 "zcopy": false, 00:34:32.017 "get_zone_info": false, 00:34:32.017 "zone_management": false, 00:34:32.017 "zone_append": false, 00:34:32.017 "compare": false, 00:34:32.017 "compare_and_write": false, 00:34:32.017 "abort": false, 00:34:32.017 "seek_hole": false, 00:34:32.017 "seek_data": false, 00:34:32.017 "copy": false, 00:34:32.017 "nvme_iov_md": false 00:34:32.017 }, 00:34:32.017 "driver_specific": { 00:34:32.017 "raid": { 00:34:32.017 "uuid": "6fadd7b3-d0dd-4966-ad35-cadb62afbde9", 00:34:32.017 "strip_size_kb": 64, 00:34:32.017 "state": "online", 00:34:32.017 "raid_level": "raid5f", 00:34:32.017 "superblock": false, 00:34:32.017 "num_base_bdevs": 4, 00:34:32.017 "num_base_bdevs_discovered": 4, 00:34:32.017 "num_base_bdevs_operational": 4, 00:34:32.017 "base_bdevs_list": [ 00:34:32.017 { 00:34:32.017 "name": "BaseBdev1", 00:34:32.017 "uuid": "30eac619-b5a2-4078-a893-07c22131a50a", 00:34:32.017 "is_configured": true, 00:34:32.017 "data_offset": 0, 00:34:32.017 "data_size": 65536 00:34:32.017 }, 00:34:32.017 { 00:34:32.017 "name": "BaseBdev2", 00:34:32.017 "uuid": "eebf13ea-42cd-4b5f-8248-a511cebfb533", 00:34:32.017 "is_configured": true, 00:34:32.017 "data_offset": 0, 00:34:32.017 "data_size": 65536 00:34:32.017 }, 00:34:32.017 { 00:34:32.017 "name": "BaseBdev3", 00:34:32.017 "uuid": "99a93295-195e-4210-bf36-9088d28a7816", 00:34:32.017 "is_configured": true, 00:34:32.017 "data_offset": 0, 00:34:32.017 "data_size": 65536 00:34:32.017 }, 00:34:32.017 { 00:34:32.017 "name": "BaseBdev4", 00:34:32.017 "uuid": "5c4b3654-8506-45c2-b82f-a44a518a1558", 00:34:32.017 "is_configured": true, 00:34:32.017 "data_offset": 0, 00:34:32.017 "data_size": 65536 00:34:32.017 } 00:34:32.017 ] 00:34:32.017 } 00:34:32.017 } 00:34:32.017 
}' 00:34:32.017 01:00:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:32.017 01:00:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:34:32.017 BaseBdev2 00:34:32.017 BaseBdev3 00:34:32.017 BaseBdev4' 00:34:32.017 01:00:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:32.017 01:00:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:34:32.017 01:00:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:32.276 01:00:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:32.276 "name": "BaseBdev1", 00:34:32.276 "aliases": [ 00:34:32.276 "30eac619-b5a2-4078-a893-07c22131a50a" 00:34:32.276 ], 00:34:32.276 "product_name": "Malloc disk", 00:34:32.276 "block_size": 512, 00:34:32.276 "num_blocks": 65536, 00:34:32.276 "uuid": "30eac619-b5a2-4078-a893-07c22131a50a", 00:34:32.276 "assigned_rate_limits": { 00:34:32.276 "rw_ios_per_sec": 0, 00:34:32.276 "rw_mbytes_per_sec": 0, 00:34:32.276 "r_mbytes_per_sec": 0, 00:34:32.276 "w_mbytes_per_sec": 0 00:34:32.276 }, 00:34:32.276 "claimed": true, 00:34:32.276 "claim_type": "exclusive_write", 00:34:32.276 "zoned": false, 00:34:32.276 "supported_io_types": { 00:34:32.276 "read": true, 00:34:32.276 "write": true, 00:34:32.276 "unmap": true, 00:34:32.276 "flush": true, 00:34:32.276 "reset": true, 00:34:32.276 "nvme_admin": false, 00:34:32.276 "nvme_io": false, 00:34:32.276 "nvme_io_md": false, 00:34:32.276 "write_zeroes": true, 00:34:32.276 "zcopy": true, 00:34:32.276 "get_zone_info": false, 00:34:32.276 "zone_management": false, 00:34:32.277 "zone_append": false, 00:34:32.277 "compare": false, 00:34:32.277 "compare_and_write": false, 00:34:32.277 "abort": true, 00:34:32.277 "seek_hole": false, 00:34:32.277 "seek_data": false, 00:34:32.277 "copy": true, 00:34:32.277 "nvme_iov_md": false 00:34:32.277 }, 00:34:32.277 "memory_domains": [ 00:34:32.277 { 00:34:32.277 "dma_device_id": "system", 00:34:32.277 "dma_device_type": 1 00:34:32.277 }, 00:34:32.277 { 00:34:32.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:32.277 "dma_device_type": 2 00:34:32.277 } 00:34:32.277 ], 00:34:32.277 "driver_specific": {} 00:34:32.277 }' 00:34:32.277 01:00:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:32.277 01:00:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:32.277 01:00:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:32.277 01:00:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:32.536 01:00:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:32.536 01:00:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:32.536 01:00:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:32.536 01:00:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:32.536 01:00:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:32.536 01:00:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:32.536 
01:00:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:32.795 01:00:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:32.795 01:00:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:32.795 01:00:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:34:32.795 01:00:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:33.054 01:00:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:33.054 "name": "BaseBdev2", 00:34:33.054 "aliases": [ 00:34:33.054 "eebf13ea-42cd-4b5f-8248-a511cebfb533" 00:34:33.054 ], 00:34:33.054 "product_name": "Malloc disk", 00:34:33.054 "block_size": 512, 00:34:33.054 "num_blocks": 65536, 00:34:33.054 "uuid": "eebf13ea-42cd-4b5f-8248-a511cebfb533", 00:34:33.054 "assigned_rate_limits": { 00:34:33.054 "rw_ios_per_sec": 0, 00:34:33.054 "rw_mbytes_per_sec": 0, 00:34:33.055 "r_mbytes_per_sec": 0, 00:34:33.055 "w_mbytes_per_sec": 0 00:34:33.055 }, 00:34:33.055 "claimed": true, 00:34:33.055 "claim_type": "exclusive_write", 00:34:33.055 "zoned": false, 00:34:33.055 "supported_io_types": { 00:34:33.055 "read": true, 00:34:33.055 "write": true, 00:34:33.055 "unmap": true, 00:34:33.055 "flush": true, 00:34:33.055 "reset": true, 00:34:33.055 "nvme_admin": false, 00:34:33.055 "nvme_io": false, 00:34:33.055 "nvme_io_md": false, 00:34:33.055 "write_zeroes": true, 00:34:33.055 "zcopy": true, 00:34:33.055 "get_zone_info": false, 00:34:33.055 "zone_management": false, 00:34:33.055 "zone_append": false, 00:34:33.055 "compare": false, 00:34:33.055 "compare_and_write": false, 00:34:33.055 "abort": true, 00:34:33.055 "seek_hole": false, 00:34:33.055 "seek_data": false, 00:34:33.055 "copy": true, 00:34:33.055 "nvme_iov_md": false 00:34:33.055 }, 00:34:33.055 "memory_domains": [ 00:34:33.055 { 00:34:33.055 "dma_device_id": "system", 00:34:33.055 "dma_device_type": 1 00:34:33.055 }, 00:34:33.055 { 00:34:33.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:33.055 "dma_device_type": 2 00:34:33.055 } 00:34:33.055 ], 00:34:33.055 "driver_specific": {} 00:34:33.055 }' 00:34:33.055 01:00:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:33.055 01:00:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:33.055 01:00:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:33.055 01:00:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:33.055 01:00:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:33.055 01:00:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:33.055 01:00:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:33.055 01:00:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:33.314 01:00:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:33.314 01:00:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:33.314 01:00:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:33.314 01:00:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:33.314 01:00:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:33.314 01:00:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:34:33.314 01:00:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:33.573 01:00:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:33.573 "name": "BaseBdev3", 00:34:33.573 "aliases": [ 00:34:33.573 "99a93295-195e-4210-bf36-9088d28a7816" 00:34:33.573 ], 00:34:33.573 "product_name": "Malloc disk", 00:34:33.573 "block_size": 512, 00:34:33.573 "num_blocks": 65536, 00:34:33.573 "uuid": "99a93295-195e-4210-bf36-9088d28a7816", 00:34:33.573 "assigned_rate_limits": { 00:34:33.573 "rw_ios_per_sec": 0, 00:34:33.573 "rw_mbytes_per_sec": 0, 00:34:33.573 "r_mbytes_per_sec": 0, 00:34:33.573 "w_mbytes_per_sec": 0 00:34:33.573 }, 00:34:33.573 "claimed": true, 00:34:33.573 "claim_type": "exclusive_write", 00:34:33.573 "zoned": false, 00:34:33.573 "supported_io_types": { 00:34:33.573 "read": true, 00:34:33.573 "write": true, 00:34:33.573 "unmap": true, 00:34:33.573 "flush": true, 00:34:33.573 "reset": true, 00:34:33.573 "nvme_admin": false, 00:34:33.573 "nvme_io": false, 00:34:33.573 "nvme_io_md": false, 00:34:33.573 "write_zeroes": true, 00:34:33.573 "zcopy": true, 00:34:33.573 "get_zone_info": false, 00:34:33.573 "zone_management": false, 00:34:33.573 "zone_append": false, 00:34:33.573 "compare": false, 00:34:33.573 "compare_and_write": false, 00:34:33.573 "abort": true, 00:34:33.573 "seek_hole": false, 00:34:33.573 "seek_data": false, 00:34:33.573 "copy": true, 00:34:33.573 "nvme_iov_md": false 00:34:33.573 }, 00:34:33.573 "memory_domains": [ 00:34:33.573 { 00:34:33.573 "dma_device_id": "system", 00:34:33.573 "dma_device_type": 1 00:34:33.573 }, 00:34:33.573 { 00:34:33.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:33.573 "dma_device_type": 2 00:34:33.573 } 00:34:33.573 ], 00:34:33.573 "driver_specific": {} 00:34:33.573 }' 00:34:33.573 01:00:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:33.573 01:00:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:33.573 01:00:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:33.573 01:00:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:33.573 01:00:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:33.573 01:00:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:33.573 01:00:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:33.573 01:00:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:33.832 01:00:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:33.832 01:00:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:33.832 01:00:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:33.832 01:00:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:33.832 01:00:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 
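The block_size/md_size/md_interleave/dif_type checks traced above are repeated for every configured base bdev of Existed_Raid. A minimal sketch of that per-bdev check loop, assuming the same rpc.py path, the /var/tmp/spdk-raid.sock socket, and the Malloc base bdevs seen in this run (the rpc/names/info helper variables are illustrative, not part of the captured script):
# Sketch only: mirrors the verify_raid_bdev_properties pattern visible in this trace.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Collect the names of the configured base bdevs from the raid volume's info.
names=$($rpc bdev_get_bdevs -b Existed_Raid \
        | jq -r '.[0].driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
for name in $names; do
    info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
    # Each base bdev is expected to expose 512-byte blocks and carry no metadata or DIF.
    [[ $(jq .block_size    <<< "$info") == 512  ]]
    [[ $(jq .md_size       <<< "$info") == null ]]
    [[ $(jq .md_interleave <<< "$info") == null ]]
    [[ $(jq .dif_type      <<< "$info") == null ]]
done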
00:34:33.832 01:00:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:34:33.832 01:00:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:34.092 01:00:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:34.092 "name": "BaseBdev4", 00:34:34.092 "aliases": [ 00:34:34.092 "5c4b3654-8506-45c2-b82f-a44a518a1558" 00:34:34.092 ], 00:34:34.092 "product_name": "Malloc disk", 00:34:34.092 "block_size": 512, 00:34:34.092 "num_blocks": 65536, 00:34:34.092 "uuid": "5c4b3654-8506-45c2-b82f-a44a518a1558", 00:34:34.092 "assigned_rate_limits": { 00:34:34.092 "rw_ios_per_sec": 0, 00:34:34.092 "rw_mbytes_per_sec": 0, 00:34:34.092 "r_mbytes_per_sec": 0, 00:34:34.092 "w_mbytes_per_sec": 0 00:34:34.092 }, 00:34:34.092 "claimed": true, 00:34:34.092 "claim_type": "exclusive_write", 00:34:34.092 "zoned": false, 00:34:34.092 "supported_io_types": { 00:34:34.093 "read": true, 00:34:34.093 "write": true, 00:34:34.093 "unmap": true, 00:34:34.093 "flush": true, 00:34:34.093 "reset": true, 00:34:34.093 "nvme_admin": false, 00:34:34.093 "nvme_io": false, 00:34:34.093 "nvme_io_md": false, 00:34:34.093 "write_zeroes": true, 00:34:34.093 "zcopy": true, 00:34:34.093 "get_zone_info": false, 00:34:34.093 "zone_management": false, 00:34:34.093 "zone_append": false, 00:34:34.093 "compare": false, 00:34:34.093 "compare_and_write": false, 00:34:34.093 "abort": true, 00:34:34.093 "seek_hole": false, 00:34:34.093 "seek_data": false, 00:34:34.093 "copy": true, 00:34:34.093 "nvme_iov_md": false 00:34:34.093 }, 00:34:34.093 "memory_domains": [ 00:34:34.093 { 00:34:34.093 "dma_device_id": "system", 00:34:34.093 "dma_device_type": 1 00:34:34.093 }, 00:34:34.093 { 00:34:34.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:34.093 "dma_device_type": 2 00:34:34.093 } 00:34:34.093 ], 00:34:34.093 "driver_specific": {} 00:34:34.093 }' 00:34:34.093 01:00:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:34.093 01:00:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:34.093 01:00:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:34.093 01:00:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:34.352 01:00:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:34.352 01:00:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:34.352 01:00:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:34.352 01:00:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:34.352 01:00:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:34.352 01:00:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:34.352 01:00:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:34.611 01:00:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:34.611 01:00:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:34:34.611 [2024-07-25 01:00:57.261547] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:34:34.870 01:00:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:34:34.870 01:00:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:34:34.870 01:00:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:34:34.870 01:00:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:34:34.870 01:00:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:34:34.870 01:00:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:34:34.870 01:00:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:34.871 01:00:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:34.871 01:00:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:34.871 01:00:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:34.871 01:00:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:34.871 01:00:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:34.871 01:00:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:34.871 01:00:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:34.871 01:00:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:34.871 01:00:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:34.871 01:00:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:35.129 01:00:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:35.129 "name": "Existed_Raid", 00:34:35.129 "uuid": "6fadd7b3-d0dd-4966-ad35-cadb62afbde9", 00:34:35.129 "strip_size_kb": 64, 00:34:35.129 "state": "online", 00:34:35.129 "raid_level": "raid5f", 00:34:35.130 "superblock": false, 00:34:35.130 "num_base_bdevs": 4, 00:34:35.130 "num_base_bdevs_discovered": 3, 00:34:35.130 "num_base_bdevs_operational": 3, 00:34:35.130 "base_bdevs_list": [ 00:34:35.130 { 00:34:35.130 "name": null, 00:34:35.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:35.130 "is_configured": false, 00:34:35.130 "data_offset": 0, 00:34:35.130 "data_size": 65536 00:34:35.130 }, 00:34:35.130 { 00:34:35.130 "name": "BaseBdev2", 00:34:35.130 "uuid": "eebf13ea-42cd-4b5f-8248-a511cebfb533", 00:34:35.130 "is_configured": true, 00:34:35.130 "data_offset": 0, 00:34:35.130 "data_size": 65536 00:34:35.130 }, 00:34:35.130 { 00:34:35.130 "name": "BaseBdev3", 00:34:35.130 "uuid": "99a93295-195e-4210-bf36-9088d28a7816", 00:34:35.130 "is_configured": true, 00:34:35.130 "data_offset": 0, 00:34:35.130 "data_size": 65536 00:34:35.130 }, 00:34:35.130 { 00:34:35.130 "name": "BaseBdev4", 00:34:35.130 "uuid": "5c4b3654-8506-45c2-b82f-a44a518a1558", 00:34:35.130 "is_configured": true, 00:34:35.130 "data_offset": 0, 00:34:35.130 "data_size": 65536 00:34:35.130 } 00:34:35.130 ] 00:34:35.130 }' 00:34:35.130 01:00:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:34:35.130 01:00:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:35.702 01:00:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:34:35.702 01:00:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:34:35.702 01:00:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:35.702 01:00:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:34:35.995 01:00:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:34:35.995 01:00:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:35.995 01:00:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:34:36.264 [2024-07-25 01:00:58.767735] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:36.264 [2024-07-25 01:00:58.767831] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:36.264 [2024-07-25 01:00:58.867874] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:36.264 01:00:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:34:36.264 01:00:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:34:36.264 01:00:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:36.264 01:00:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:34:36.523 01:00:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:34:36.523 01:00:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:36.523 01:00:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:34:36.782 [2024-07-25 01:00:59.291989] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:34:36.782 01:00:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:34:36.782 01:00:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:34:36.782 01:00:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:36.782 01:00:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:34:37.041 01:00:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:34:37.041 01:00:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:37.041 01:00:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:34:37.300 [2024-07-25 01:00:59.893474] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:34:37.300 [2024-07-25 01:00:59.893537] bdev_raid.c: 
378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:34:37.559 01:01:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:34:37.559 01:01:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:34:37.559 01:01:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:37.559 01:01:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:34:37.559 01:01:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:34:37.559 01:01:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:34:37.559 01:01:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:34:37.559 01:01:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:34:37.559 01:01:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:34:37.559 01:01:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:34:37.818 BaseBdev2 00:34:38.077 01:01:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:34:38.077 01:01:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:34:38.077 01:01:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:38.077 01:01:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:34:38.077 01:01:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:38.077 01:01:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:38.077 01:01:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:38.077 01:01:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:38.336 [ 00:34:38.336 { 00:34:38.336 "name": "BaseBdev2", 00:34:38.336 "aliases": [ 00:34:38.336 "e473ae30-0f6f-4935-bd0d-fd7352f3f758" 00:34:38.336 ], 00:34:38.336 "product_name": "Malloc disk", 00:34:38.336 "block_size": 512, 00:34:38.336 "num_blocks": 65536, 00:34:38.336 "uuid": "e473ae30-0f6f-4935-bd0d-fd7352f3f758", 00:34:38.336 "assigned_rate_limits": { 00:34:38.336 "rw_ios_per_sec": 0, 00:34:38.336 "rw_mbytes_per_sec": 0, 00:34:38.336 "r_mbytes_per_sec": 0, 00:34:38.336 "w_mbytes_per_sec": 0 00:34:38.336 }, 00:34:38.336 "claimed": false, 00:34:38.336 "zoned": false, 00:34:38.336 "supported_io_types": { 00:34:38.336 "read": true, 00:34:38.336 "write": true, 00:34:38.336 "unmap": true, 00:34:38.336 "flush": true, 00:34:38.336 "reset": true, 00:34:38.336 "nvme_admin": false, 00:34:38.336 "nvme_io": false, 00:34:38.336 "nvme_io_md": false, 00:34:38.336 "write_zeroes": true, 00:34:38.336 "zcopy": true, 00:34:38.336 "get_zone_info": false, 00:34:38.336 "zone_management": false, 00:34:38.336 "zone_append": false, 00:34:38.336 "compare": false, 00:34:38.336 "compare_and_write": false, 
00:34:38.336 "abort": true, 00:34:38.337 "seek_hole": false, 00:34:38.337 "seek_data": false, 00:34:38.337 "copy": true, 00:34:38.337 "nvme_iov_md": false 00:34:38.337 }, 00:34:38.337 "memory_domains": [ 00:34:38.337 { 00:34:38.337 "dma_device_id": "system", 00:34:38.337 "dma_device_type": 1 00:34:38.337 }, 00:34:38.337 { 00:34:38.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:38.337 "dma_device_type": 2 00:34:38.337 } 00:34:38.337 ], 00:34:38.337 "driver_specific": {} 00:34:38.337 } 00:34:38.337 ] 00:34:38.337 01:01:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:34:38.337 01:01:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:34:38.337 01:01:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:34:38.337 01:01:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:34:38.595 BaseBdev3 00:34:38.595 01:01:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:34:38.595 01:01:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:34:38.595 01:01:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:38.595 01:01:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:34:38.595 01:01:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:38.595 01:01:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:38.595 01:01:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:38.854 01:01:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:38.854 [ 00:34:38.854 { 00:34:38.854 "name": "BaseBdev3", 00:34:38.854 "aliases": [ 00:34:38.854 "517c907e-735e-42a9-b5b3-36670cb1fe09" 00:34:38.854 ], 00:34:38.854 "product_name": "Malloc disk", 00:34:38.854 "block_size": 512, 00:34:38.854 "num_blocks": 65536, 00:34:38.854 "uuid": "517c907e-735e-42a9-b5b3-36670cb1fe09", 00:34:38.854 "assigned_rate_limits": { 00:34:38.854 "rw_ios_per_sec": 0, 00:34:38.854 "rw_mbytes_per_sec": 0, 00:34:38.854 "r_mbytes_per_sec": 0, 00:34:38.854 "w_mbytes_per_sec": 0 00:34:38.854 }, 00:34:38.854 "claimed": false, 00:34:38.854 "zoned": false, 00:34:38.854 "supported_io_types": { 00:34:38.854 "read": true, 00:34:38.854 "write": true, 00:34:38.854 "unmap": true, 00:34:38.854 "flush": true, 00:34:38.854 "reset": true, 00:34:38.854 "nvme_admin": false, 00:34:38.854 "nvme_io": false, 00:34:38.854 "nvme_io_md": false, 00:34:38.854 "write_zeroes": true, 00:34:38.854 "zcopy": true, 00:34:38.854 "get_zone_info": false, 00:34:38.854 "zone_management": false, 00:34:38.854 "zone_append": false, 00:34:38.854 "compare": false, 00:34:38.854 "compare_and_write": false, 00:34:38.854 "abort": true, 00:34:38.854 "seek_hole": false, 00:34:38.854 "seek_data": false, 00:34:38.854 "copy": true, 00:34:38.854 "nvme_iov_md": false 00:34:38.854 }, 00:34:38.854 "memory_domains": [ 00:34:38.854 { 00:34:38.854 "dma_device_id": "system", 00:34:38.854 "dma_device_type": 1 00:34:38.854 }, 00:34:38.854 { 
00:34:38.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:38.854 "dma_device_type": 2 00:34:38.854 } 00:34:38.854 ], 00:34:38.854 "driver_specific": {} 00:34:38.854 } 00:34:38.854 ] 00:34:38.854 01:01:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:34:38.854 01:01:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:34:38.854 01:01:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:34:38.854 01:01:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:34:39.112 BaseBdev4 00:34:39.112 01:01:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:34:39.112 01:01:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:34:39.112 01:01:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:39.112 01:01:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:34:39.112 01:01:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:39.112 01:01:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:39.112 01:01:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:39.371 01:01:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:34:39.371 [ 00:34:39.371 { 00:34:39.371 "name": "BaseBdev4", 00:34:39.372 "aliases": [ 00:34:39.372 "5fe90dae-4675-4ec5-8586-5af39f2209dc" 00:34:39.372 ], 00:34:39.372 "product_name": "Malloc disk", 00:34:39.372 "block_size": 512, 00:34:39.372 "num_blocks": 65536, 00:34:39.372 "uuid": "5fe90dae-4675-4ec5-8586-5af39f2209dc", 00:34:39.372 "assigned_rate_limits": { 00:34:39.372 "rw_ios_per_sec": 0, 00:34:39.372 "rw_mbytes_per_sec": 0, 00:34:39.372 "r_mbytes_per_sec": 0, 00:34:39.372 "w_mbytes_per_sec": 0 00:34:39.372 }, 00:34:39.372 "claimed": false, 00:34:39.372 "zoned": false, 00:34:39.372 "supported_io_types": { 00:34:39.372 "read": true, 00:34:39.372 "write": true, 00:34:39.372 "unmap": true, 00:34:39.372 "flush": true, 00:34:39.372 "reset": true, 00:34:39.372 "nvme_admin": false, 00:34:39.372 "nvme_io": false, 00:34:39.372 "nvme_io_md": false, 00:34:39.372 "write_zeroes": true, 00:34:39.372 "zcopy": true, 00:34:39.372 "get_zone_info": false, 00:34:39.372 "zone_management": false, 00:34:39.372 "zone_append": false, 00:34:39.372 "compare": false, 00:34:39.372 "compare_and_write": false, 00:34:39.372 "abort": true, 00:34:39.372 "seek_hole": false, 00:34:39.372 "seek_data": false, 00:34:39.372 "copy": true, 00:34:39.372 "nvme_iov_md": false 00:34:39.372 }, 00:34:39.372 "memory_domains": [ 00:34:39.372 { 00:34:39.372 "dma_device_id": "system", 00:34:39.372 "dma_device_type": 1 00:34:39.372 }, 00:34:39.372 { 00:34:39.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:39.372 "dma_device_type": 2 00:34:39.372 } 00:34:39.372 ], 00:34:39.372 "driver_specific": {} 00:34:39.372 } 00:34:39.372 ] 00:34:39.372 01:01:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:34:39.372 01:01:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:34:39.372 01:01:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:34:39.372 01:01:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:34:39.631 [2024-07-25 01:01:02.185325] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:39.631 [2024-07-25 01:01:02.185828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:39.631 [2024-07-25 01:01:02.185868] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:39.631 [2024-07-25 01:01:02.187767] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:39.631 [2024-07-25 01:01:02.187824] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:34:39.631 01:01:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:39.631 01:01:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:39.631 01:01:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:39.631 01:01:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:39.631 01:01:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:39.631 01:01:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:39.631 01:01:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:39.631 01:01:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:39.631 01:01:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:39.631 01:01:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:39.631 01:01:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:39.631 01:01:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:39.890 01:01:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:39.890 "name": "Existed_Raid", 00:34:39.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:39.890 "strip_size_kb": 64, 00:34:39.890 "state": "configuring", 00:34:39.890 "raid_level": "raid5f", 00:34:39.890 "superblock": false, 00:34:39.890 "num_base_bdevs": 4, 00:34:39.890 "num_base_bdevs_discovered": 3, 00:34:39.890 "num_base_bdevs_operational": 4, 00:34:39.890 "base_bdevs_list": [ 00:34:39.890 { 00:34:39.890 "name": "BaseBdev1", 00:34:39.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:39.890 "is_configured": false, 00:34:39.890 "data_offset": 0, 00:34:39.890 "data_size": 0 00:34:39.890 }, 00:34:39.890 { 00:34:39.890 "name": "BaseBdev2", 00:34:39.890 "uuid": "e473ae30-0f6f-4935-bd0d-fd7352f3f758", 00:34:39.890 "is_configured": true, 00:34:39.890 "data_offset": 0, 00:34:39.890 "data_size": 65536 00:34:39.890 }, 00:34:39.890 { 
00:34:39.890 "name": "BaseBdev3", 00:34:39.890 "uuid": "517c907e-735e-42a9-b5b3-36670cb1fe09", 00:34:39.890 "is_configured": true, 00:34:39.890 "data_offset": 0, 00:34:39.890 "data_size": 65536 00:34:39.890 }, 00:34:39.890 { 00:34:39.890 "name": "BaseBdev4", 00:34:39.890 "uuid": "5fe90dae-4675-4ec5-8586-5af39f2209dc", 00:34:39.890 "is_configured": true, 00:34:39.890 "data_offset": 0, 00:34:39.890 "data_size": 65536 00:34:39.890 } 00:34:39.890 ] 00:34:39.890 }' 00:34:39.890 01:01:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:39.890 01:01:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:40.458 01:01:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:34:40.718 [2024-07-25 01:01:03.233479] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:40.718 01:01:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:40.718 01:01:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:40.718 01:01:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:40.718 01:01:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:40.718 01:01:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:40.718 01:01:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:40.718 01:01:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:40.718 01:01:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:40.718 01:01:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:40.718 01:01:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:40.718 01:01:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:40.718 01:01:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:40.976 01:01:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:40.976 "name": "Existed_Raid", 00:34:40.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:40.976 "strip_size_kb": 64, 00:34:40.976 "state": "configuring", 00:34:40.976 "raid_level": "raid5f", 00:34:40.976 "superblock": false, 00:34:40.976 "num_base_bdevs": 4, 00:34:40.976 "num_base_bdevs_discovered": 2, 00:34:40.976 "num_base_bdevs_operational": 4, 00:34:40.976 "base_bdevs_list": [ 00:34:40.976 { 00:34:40.976 "name": "BaseBdev1", 00:34:40.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:40.976 "is_configured": false, 00:34:40.976 "data_offset": 0, 00:34:40.976 "data_size": 0 00:34:40.976 }, 00:34:40.976 { 00:34:40.976 "name": null, 00:34:40.976 "uuid": "e473ae30-0f6f-4935-bd0d-fd7352f3f758", 00:34:40.976 "is_configured": false, 00:34:40.976 "data_offset": 0, 00:34:40.976 "data_size": 65536 00:34:40.976 }, 00:34:40.976 { 00:34:40.976 "name": "BaseBdev3", 00:34:40.976 "uuid": "517c907e-735e-42a9-b5b3-36670cb1fe09", 00:34:40.976 
"is_configured": true, 00:34:40.976 "data_offset": 0, 00:34:40.976 "data_size": 65536 00:34:40.976 }, 00:34:40.976 { 00:34:40.976 "name": "BaseBdev4", 00:34:40.976 "uuid": "5fe90dae-4675-4ec5-8586-5af39f2209dc", 00:34:40.976 "is_configured": true, 00:34:40.976 "data_offset": 0, 00:34:40.976 "data_size": 65536 00:34:40.976 } 00:34:40.976 ] 00:34:40.976 }' 00:34:40.976 01:01:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:40.976 01:01:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:41.542 01:01:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:41.542 01:01:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:34:41.801 01:01:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:34:41.801 01:01:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:34:42.060 [2024-07-25 01:01:04.484019] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:42.060 BaseBdev1 00:34:42.060 01:01:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:34:42.060 01:01:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:34:42.060 01:01:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:42.060 01:01:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:34:42.060 01:01:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:42.060 01:01:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:42.060 01:01:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:42.061 01:01:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:42.319 [ 00:34:42.319 { 00:34:42.319 "name": "BaseBdev1", 00:34:42.319 "aliases": [ 00:34:42.319 "ed1859cf-c586-4195-b2fa-04d44670b878" 00:34:42.319 ], 00:34:42.319 "product_name": "Malloc disk", 00:34:42.319 "block_size": 512, 00:34:42.319 "num_blocks": 65536, 00:34:42.319 "uuid": "ed1859cf-c586-4195-b2fa-04d44670b878", 00:34:42.319 "assigned_rate_limits": { 00:34:42.319 "rw_ios_per_sec": 0, 00:34:42.319 "rw_mbytes_per_sec": 0, 00:34:42.319 "r_mbytes_per_sec": 0, 00:34:42.319 "w_mbytes_per_sec": 0 00:34:42.319 }, 00:34:42.319 "claimed": true, 00:34:42.319 "claim_type": "exclusive_write", 00:34:42.319 "zoned": false, 00:34:42.319 "supported_io_types": { 00:34:42.319 "read": true, 00:34:42.319 "write": true, 00:34:42.319 "unmap": true, 00:34:42.319 "flush": true, 00:34:42.319 "reset": true, 00:34:42.319 "nvme_admin": false, 00:34:42.320 "nvme_io": false, 00:34:42.320 "nvme_io_md": false, 00:34:42.320 "write_zeroes": true, 00:34:42.320 "zcopy": true, 00:34:42.320 "get_zone_info": false, 00:34:42.320 "zone_management": false, 00:34:42.320 "zone_append": false, 00:34:42.320 "compare": false, 00:34:42.320 
"compare_and_write": false, 00:34:42.320 "abort": true, 00:34:42.320 "seek_hole": false, 00:34:42.320 "seek_data": false, 00:34:42.320 "copy": true, 00:34:42.320 "nvme_iov_md": false 00:34:42.320 }, 00:34:42.320 "memory_domains": [ 00:34:42.320 { 00:34:42.320 "dma_device_id": "system", 00:34:42.320 "dma_device_type": 1 00:34:42.320 }, 00:34:42.320 { 00:34:42.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:42.320 "dma_device_type": 2 00:34:42.320 } 00:34:42.320 ], 00:34:42.320 "driver_specific": {} 00:34:42.320 } 00:34:42.320 ] 00:34:42.320 01:01:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:34:42.320 01:01:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:42.320 01:01:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:42.320 01:01:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:42.320 01:01:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:42.320 01:01:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:42.320 01:01:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:42.320 01:01:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:42.320 01:01:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:42.320 01:01:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:42.320 01:01:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:42.320 01:01:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:42.320 01:01:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:42.579 01:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:42.579 "name": "Existed_Raid", 00:34:42.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:42.579 "strip_size_kb": 64, 00:34:42.579 "state": "configuring", 00:34:42.579 "raid_level": "raid5f", 00:34:42.579 "superblock": false, 00:34:42.579 "num_base_bdevs": 4, 00:34:42.579 "num_base_bdevs_discovered": 3, 00:34:42.579 "num_base_bdevs_operational": 4, 00:34:42.579 "base_bdevs_list": [ 00:34:42.579 { 00:34:42.579 "name": "BaseBdev1", 00:34:42.579 "uuid": "ed1859cf-c586-4195-b2fa-04d44670b878", 00:34:42.579 "is_configured": true, 00:34:42.579 "data_offset": 0, 00:34:42.579 "data_size": 65536 00:34:42.579 }, 00:34:42.579 { 00:34:42.579 "name": null, 00:34:42.579 "uuid": "e473ae30-0f6f-4935-bd0d-fd7352f3f758", 00:34:42.579 "is_configured": false, 00:34:42.579 "data_offset": 0, 00:34:42.579 "data_size": 65536 00:34:42.579 }, 00:34:42.579 { 00:34:42.579 "name": "BaseBdev3", 00:34:42.579 "uuid": "517c907e-735e-42a9-b5b3-36670cb1fe09", 00:34:42.579 "is_configured": true, 00:34:42.579 "data_offset": 0, 00:34:42.579 "data_size": 65536 00:34:42.579 }, 00:34:42.579 { 00:34:42.579 "name": "BaseBdev4", 00:34:42.579 "uuid": "5fe90dae-4675-4ec5-8586-5af39f2209dc", 00:34:42.579 "is_configured": true, 00:34:42.579 "data_offset": 0, 00:34:42.579 "data_size": 65536 00:34:42.579 } 00:34:42.579 
] 00:34:42.579 }' 00:34:42.579 01:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:42.579 01:01:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:43.147 01:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:43.147 01:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:34:43.424 01:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:34:43.424 01:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:34:43.424 [2024-07-25 01:01:06.000304] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:34:43.424 01:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:43.424 01:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:43.424 01:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:43.424 01:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:43.424 01:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:43.424 01:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:43.424 01:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:43.424 01:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:43.424 01:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:43.424 01:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:43.424 01:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:43.424 01:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:43.684 01:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:43.684 "name": "Existed_Raid", 00:34:43.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:43.684 "strip_size_kb": 64, 00:34:43.684 "state": "configuring", 00:34:43.684 "raid_level": "raid5f", 00:34:43.684 "superblock": false, 00:34:43.684 "num_base_bdevs": 4, 00:34:43.684 "num_base_bdevs_discovered": 2, 00:34:43.684 "num_base_bdevs_operational": 4, 00:34:43.684 "base_bdevs_list": [ 00:34:43.684 { 00:34:43.684 "name": "BaseBdev1", 00:34:43.684 "uuid": "ed1859cf-c586-4195-b2fa-04d44670b878", 00:34:43.684 "is_configured": true, 00:34:43.684 "data_offset": 0, 00:34:43.684 "data_size": 65536 00:34:43.684 }, 00:34:43.684 { 00:34:43.684 "name": null, 00:34:43.684 "uuid": "e473ae30-0f6f-4935-bd0d-fd7352f3f758", 00:34:43.684 "is_configured": false, 00:34:43.684 "data_offset": 0, 00:34:43.684 "data_size": 65536 00:34:43.684 }, 00:34:43.684 { 00:34:43.684 "name": null, 00:34:43.684 "uuid": "517c907e-735e-42a9-b5b3-36670cb1fe09", 00:34:43.684 "is_configured": false, 
00:34:43.684 "data_offset": 0, 00:34:43.684 "data_size": 65536 00:34:43.684 }, 00:34:43.684 { 00:34:43.684 "name": "BaseBdev4", 00:34:43.684 "uuid": "5fe90dae-4675-4ec5-8586-5af39f2209dc", 00:34:43.684 "is_configured": true, 00:34:43.684 "data_offset": 0, 00:34:43.684 "data_size": 65536 00:34:43.684 } 00:34:43.684 ] 00:34:43.684 }' 00:34:43.684 01:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:43.684 01:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:44.251 01:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:44.251 01:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:34:44.509 01:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:34:44.509 01:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:34:44.767 [2024-07-25 01:01:07.284560] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:44.767 01:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:44.767 01:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:44.767 01:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:44.767 01:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:44.767 01:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:44.767 01:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:44.767 01:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:44.767 01:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:44.767 01:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:44.767 01:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:44.767 01:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:44.767 01:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:45.026 01:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:45.026 "name": "Existed_Raid", 00:34:45.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:45.026 "strip_size_kb": 64, 00:34:45.026 "state": "configuring", 00:34:45.026 "raid_level": "raid5f", 00:34:45.026 "superblock": false, 00:34:45.026 "num_base_bdevs": 4, 00:34:45.026 "num_base_bdevs_discovered": 3, 00:34:45.026 "num_base_bdevs_operational": 4, 00:34:45.026 "base_bdevs_list": [ 00:34:45.026 { 00:34:45.026 "name": "BaseBdev1", 00:34:45.026 "uuid": "ed1859cf-c586-4195-b2fa-04d44670b878", 00:34:45.026 "is_configured": true, 00:34:45.026 "data_offset": 0, 00:34:45.026 "data_size": 65536 00:34:45.026 }, 00:34:45.026 { 
00:34:45.026 "name": null, 00:34:45.026 "uuid": "e473ae30-0f6f-4935-bd0d-fd7352f3f758", 00:34:45.026 "is_configured": false, 00:34:45.026 "data_offset": 0, 00:34:45.026 "data_size": 65536 00:34:45.026 }, 00:34:45.026 { 00:34:45.026 "name": "BaseBdev3", 00:34:45.026 "uuid": "517c907e-735e-42a9-b5b3-36670cb1fe09", 00:34:45.026 "is_configured": true, 00:34:45.026 "data_offset": 0, 00:34:45.026 "data_size": 65536 00:34:45.026 }, 00:34:45.026 { 00:34:45.026 "name": "BaseBdev4", 00:34:45.026 "uuid": "5fe90dae-4675-4ec5-8586-5af39f2209dc", 00:34:45.026 "is_configured": true, 00:34:45.026 "data_offset": 0, 00:34:45.026 "data_size": 65536 00:34:45.026 } 00:34:45.026 ] 00:34:45.026 }' 00:34:45.026 01:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:45.026 01:01:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:45.594 01:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:45.595 01:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:34:45.853 01:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:34:45.853 01:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:34:45.853 [2024-07-25 01:01:08.476813] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:46.112 01:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:46.112 01:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:46.112 01:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:46.112 01:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:46.112 01:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:46.112 01:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:46.112 01:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:46.112 01:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:46.112 01:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:46.112 01:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:46.112 01:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:46.112 01:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:46.371 01:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:46.371 "name": "Existed_Raid", 00:34:46.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:46.371 "strip_size_kb": 64, 00:34:46.371 "state": "configuring", 00:34:46.371 "raid_level": "raid5f", 00:34:46.371 "superblock": false, 00:34:46.371 "num_base_bdevs": 4, 00:34:46.371 "num_base_bdevs_discovered": 2, 00:34:46.371 
"num_base_bdevs_operational": 4, 00:34:46.371 "base_bdevs_list": [ 00:34:46.371 { 00:34:46.371 "name": null, 00:34:46.371 "uuid": "ed1859cf-c586-4195-b2fa-04d44670b878", 00:34:46.371 "is_configured": false, 00:34:46.371 "data_offset": 0, 00:34:46.371 "data_size": 65536 00:34:46.371 }, 00:34:46.371 { 00:34:46.371 "name": null, 00:34:46.371 "uuid": "e473ae30-0f6f-4935-bd0d-fd7352f3f758", 00:34:46.371 "is_configured": false, 00:34:46.371 "data_offset": 0, 00:34:46.371 "data_size": 65536 00:34:46.371 }, 00:34:46.371 { 00:34:46.371 "name": "BaseBdev3", 00:34:46.371 "uuid": "517c907e-735e-42a9-b5b3-36670cb1fe09", 00:34:46.371 "is_configured": true, 00:34:46.371 "data_offset": 0, 00:34:46.371 "data_size": 65536 00:34:46.371 }, 00:34:46.371 { 00:34:46.371 "name": "BaseBdev4", 00:34:46.371 "uuid": "5fe90dae-4675-4ec5-8586-5af39f2209dc", 00:34:46.371 "is_configured": true, 00:34:46.371 "data_offset": 0, 00:34:46.371 "data_size": 65536 00:34:46.371 } 00:34:46.371 ] 00:34:46.371 }' 00:34:46.371 01:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:46.371 01:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:46.938 01:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:34:46.938 01:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:47.196 01:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:34:47.196 01:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:34:47.455 [2024-07-25 01:01:09.887349] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:47.455 01:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:47.455 01:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:47.455 01:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:47.455 01:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:47.455 01:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:47.455 01:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:47.455 01:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:47.455 01:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:47.456 01:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:47.456 01:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:47.456 01:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:47.456 01:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:47.456 01:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:34:47.456 "name": "Existed_Raid", 00:34:47.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:47.456 "strip_size_kb": 64, 00:34:47.456 "state": "configuring", 00:34:47.456 "raid_level": "raid5f", 00:34:47.456 "superblock": false, 00:34:47.456 "num_base_bdevs": 4, 00:34:47.456 "num_base_bdevs_discovered": 3, 00:34:47.456 "num_base_bdevs_operational": 4, 00:34:47.456 "base_bdevs_list": [ 00:34:47.456 { 00:34:47.456 "name": null, 00:34:47.456 "uuid": "ed1859cf-c586-4195-b2fa-04d44670b878", 00:34:47.456 "is_configured": false, 00:34:47.456 "data_offset": 0, 00:34:47.456 "data_size": 65536 00:34:47.456 }, 00:34:47.456 { 00:34:47.456 "name": "BaseBdev2", 00:34:47.456 "uuid": "e473ae30-0f6f-4935-bd0d-fd7352f3f758", 00:34:47.456 "is_configured": true, 00:34:47.456 "data_offset": 0, 00:34:47.456 "data_size": 65536 00:34:47.456 }, 00:34:47.456 { 00:34:47.456 "name": "BaseBdev3", 00:34:47.456 "uuid": "517c907e-735e-42a9-b5b3-36670cb1fe09", 00:34:47.456 "is_configured": true, 00:34:47.456 "data_offset": 0, 00:34:47.456 "data_size": 65536 00:34:47.456 }, 00:34:47.456 { 00:34:47.456 "name": "BaseBdev4", 00:34:47.456 "uuid": "5fe90dae-4675-4ec5-8586-5af39f2209dc", 00:34:47.456 "is_configured": true, 00:34:47.456 "data_offset": 0, 00:34:47.456 "data_size": 65536 00:34:47.456 } 00:34:47.456 ] 00:34:47.456 }' 00:34:47.456 01:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:47.456 01:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:48.024 01:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:48.024 01:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:34:48.283 01:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:34:48.283 01:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:34:48.283 01:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:48.542 01:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u ed1859cf-c586-4195-b2fa-04d44670b878 00:34:48.800 [2024-07-25 01:01:11.406072] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:34:48.801 [2024-07-25 01:01:11.406123] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:34:48.801 [2024-07-25 01:01:11.406139] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:34:48.801 [2024-07-25 01:01:11.406258] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:34:48.801 [2024-07-25 01:01:11.413151] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:34:48.801 [2024-07-25 01:01:11.413175] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009380 00:34:48.801 [2024-07-25 01:01:11.413398] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:48.801 NewBaseBdev 00:34:48.801 01:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@334 -- # 
waitforbdev NewBaseBdev 00:34:48.801 01:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:34:48.801 01:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:48.801 01:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:34:48.801 01:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:48.801 01:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:48.801 01:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:49.060 01:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:34:49.319 [ 00:34:49.319 { 00:34:49.319 "name": "NewBaseBdev", 00:34:49.319 "aliases": [ 00:34:49.319 "ed1859cf-c586-4195-b2fa-04d44670b878" 00:34:49.319 ], 00:34:49.319 "product_name": "Malloc disk", 00:34:49.319 "block_size": 512, 00:34:49.319 "num_blocks": 65536, 00:34:49.319 "uuid": "ed1859cf-c586-4195-b2fa-04d44670b878", 00:34:49.319 "assigned_rate_limits": { 00:34:49.319 "rw_ios_per_sec": 0, 00:34:49.319 "rw_mbytes_per_sec": 0, 00:34:49.319 "r_mbytes_per_sec": 0, 00:34:49.319 "w_mbytes_per_sec": 0 00:34:49.319 }, 00:34:49.319 "claimed": true, 00:34:49.319 "claim_type": "exclusive_write", 00:34:49.319 "zoned": false, 00:34:49.319 "supported_io_types": { 00:34:49.319 "read": true, 00:34:49.319 "write": true, 00:34:49.319 "unmap": true, 00:34:49.319 "flush": true, 00:34:49.319 "reset": true, 00:34:49.319 "nvme_admin": false, 00:34:49.319 "nvme_io": false, 00:34:49.319 "nvme_io_md": false, 00:34:49.319 "write_zeroes": true, 00:34:49.319 "zcopy": true, 00:34:49.319 "get_zone_info": false, 00:34:49.319 "zone_management": false, 00:34:49.319 "zone_append": false, 00:34:49.319 "compare": false, 00:34:49.319 "compare_and_write": false, 00:34:49.319 "abort": true, 00:34:49.319 "seek_hole": false, 00:34:49.319 "seek_data": false, 00:34:49.319 "copy": true, 00:34:49.319 "nvme_iov_md": false 00:34:49.319 }, 00:34:49.319 "memory_domains": [ 00:34:49.319 { 00:34:49.319 "dma_device_id": "system", 00:34:49.319 "dma_device_type": 1 00:34:49.319 }, 00:34:49.319 { 00:34:49.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:49.319 "dma_device_type": 2 00:34:49.319 } 00:34:49.319 ], 00:34:49.319 "driver_specific": {} 00:34:49.319 } 00:34:49.319 ] 00:34:49.319 01:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:34:49.319 01:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:34:49.319 01:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:49.319 01:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:49.319 01:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:49.319 01:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:49.320 01:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:49.320 01:01:11 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:49.320 01:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:49.320 01:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:49.320 01:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:49.320 01:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:49.320 01:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:49.579 01:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:49.579 "name": "Existed_Raid", 00:34:49.579 "uuid": "c8196c19-ac47-43e7-94e0-e7c77b7e1d9a", 00:34:49.579 "strip_size_kb": 64, 00:34:49.579 "state": "online", 00:34:49.579 "raid_level": "raid5f", 00:34:49.579 "superblock": false, 00:34:49.579 "num_base_bdevs": 4, 00:34:49.579 "num_base_bdevs_discovered": 4, 00:34:49.579 "num_base_bdevs_operational": 4, 00:34:49.579 "base_bdevs_list": [ 00:34:49.579 { 00:34:49.579 "name": "NewBaseBdev", 00:34:49.579 "uuid": "ed1859cf-c586-4195-b2fa-04d44670b878", 00:34:49.579 "is_configured": true, 00:34:49.579 "data_offset": 0, 00:34:49.579 "data_size": 65536 00:34:49.579 }, 00:34:49.579 { 00:34:49.579 "name": "BaseBdev2", 00:34:49.579 "uuid": "e473ae30-0f6f-4935-bd0d-fd7352f3f758", 00:34:49.579 "is_configured": true, 00:34:49.579 "data_offset": 0, 00:34:49.579 "data_size": 65536 00:34:49.579 }, 00:34:49.579 { 00:34:49.579 "name": "BaseBdev3", 00:34:49.579 "uuid": "517c907e-735e-42a9-b5b3-36670cb1fe09", 00:34:49.579 "is_configured": true, 00:34:49.579 "data_offset": 0, 00:34:49.579 "data_size": 65536 00:34:49.579 }, 00:34:49.579 { 00:34:49.579 "name": "BaseBdev4", 00:34:49.579 "uuid": "5fe90dae-4675-4ec5-8586-5af39f2209dc", 00:34:49.579 "is_configured": true, 00:34:49.579 "data_offset": 0, 00:34:49.579 "data_size": 65536 00:34:49.579 } 00:34:49.579 ] 00:34:49.579 }' 00:34:49.579 01:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:49.579 01:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.148 01:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:34:50.148 01:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:34:50.148 01:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:34:50.148 01:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:34:50.148 01:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:34:50.148 01:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:34:50.148 01:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:34:50.148 01:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:34:50.407 [2024-07-25 01:01:12.871157] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:50.408 01:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:34:50.408 
"name": "Existed_Raid", 00:34:50.408 "aliases": [ 00:34:50.408 "c8196c19-ac47-43e7-94e0-e7c77b7e1d9a" 00:34:50.408 ], 00:34:50.408 "product_name": "Raid Volume", 00:34:50.408 "block_size": 512, 00:34:50.408 "num_blocks": 196608, 00:34:50.408 "uuid": "c8196c19-ac47-43e7-94e0-e7c77b7e1d9a", 00:34:50.408 "assigned_rate_limits": { 00:34:50.408 "rw_ios_per_sec": 0, 00:34:50.408 "rw_mbytes_per_sec": 0, 00:34:50.408 "r_mbytes_per_sec": 0, 00:34:50.408 "w_mbytes_per_sec": 0 00:34:50.408 }, 00:34:50.408 "claimed": false, 00:34:50.408 "zoned": false, 00:34:50.408 "supported_io_types": { 00:34:50.408 "read": true, 00:34:50.408 "write": true, 00:34:50.408 "unmap": false, 00:34:50.408 "flush": false, 00:34:50.408 "reset": true, 00:34:50.408 "nvme_admin": false, 00:34:50.408 "nvme_io": false, 00:34:50.408 "nvme_io_md": false, 00:34:50.408 "write_zeroes": true, 00:34:50.408 "zcopy": false, 00:34:50.408 "get_zone_info": false, 00:34:50.408 "zone_management": false, 00:34:50.408 "zone_append": false, 00:34:50.408 "compare": false, 00:34:50.408 "compare_and_write": false, 00:34:50.408 "abort": false, 00:34:50.408 "seek_hole": false, 00:34:50.408 "seek_data": false, 00:34:50.408 "copy": false, 00:34:50.408 "nvme_iov_md": false 00:34:50.408 }, 00:34:50.408 "driver_specific": { 00:34:50.408 "raid": { 00:34:50.408 "uuid": "c8196c19-ac47-43e7-94e0-e7c77b7e1d9a", 00:34:50.408 "strip_size_kb": 64, 00:34:50.408 "state": "online", 00:34:50.408 "raid_level": "raid5f", 00:34:50.408 "superblock": false, 00:34:50.408 "num_base_bdevs": 4, 00:34:50.408 "num_base_bdevs_discovered": 4, 00:34:50.408 "num_base_bdevs_operational": 4, 00:34:50.408 "base_bdevs_list": [ 00:34:50.408 { 00:34:50.408 "name": "NewBaseBdev", 00:34:50.408 "uuid": "ed1859cf-c586-4195-b2fa-04d44670b878", 00:34:50.408 "is_configured": true, 00:34:50.408 "data_offset": 0, 00:34:50.408 "data_size": 65536 00:34:50.408 }, 00:34:50.408 { 00:34:50.408 "name": "BaseBdev2", 00:34:50.408 "uuid": "e473ae30-0f6f-4935-bd0d-fd7352f3f758", 00:34:50.408 "is_configured": true, 00:34:50.408 "data_offset": 0, 00:34:50.408 "data_size": 65536 00:34:50.408 }, 00:34:50.408 { 00:34:50.408 "name": "BaseBdev3", 00:34:50.408 "uuid": "517c907e-735e-42a9-b5b3-36670cb1fe09", 00:34:50.408 "is_configured": true, 00:34:50.408 "data_offset": 0, 00:34:50.408 "data_size": 65536 00:34:50.408 }, 00:34:50.408 { 00:34:50.408 "name": "BaseBdev4", 00:34:50.408 "uuid": "5fe90dae-4675-4ec5-8586-5af39f2209dc", 00:34:50.408 "is_configured": true, 00:34:50.408 "data_offset": 0, 00:34:50.408 "data_size": 65536 00:34:50.408 } 00:34:50.408 ] 00:34:50.408 } 00:34:50.408 } 00:34:50.408 }' 00:34:50.408 01:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:50.408 01:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:34:50.408 BaseBdev2 00:34:50.408 BaseBdev3 00:34:50.408 BaseBdev4' 00:34:50.408 01:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:50.408 01:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:50.408 01:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:34:50.667 01:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:50.667 "name": "NewBaseBdev", 00:34:50.667 
"aliases": [ 00:34:50.667 "ed1859cf-c586-4195-b2fa-04d44670b878" 00:34:50.667 ], 00:34:50.667 "product_name": "Malloc disk", 00:34:50.667 "block_size": 512, 00:34:50.667 "num_blocks": 65536, 00:34:50.667 "uuid": "ed1859cf-c586-4195-b2fa-04d44670b878", 00:34:50.667 "assigned_rate_limits": { 00:34:50.667 "rw_ios_per_sec": 0, 00:34:50.667 "rw_mbytes_per_sec": 0, 00:34:50.667 "r_mbytes_per_sec": 0, 00:34:50.667 "w_mbytes_per_sec": 0 00:34:50.667 }, 00:34:50.667 "claimed": true, 00:34:50.667 "claim_type": "exclusive_write", 00:34:50.667 "zoned": false, 00:34:50.667 "supported_io_types": { 00:34:50.667 "read": true, 00:34:50.667 "write": true, 00:34:50.667 "unmap": true, 00:34:50.667 "flush": true, 00:34:50.667 "reset": true, 00:34:50.667 "nvme_admin": false, 00:34:50.667 "nvme_io": false, 00:34:50.667 "nvme_io_md": false, 00:34:50.667 "write_zeroes": true, 00:34:50.667 "zcopy": true, 00:34:50.667 "get_zone_info": false, 00:34:50.667 "zone_management": false, 00:34:50.667 "zone_append": false, 00:34:50.667 "compare": false, 00:34:50.667 "compare_and_write": false, 00:34:50.667 "abort": true, 00:34:50.667 "seek_hole": false, 00:34:50.667 "seek_data": false, 00:34:50.667 "copy": true, 00:34:50.667 "nvme_iov_md": false 00:34:50.667 }, 00:34:50.667 "memory_domains": [ 00:34:50.667 { 00:34:50.667 "dma_device_id": "system", 00:34:50.667 "dma_device_type": 1 00:34:50.667 }, 00:34:50.667 { 00:34:50.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:50.667 "dma_device_type": 2 00:34:50.667 } 00:34:50.667 ], 00:34:50.667 "driver_specific": {} 00:34:50.667 }' 00:34:50.667 01:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:50.668 01:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:50.668 01:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:50.668 01:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:50.926 01:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:50.926 01:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:50.926 01:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:50.926 01:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:50.926 01:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:50.926 01:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:50.926 01:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:50.926 01:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:50.926 01:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:50.926 01:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:34:50.926 01:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:51.185 01:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:51.185 "name": "BaseBdev2", 00:34:51.185 "aliases": [ 00:34:51.185 "e473ae30-0f6f-4935-bd0d-fd7352f3f758" 00:34:51.185 ], 00:34:51.185 "product_name": "Malloc disk", 00:34:51.185 "block_size": 512, 
00:34:51.185 "num_blocks": 65536, 00:34:51.185 "uuid": "e473ae30-0f6f-4935-bd0d-fd7352f3f758", 00:34:51.185 "assigned_rate_limits": { 00:34:51.185 "rw_ios_per_sec": 0, 00:34:51.185 "rw_mbytes_per_sec": 0, 00:34:51.185 "r_mbytes_per_sec": 0, 00:34:51.185 "w_mbytes_per_sec": 0 00:34:51.185 }, 00:34:51.185 "claimed": true, 00:34:51.185 "claim_type": "exclusive_write", 00:34:51.185 "zoned": false, 00:34:51.185 "supported_io_types": { 00:34:51.185 "read": true, 00:34:51.185 "write": true, 00:34:51.185 "unmap": true, 00:34:51.185 "flush": true, 00:34:51.185 "reset": true, 00:34:51.185 "nvme_admin": false, 00:34:51.185 "nvme_io": false, 00:34:51.185 "nvme_io_md": false, 00:34:51.185 "write_zeroes": true, 00:34:51.185 "zcopy": true, 00:34:51.185 "get_zone_info": false, 00:34:51.185 "zone_management": false, 00:34:51.185 "zone_append": false, 00:34:51.185 "compare": false, 00:34:51.185 "compare_and_write": false, 00:34:51.185 "abort": true, 00:34:51.185 "seek_hole": false, 00:34:51.185 "seek_data": false, 00:34:51.185 "copy": true, 00:34:51.185 "nvme_iov_md": false 00:34:51.185 }, 00:34:51.185 "memory_domains": [ 00:34:51.185 { 00:34:51.185 "dma_device_id": "system", 00:34:51.185 "dma_device_type": 1 00:34:51.185 }, 00:34:51.185 { 00:34:51.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:51.185 "dma_device_type": 2 00:34:51.185 } 00:34:51.185 ], 00:34:51.185 "driver_specific": {} 00:34:51.185 }' 00:34:51.185 01:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:51.444 01:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:51.444 01:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:51.444 01:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:51.444 01:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:51.444 01:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:51.444 01:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:51.444 01:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:51.704 01:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:51.704 01:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:51.704 01:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:51.704 01:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:51.704 01:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:51.704 01:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:34:51.704 01:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:51.975 01:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:51.975 "name": "BaseBdev3", 00:34:51.975 "aliases": [ 00:34:51.975 "517c907e-735e-42a9-b5b3-36670cb1fe09" 00:34:51.975 ], 00:34:51.975 "product_name": "Malloc disk", 00:34:51.975 "block_size": 512, 00:34:51.975 "num_blocks": 65536, 00:34:51.975 "uuid": "517c907e-735e-42a9-b5b3-36670cb1fe09", 00:34:51.975 "assigned_rate_limits": { 00:34:51.975 "rw_ios_per_sec": 0, 
00:34:51.975 "rw_mbytes_per_sec": 0, 00:34:51.975 "r_mbytes_per_sec": 0, 00:34:51.975 "w_mbytes_per_sec": 0 00:34:51.975 }, 00:34:51.975 "claimed": true, 00:34:51.975 "claim_type": "exclusive_write", 00:34:51.975 "zoned": false, 00:34:51.975 "supported_io_types": { 00:34:51.975 "read": true, 00:34:51.975 "write": true, 00:34:51.975 "unmap": true, 00:34:51.975 "flush": true, 00:34:51.975 "reset": true, 00:34:51.975 "nvme_admin": false, 00:34:51.975 "nvme_io": false, 00:34:51.975 "nvme_io_md": false, 00:34:51.975 "write_zeroes": true, 00:34:51.975 "zcopy": true, 00:34:51.975 "get_zone_info": false, 00:34:51.975 "zone_management": false, 00:34:51.975 "zone_append": false, 00:34:51.975 "compare": false, 00:34:51.975 "compare_and_write": false, 00:34:51.975 "abort": true, 00:34:51.975 "seek_hole": false, 00:34:51.975 "seek_data": false, 00:34:51.975 "copy": true, 00:34:51.975 "nvme_iov_md": false 00:34:51.975 }, 00:34:51.975 "memory_domains": [ 00:34:51.975 { 00:34:51.975 "dma_device_id": "system", 00:34:51.975 "dma_device_type": 1 00:34:51.975 }, 00:34:51.975 { 00:34:51.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:51.975 "dma_device_type": 2 00:34:51.975 } 00:34:51.975 ], 00:34:51.975 "driver_specific": {} 00:34:51.975 }' 00:34:51.975 01:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:51.975 01:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:51.975 01:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:51.975 01:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:51.975 01:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:52.297 01:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:52.297 01:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:52.297 01:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:52.297 01:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:52.297 01:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:52.297 01:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:52.297 01:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:52.297 01:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:52.297 01:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:34:52.297 01:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:52.557 01:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:52.557 "name": "BaseBdev4", 00:34:52.557 "aliases": [ 00:34:52.557 "5fe90dae-4675-4ec5-8586-5af39f2209dc" 00:34:52.557 ], 00:34:52.557 "product_name": "Malloc disk", 00:34:52.557 "block_size": 512, 00:34:52.557 "num_blocks": 65536, 00:34:52.557 "uuid": "5fe90dae-4675-4ec5-8586-5af39f2209dc", 00:34:52.557 "assigned_rate_limits": { 00:34:52.557 "rw_ios_per_sec": 0, 00:34:52.557 "rw_mbytes_per_sec": 0, 00:34:52.557 "r_mbytes_per_sec": 0, 00:34:52.557 "w_mbytes_per_sec": 0 00:34:52.557 }, 00:34:52.557 "claimed": true, 00:34:52.557 
"claim_type": "exclusive_write", 00:34:52.557 "zoned": false, 00:34:52.557 "supported_io_types": { 00:34:52.557 "read": true, 00:34:52.557 "write": true, 00:34:52.557 "unmap": true, 00:34:52.557 "flush": true, 00:34:52.557 "reset": true, 00:34:52.557 "nvme_admin": false, 00:34:52.557 "nvme_io": false, 00:34:52.557 "nvme_io_md": false, 00:34:52.557 "write_zeroes": true, 00:34:52.557 "zcopy": true, 00:34:52.557 "get_zone_info": false, 00:34:52.557 "zone_management": false, 00:34:52.557 "zone_append": false, 00:34:52.557 "compare": false, 00:34:52.557 "compare_and_write": false, 00:34:52.557 "abort": true, 00:34:52.557 "seek_hole": false, 00:34:52.557 "seek_data": false, 00:34:52.557 "copy": true, 00:34:52.557 "nvme_iov_md": false 00:34:52.557 }, 00:34:52.557 "memory_domains": [ 00:34:52.557 { 00:34:52.557 "dma_device_id": "system", 00:34:52.557 "dma_device_type": 1 00:34:52.557 }, 00:34:52.557 { 00:34:52.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:52.557 "dma_device_type": 2 00:34:52.557 } 00:34:52.557 ], 00:34:52.557 "driver_specific": {} 00:34:52.557 }' 00:34:52.557 01:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:52.557 01:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:52.558 01:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:52.558 01:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:52.558 01:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:52.816 01:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:52.816 01:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:52.816 01:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:52.816 01:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:52.816 01:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:52.816 01:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:52.816 01:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:52.816 01:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:53.076 [2024-07-25 01:01:15.684817] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:53.076 [2024-07-25 01:01:15.684957] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:53.076 [2024-07-25 01:01:15.685159] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:53.076 [2024-07-25 01:01:15.685434] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:53.076 [2024-07-25 01:01:15.685522] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name Existed_Raid, state offline 00:34:53.076 01:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 154521 00:34:53.076 01:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 154521 ']' 00:34:53.076 01:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # kill -0 154521 00:34:53.076 
01:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@953 -- # uname 00:34:53.076 01:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:53.076 01:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 154521 00:34:53.335 killing process with pid 154521 00:34:53.335 01:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:53.335 01:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:53.335 01:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 154521' 00:34:53.335 01:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@967 -- # kill 154521 00:34:53.335 01:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # wait 154521 00:34:53.335 [2024-07-25 01:01:15.733059] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:53.594 [2024-07-25 01:01:16.136935] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:54.975 ************************************ 00:34:54.975 END TEST raid5f_state_function_test 00:34:54.975 ************************************ 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:34:54.975 00:34:54.975 real 0m32.477s 00:34:54.975 user 0m58.766s 00:34:54.975 sys 0m4.721s 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:54.975 01:01:17 bdev_raid -- bdev/bdev_raid.sh@887 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:34:54.975 01:01:17 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:34:54.975 01:01:17 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:54.975 01:01:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:54.975 ************************************ 00:34:54.975 START TEST raid5f_state_function_test_sb 00:34:54.975 ************************************ 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid5f 4 true 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:34:54.975 
01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=155592 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:34:54.975 Process raid pid: 155592 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 155592' 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 155592 /var/tmp/spdk-raid.sock 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 155592 ']' 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:54.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
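`waitforlisten 155592 /var/tmp/spdk-raid.sock` blocks until the freshly started bdev_svc app answers on that UNIX socket. A rough equivalent is sketched below; the real waitforlisten() helper in autotest_common.sh is more involved, and polling `rpc_get_methods` here is purely an illustrative choice.

```bash
# Rough sketch of waiting for the RPC socket to come up (assumption: not the
# actual helper). Bails out early if the app process already died.
wait_for_rpc_sock() {
	local pid=$1 sock=$2 i
	for ((i = 0; i < 100; i++)); do
		kill -0 "$pid" 2>/dev/null || return 1
		if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
			return 0
		fi
		sleep 0.1
	done
	return 1
}

wait_for_rpc_sock 155592 /var/tmp/spdk-raid.sock
```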
00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:54.975 01:01:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:54.975 [2024-07-25 01:01:17.602600] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:34:54.975 [2024-07-25 01:01:17.603345] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:55.235 [2024-07-25 01:01:17.768661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:55.494 [2024-07-25 01:01:18.036801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:55.754 [2024-07-25 01:01:18.241853] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:56.013 01:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:56.013 01:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:34:56.013 01:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:34:56.273 [2024-07-25 01:01:18.826397] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:56.273 [2024-07-25 01:01:18.826635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:56.273 [2024-07-25 01:01:18.826746] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:56.273 [2024-07-25 01:01:18.826818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:56.273 [2024-07-25 01:01:18.827020] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:56.273 [2024-07-25 01:01:18.827076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:56.273 [2024-07-25 01:01:18.827107] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:34:56.273 [2024-07-25 01:01:18.827222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:34:56.273 01:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:56.273 01:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:56.273 01:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:56.273 01:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:56.273 01:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:56.273 01:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:56.273 01:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:56.273 01:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:56.273 01:01:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:56.273 01:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:56.273 01:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:56.273 01:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:56.532 01:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:56.532 "name": "Existed_Raid", 00:34:56.532 "uuid": "38365071-dbd5-43e5-bd01-3c8bd9ca2272", 00:34:56.532 "strip_size_kb": 64, 00:34:56.532 "state": "configuring", 00:34:56.532 "raid_level": "raid5f", 00:34:56.532 "superblock": true, 00:34:56.532 "num_base_bdevs": 4, 00:34:56.532 "num_base_bdevs_discovered": 0, 00:34:56.532 "num_base_bdevs_operational": 4, 00:34:56.532 "base_bdevs_list": [ 00:34:56.532 { 00:34:56.532 "name": "BaseBdev1", 00:34:56.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:56.532 "is_configured": false, 00:34:56.532 "data_offset": 0, 00:34:56.532 "data_size": 0 00:34:56.532 }, 00:34:56.532 { 00:34:56.532 "name": "BaseBdev2", 00:34:56.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:56.532 "is_configured": false, 00:34:56.532 "data_offset": 0, 00:34:56.532 "data_size": 0 00:34:56.532 }, 00:34:56.532 { 00:34:56.532 "name": "BaseBdev3", 00:34:56.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:56.532 "is_configured": false, 00:34:56.532 "data_offset": 0, 00:34:56.532 "data_size": 0 00:34:56.532 }, 00:34:56.532 { 00:34:56.532 "name": "BaseBdev4", 00:34:56.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:56.532 "is_configured": false, 00:34:56.532 "data_offset": 0, 00:34:56.532 "data_size": 0 00:34:56.532 } 00:34:56.532 ] 00:34:56.532 }' 00:34:56.532 01:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:56.532 01:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:57.100 01:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:57.359 [2024-07-25 01:01:19.874526] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:57.359 [2024-07-25 01:01:19.874725] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:34:57.359 01:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:34:57.617 [2024-07-25 01:01:20.058597] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:57.617 [2024-07-25 01:01:20.058776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:57.618 [2024-07-25 01:01:20.058858] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:57.618 [2024-07-25 01:01:20.058972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:57.618 [2024-07-25 01:01:20.059045] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:57.618 [2024-07-25 01:01:20.059106] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:57.618 [2024-07-25 01:01:20.059133] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:34:57.618 [2024-07-25 01:01:20.059173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:34:57.618 01:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:34:57.618 [2024-07-25 01:01:20.260832] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:57.618 BaseBdev1 00:34:57.876 01:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:34:57.876 01:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:34:57.876 01:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:57.876 01:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:34:57.876 01:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:57.876 01:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:57.876 01:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:57.876 01:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:58.135 [ 00:34:58.135 { 00:34:58.135 "name": "BaseBdev1", 00:34:58.135 "aliases": [ 00:34:58.136 "c7eebb7c-6ffe-4e3f-9c22-e868259926ae" 00:34:58.136 ], 00:34:58.136 "product_name": "Malloc disk", 00:34:58.136 "block_size": 512, 00:34:58.136 "num_blocks": 65536, 00:34:58.136 "uuid": "c7eebb7c-6ffe-4e3f-9c22-e868259926ae", 00:34:58.136 "assigned_rate_limits": { 00:34:58.136 "rw_ios_per_sec": 0, 00:34:58.136 "rw_mbytes_per_sec": 0, 00:34:58.136 "r_mbytes_per_sec": 0, 00:34:58.136 "w_mbytes_per_sec": 0 00:34:58.136 }, 00:34:58.136 "claimed": true, 00:34:58.136 "claim_type": "exclusive_write", 00:34:58.136 "zoned": false, 00:34:58.136 "supported_io_types": { 00:34:58.136 "read": true, 00:34:58.136 "write": true, 00:34:58.136 "unmap": true, 00:34:58.136 "flush": true, 00:34:58.136 "reset": true, 00:34:58.136 "nvme_admin": false, 00:34:58.136 "nvme_io": false, 00:34:58.136 "nvme_io_md": false, 00:34:58.136 "write_zeroes": true, 00:34:58.136 "zcopy": true, 00:34:58.136 "get_zone_info": false, 00:34:58.136 "zone_management": false, 00:34:58.136 "zone_append": false, 00:34:58.136 "compare": false, 00:34:58.136 "compare_and_write": false, 00:34:58.136 "abort": true, 00:34:58.136 "seek_hole": false, 00:34:58.136 "seek_data": false, 00:34:58.136 "copy": true, 00:34:58.136 "nvme_iov_md": false 00:34:58.136 }, 00:34:58.136 "memory_domains": [ 00:34:58.136 { 00:34:58.136 "dma_device_id": "system", 00:34:58.136 "dma_device_type": 1 00:34:58.136 }, 00:34:58.136 { 00:34:58.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:58.136 "dma_device_type": 2 00:34:58.136 } 00:34:58.136 ], 00:34:58.136 "driver_specific": {} 00:34:58.136 } 00:34:58.136 ] 00:34:58.136 01:01:20 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@905 -- # return 0 00:34:58.136 01:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:58.136 01:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:58.136 01:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:58.136 01:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:58.136 01:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:58.136 01:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:58.136 01:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:58.136 01:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:58.136 01:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:58.136 01:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:58.136 01:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:58.136 01:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:58.395 01:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:58.395 "name": "Existed_Raid", 00:34:58.395 "uuid": "ae1af7e1-7673-412b-92f4-b3ac0339c2f5", 00:34:58.395 "strip_size_kb": 64, 00:34:58.395 "state": "configuring", 00:34:58.395 "raid_level": "raid5f", 00:34:58.395 "superblock": true, 00:34:58.395 "num_base_bdevs": 4, 00:34:58.395 "num_base_bdevs_discovered": 1, 00:34:58.395 "num_base_bdevs_operational": 4, 00:34:58.395 "base_bdevs_list": [ 00:34:58.395 { 00:34:58.395 "name": "BaseBdev1", 00:34:58.395 "uuid": "c7eebb7c-6ffe-4e3f-9c22-e868259926ae", 00:34:58.395 "is_configured": true, 00:34:58.395 "data_offset": 2048, 00:34:58.395 "data_size": 63488 00:34:58.395 }, 00:34:58.395 { 00:34:58.395 "name": "BaseBdev2", 00:34:58.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:58.395 "is_configured": false, 00:34:58.395 "data_offset": 0, 00:34:58.395 "data_size": 0 00:34:58.395 }, 00:34:58.395 { 00:34:58.395 "name": "BaseBdev3", 00:34:58.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:58.395 "is_configured": false, 00:34:58.395 "data_offset": 0, 00:34:58.395 "data_size": 0 00:34:58.395 }, 00:34:58.395 { 00:34:58.395 "name": "BaseBdev4", 00:34:58.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:58.395 "is_configured": false, 00:34:58.395 "data_offset": 0, 00:34:58.395 "data_size": 0 00:34:58.395 } 00:34:58.395 ] 00:34:58.395 }' 00:34:58.395 01:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:58.395 01:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:58.963 01:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:58.963 [2024-07-25 01:01:21.609073] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:58.963 
[2024-07-25 01:01:21.609299] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:34:59.222 01:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:34:59.222 [2024-07-25 01:01:21.789175] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:59.222 [2024-07-25 01:01:21.791219] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:59.222 [2024-07-25 01:01:21.791388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:59.222 [2024-07-25 01:01:21.791472] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:59.222 [2024-07-25 01:01:21.791529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:59.222 [2024-07-25 01:01:21.791557] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:34:59.222 [2024-07-25 01:01:21.791597] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:34:59.222 01:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:34:59.222 01:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:34:59.222 01:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:59.222 01:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:59.222 01:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:59.222 01:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:59.222 01:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:59.222 01:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:59.222 01:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:59.222 01:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:59.222 01:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:59.222 01:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:59.222 01:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:59.222 01:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:59.481 01:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:59.481 "name": "Existed_Raid", 00:34:59.481 "uuid": "66fa9443-6dfd-44ec-9b8b-8cb4bd95673e", 00:34:59.481 "strip_size_kb": 64, 00:34:59.481 "state": "configuring", 00:34:59.481 "raid_level": "raid5f", 00:34:59.481 "superblock": true, 00:34:59.481 "num_base_bdevs": 4, 00:34:59.481 "num_base_bdevs_discovered": 1, 00:34:59.481 "num_base_bdevs_operational": 4, 
00:34:59.481 "base_bdevs_list": [ 00:34:59.481 { 00:34:59.481 "name": "BaseBdev1", 00:34:59.481 "uuid": "c7eebb7c-6ffe-4e3f-9c22-e868259926ae", 00:34:59.481 "is_configured": true, 00:34:59.481 "data_offset": 2048, 00:34:59.481 "data_size": 63488 00:34:59.481 }, 00:34:59.481 { 00:34:59.481 "name": "BaseBdev2", 00:34:59.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:59.481 "is_configured": false, 00:34:59.481 "data_offset": 0, 00:34:59.481 "data_size": 0 00:34:59.481 }, 00:34:59.481 { 00:34:59.481 "name": "BaseBdev3", 00:34:59.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:59.481 "is_configured": false, 00:34:59.481 "data_offset": 0, 00:34:59.481 "data_size": 0 00:34:59.481 }, 00:34:59.481 { 00:34:59.481 "name": "BaseBdev4", 00:34:59.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:59.481 "is_configured": false, 00:34:59.481 "data_offset": 0, 00:34:59.481 "data_size": 0 00:34:59.481 } 00:34:59.481 ] 00:34:59.481 }' 00:34:59.481 01:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:59.481 01:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:00.050 01:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:35:00.309 [2024-07-25 01:01:22.885482] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:00.309 BaseBdev2 00:35:00.309 01:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:35:00.309 01:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:35:00.309 01:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:35:00.309 01:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:35:00.309 01:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:35:00.309 01:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:35:00.309 01:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:00.567 01:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:00.826 [ 00:35:00.826 { 00:35:00.826 "name": "BaseBdev2", 00:35:00.826 "aliases": [ 00:35:00.826 "d3b28c98-302d-42d8-93f2-0c9e51efa85c" 00:35:00.826 ], 00:35:00.826 "product_name": "Malloc disk", 00:35:00.826 "block_size": 512, 00:35:00.826 "num_blocks": 65536, 00:35:00.826 "uuid": "d3b28c98-302d-42d8-93f2-0c9e51efa85c", 00:35:00.826 "assigned_rate_limits": { 00:35:00.826 "rw_ios_per_sec": 0, 00:35:00.826 "rw_mbytes_per_sec": 0, 00:35:00.826 "r_mbytes_per_sec": 0, 00:35:00.826 "w_mbytes_per_sec": 0 00:35:00.826 }, 00:35:00.826 "claimed": true, 00:35:00.826 "claim_type": "exclusive_write", 00:35:00.826 "zoned": false, 00:35:00.826 "supported_io_types": { 00:35:00.826 "read": true, 00:35:00.826 "write": true, 00:35:00.826 "unmap": true, 00:35:00.826 "flush": true, 00:35:00.826 "reset": true, 00:35:00.826 "nvme_admin": false, 00:35:00.826 "nvme_io": false, 00:35:00.826 "nvme_io_md": false, 00:35:00.826 "write_zeroes": 
true, 00:35:00.826 "zcopy": true, 00:35:00.826 "get_zone_info": false, 00:35:00.826 "zone_management": false, 00:35:00.826 "zone_append": false, 00:35:00.826 "compare": false, 00:35:00.826 "compare_and_write": false, 00:35:00.826 "abort": true, 00:35:00.826 "seek_hole": false, 00:35:00.826 "seek_data": false, 00:35:00.826 "copy": true, 00:35:00.826 "nvme_iov_md": false 00:35:00.826 }, 00:35:00.826 "memory_domains": [ 00:35:00.826 { 00:35:00.826 "dma_device_id": "system", 00:35:00.826 "dma_device_type": 1 00:35:00.826 }, 00:35:00.826 { 00:35:00.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:00.826 "dma_device_type": 2 00:35:00.826 } 00:35:00.826 ], 00:35:00.826 "driver_specific": {} 00:35:00.826 } 00:35:00.826 ] 00:35:00.826 01:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:35:00.826 01:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:35:00.826 01:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:35:00.826 01:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:00.826 01:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:00.826 01:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:00.826 01:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:00.826 01:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:00.826 01:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:00.826 01:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:00.826 01:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:00.826 01:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:00.826 01:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:00.826 01:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:00.826 01:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:00.826 01:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:00.826 "name": "Existed_Raid", 00:35:00.826 "uuid": "66fa9443-6dfd-44ec-9b8b-8cb4bd95673e", 00:35:00.826 "strip_size_kb": 64, 00:35:00.826 "state": "configuring", 00:35:00.826 "raid_level": "raid5f", 00:35:00.826 "superblock": true, 00:35:00.826 "num_base_bdevs": 4, 00:35:00.826 "num_base_bdevs_discovered": 2, 00:35:00.826 "num_base_bdevs_operational": 4, 00:35:00.826 "base_bdevs_list": [ 00:35:00.826 { 00:35:00.826 "name": "BaseBdev1", 00:35:00.826 "uuid": "c7eebb7c-6ffe-4e3f-9c22-e868259926ae", 00:35:00.826 "is_configured": true, 00:35:00.826 "data_offset": 2048, 00:35:00.826 "data_size": 63488 00:35:00.826 }, 00:35:00.826 { 00:35:00.826 "name": "BaseBdev2", 00:35:00.826 "uuid": "d3b28c98-302d-42d8-93f2-0c9e51efa85c", 00:35:00.826 "is_configured": true, 00:35:00.826 "data_offset": 2048, 00:35:00.826 "data_size": 63488 00:35:00.826 }, 
00:35:00.826 { 00:35:00.826 "name": "BaseBdev3", 00:35:00.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:00.827 "is_configured": false, 00:35:00.827 "data_offset": 0, 00:35:00.827 "data_size": 0 00:35:00.827 }, 00:35:00.827 { 00:35:00.827 "name": "BaseBdev4", 00:35:00.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:00.827 "is_configured": false, 00:35:00.827 "data_offset": 0, 00:35:00.827 "data_size": 0 00:35:00.827 } 00:35:00.827 ] 00:35:00.827 }' 00:35:00.827 01:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:00.827 01:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:01.393 01:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:35:01.652 [2024-07-25 01:01:24.297066] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:01.652 BaseBdev3 00:35:01.911 01:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:35:01.911 01:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:35:01.911 01:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:35:01.911 01:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:35:01.911 01:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:35:01.911 01:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:35:01.911 01:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:01.911 01:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:35:02.170 [ 00:35:02.170 { 00:35:02.170 "name": "BaseBdev3", 00:35:02.170 "aliases": [ 00:35:02.170 "984bbdf1-ede6-4194-8cf7-613d98181762" 00:35:02.170 ], 00:35:02.170 "product_name": "Malloc disk", 00:35:02.170 "block_size": 512, 00:35:02.170 "num_blocks": 65536, 00:35:02.170 "uuid": "984bbdf1-ede6-4194-8cf7-613d98181762", 00:35:02.170 "assigned_rate_limits": { 00:35:02.170 "rw_ios_per_sec": 0, 00:35:02.170 "rw_mbytes_per_sec": 0, 00:35:02.170 "r_mbytes_per_sec": 0, 00:35:02.170 "w_mbytes_per_sec": 0 00:35:02.170 }, 00:35:02.170 "claimed": true, 00:35:02.170 "claim_type": "exclusive_write", 00:35:02.170 "zoned": false, 00:35:02.170 "supported_io_types": { 00:35:02.170 "read": true, 00:35:02.170 "write": true, 00:35:02.170 "unmap": true, 00:35:02.170 "flush": true, 00:35:02.170 "reset": true, 00:35:02.170 "nvme_admin": false, 00:35:02.170 "nvme_io": false, 00:35:02.170 "nvme_io_md": false, 00:35:02.170 "write_zeroes": true, 00:35:02.170 "zcopy": true, 00:35:02.170 "get_zone_info": false, 00:35:02.170 "zone_management": false, 00:35:02.170 "zone_append": false, 00:35:02.170 "compare": false, 00:35:02.170 "compare_and_write": false, 00:35:02.170 "abort": true, 00:35:02.170 "seek_hole": false, 00:35:02.170 "seek_data": false, 00:35:02.170 "copy": true, 00:35:02.170 "nvme_iov_md": false 00:35:02.170 }, 00:35:02.170 "memory_domains": [ 00:35:02.170 { 00:35:02.170 "dma_device_id": "system", 
00:35:02.170 "dma_device_type": 1 00:35:02.170 }, 00:35:02.170 { 00:35:02.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:02.170 "dma_device_type": 2 00:35:02.170 } 00:35:02.170 ], 00:35:02.170 "driver_specific": {} 00:35:02.170 } 00:35:02.170 ] 00:35:02.170 01:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:35:02.170 01:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:35:02.170 01:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:35:02.170 01:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:02.170 01:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:02.170 01:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:02.170 01:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:02.170 01:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:02.170 01:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:02.170 01:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:02.170 01:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:02.170 01:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:02.170 01:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:02.170 01:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:02.170 01:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:02.428 01:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:02.428 "name": "Existed_Raid", 00:35:02.428 "uuid": "66fa9443-6dfd-44ec-9b8b-8cb4bd95673e", 00:35:02.428 "strip_size_kb": 64, 00:35:02.428 "state": "configuring", 00:35:02.428 "raid_level": "raid5f", 00:35:02.428 "superblock": true, 00:35:02.428 "num_base_bdevs": 4, 00:35:02.428 "num_base_bdevs_discovered": 3, 00:35:02.428 "num_base_bdevs_operational": 4, 00:35:02.428 "base_bdevs_list": [ 00:35:02.428 { 00:35:02.428 "name": "BaseBdev1", 00:35:02.428 "uuid": "c7eebb7c-6ffe-4e3f-9c22-e868259926ae", 00:35:02.428 "is_configured": true, 00:35:02.428 "data_offset": 2048, 00:35:02.428 "data_size": 63488 00:35:02.428 }, 00:35:02.428 { 00:35:02.428 "name": "BaseBdev2", 00:35:02.428 "uuid": "d3b28c98-302d-42d8-93f2-0c9e51efa85c", 00:35:02.428 "is_configured": true, 00:35:02.428 "data_offset": 2048, 00:35:02.428 "data_size": 63488 00:35:02.428 }, 00:35:02.428 { 00:35:02.428 "name": "BaseBdev3", 00:35:02.428 "uuid": "984bbdf1-ede6-4194-8cf7-613d98181762", 00:35:02.428 "is_configured": true, 00:35:02.428 "data_offset": 2048, 00:35:02.428 "data_size": 63488 00:35:02.428 }, 00:35:02.428 { 00:35:02.428 "name": "BaseBdev4", 00:35:02.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:02.428 "is_configured": false, 00:35:02.428 "data_offset": 0, 00:35:02.428 "data_size": 0 00:35:02.428 } 00:35:02.428 ] 00:35:02.428 
}' 00:35:02.428 01:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:02.428 01:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:03.016 01:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:35:03.016 [2024-07-25 01:01:25.621935] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:03.016 [2024-07-25 01:01:25.622450] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:35:03.016 [2024-07-25 01:01:25.622577] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:35:03.016 [2024-07-25 01:01:25.622737] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:35:03.016 BaseBdev4 00:35:03.016 [2024-07-25 01:01:25.630459] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:35:03.016 [2024-07-25 01:01:25.630578] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:35:03.016 [2024-07-25 01:01:25.630883] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:03.016 01:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:35:03.016 01:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:35:03.016 01:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:35:03.016 01:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:35:03.016 01:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:35:03.016 01:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:35:03.016 01:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:03.293 01:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:35:03.552 [ 00:35:03.552 { 00:35:03.552 "name": "BaseBdev4", 00:35:03.552 "aliases": [ 00:35:03.552 "4745acd7-5b20-4dc2-97c7-2117aee2cea9" 00:35:03.552 ], 00:35:03.552 "product_name": "Malloc disk", 00:35:03.552 "block_size": 512, 00:35:03.552 "num_blocks": 65536, 00:35:03.552 "uuid": "4745acd7-5b20-4dc2-97c7-2117aee2cea9", 00:35:03.552 "assigned_rate_limits": { 00:35:03.552 "rw_ios_per_sec": 0, 00:35:03.552 "rw_mbytes_per_sec": 0, 00:35:03.552 "r_mbytes_per_sec": 0, 00:35:03.552 "w_mbytes_per_sec": 0 00:35:03.552 }, 00:35:03.552 "claimed": true, 00:35:03.552 "claim_type": "exclusive_write", 00:35:03.552 "zoned": false, 00:35:03.552 "supported_io_types": { 00:35:03.552 "read": true, 00:35:03.552 "write": true, 00:35:03.552 "unmap": true, 00:35:03.552 "flush": true, 00:35:03.552 "reset": true, 00:35:03.552 "nvme_admin": false, 00:35:03.552 "nvme_io": false, 00:35:03.552 "nvme_io_md": false, 00:35:03.552 "write_zeroes": true, 00:35:03.552 "zcopy": true, 00:35:03.552 "get_zone_info": false, 00:35:03.552 "zone_management": false, 00:35:03.552 "zone_append": false, 00:35:03.552 "compare": false, 
00:35:03.552 "compare_and_write": false, 00:35:03.552 "abort": true, 00:35:03.552 "seek_hole": false, 00:35:03.552 "seek_data": false, 00:35:03.552 "copy": true, 00:35:03.552 "nvme_iov_md": false 00:35:03.552 }, 00:35:03.552 "memory_domains": [ 00:35:03.552 { 00:35:03.552 "dma_device_id": "system", 00:35:03.552 "dma_device_type": 1 00:35:03.552 }, 00:35:03.552 { 00:35:03.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:03.552 "dma_device_type": 2 00:35:03.552 } 00:35:03.552 ], 00:35:03.552 "driver_specific": {} 00:35:03.552 } 00:35:03.552 ] 00:35:03.552 01:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:35:03.552 01:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:35:03.552 01:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:35:03.552 01:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:35:03.552 01:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:03.552 01:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:03.552 01:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:03.552 01:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:03.552 01:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:03.552 01:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:03.552 01:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:03.552 01:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:03.552 01:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:03.552 01:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:03.552 01:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:03.811 01:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:03.811 "name": "Existed_Raid", 00:35:03.811 "uuid": "66fa9443-6dfd-44ec-9b8b-8cb4bd95673e", 00:35:03.811 "strip_size_kb": 64, 00:35:03.811 "state": "online", 00:35:03.811 "raid_level": "raid5f", 00:35:03.811 "superblock": true, 00:35:03.811 "num_base_bdevs": 4, 00:35:03.811 "num_base_bdevs_discovered": 4, 00:35:03.811 "num_base_bdevs_operational": 4, 00:35:03.811 "base_bdevs_list": [ 00:35:03.811 { 00:35:03.811 "name": "BaseBdev1", 00:35:03.811 "uuid": "c7eebb7c-6ffe-4e3f-9c22-e868259926ae", 00:35:03.811 "is_configured": true, 00:35:03.811 "data_offset": 2048, 00:35:03.811 "data_size": 63488 00:35:03.811 }, 00:35:03.811 { 00:35:03.811 "name": "BaseBdev2", 00:35:03.811 "uuid": "d3b28c98-302d-42d8-93f2-0c9e51efa85c", 00:35:03.811 "is_configured": true, 00:35:03.811 "data_offset": 2048, 00:35:03.811 "data_size": 63488 00:35:03.811 }, 00:35:03.811 { 00:35:03.811 "name": "BaseBdev3", 00:35:03.811 "uuid": "984bbdf1-ede6-4194-8cf7-613d98181762", 00:35:03.811 "is_configured": true, 00:35:03.811 "data_offset": 2048, 00:35:03.811 
"data_size": 63488 00:35:03.811 }, 00:35:03.811 { 00:35:03.811 "name": "BaseBdev4", 00:35:03.811 "uuid": "4745acd7-5b20-4dc2-97c7-2117aee2cea9", 00:35:03.811 "is_configured": true, 00:35:03.811 "data_offset": 2048, 00:35:03.811 "data_size": 63488 00:35:03.811 } 00:35:03.811 ] 00:35:03.811 }' 00:35:03.811 01:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:03.811 01:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:04.378 01:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:35:04.378 01:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:35:04.378 01:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:35:04.378 01:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:35:04.378 01:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:35:04.378 01:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:35:04.378 01:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:35:04.378 01:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:35:04.637 [2024-07-25 01:01:27.112708] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:04.637 01:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:35:04.637 "name": "Existed_Raid", 00:35:04.637 "aliases": [ 00:35:04.637 "66fa9443-6dfd-44ec-9b8b-8cb4bd95673e" 00:35:04.637 ], 00:35:04.637 "product_name": "Raid Volume", 00:35:04.637 "block_size": 512, 00:35:04.637 "num_blocks": 190464, 00:35:04.637 "uuid": "66fa9443-6dfd-44ec-9b8b-8cb4bd95673e", 00:35:04.637 "assigned_rate_limits": { 00:35:04.637 "rw_ios_per_sec": 0, 00:35:04.637 "rw_mbytes_per_sec": 0, 00:35:04.637 "r_mbytes_per_sec": 0, 00:35:04.637 "w_mbytes_per_sec": 0 00:35:04.637 }, 00:35:04.637 "claimed": false, 00:35:04.637 "zoned": false, 00:35:04.637 "supported_io_types": { 00:35:04.637 "read": true, 00:35:04.637 "write": true, 00:35:04.637 "unmap": false, 00:35:04.637 "flush": false, 00:35:04.637 "reset": true, 00:35:04.637 "nvme_admin": false, 00:35:04.637 "nvme_io": false, 00:35:04.637 "nvme_io_md": false, 00:35:04.637 "write_zeroes": true, 00:35:04.637 "zcopy": false, 00:35:04.637 "get_zone_info": false, 00:35:04.637 "zone_management": false, 00:35:04.637 "zone_append": false, 00:35:04.637 "compare": false, 00:35:04.637 "compare_and_write": false, 00:35:04.637 "abort": false, 00:35:04.637 "seek_hole": false, 00:35:04.637 "seek_data": false, 00:35:04.637 "copy": false, 00:35:04.637 "nvme_iov_md": false 00:35:04.637 }, 00:35:04.637 "driver_specific": { 00:35:04.637 "raid": { 00:35:04.637 "uuid": "66fa9443-6dfd-44ec-9b8b-8cb4bd95673e", 00:35:04.637 "strip_size_kb": 64, 00:35:04.637 "state": "online", 00:35:04.637 "raid_level": "raid5f", 00:35:04.637 "superblock": true, 00:35:04.637 "num_base_bdevs": 4, 00:35:04.637 "num_base_bdevs_discovered": 4, 00:35:04.637 "num_base_bdevs_operational": 4, 00:35:04.637 "base_bdevs_list": [ 00:35:04.637 { 00:35:04.637 "name": "BaseBdev1", 00:35:04.637 "uuid": "c7eebb7c-6ffe-4e3f-9c22-e868259926ae", 00:35:04.637 "is_configured": true, 
00:35:04.637 "data_offset": 2048, 00:35:04.637 "data_size": 63488 00:35:04.637 }, 00:35:04.637 { 00:35:04.637 "name": "BaseBdev2", 00:35:04.637 "uuid": "d3b28c98-302d-42d8-93f2-0c9e51efa85c", 00:35:04.637 "is_configured": true, 00:35:04.637 "data_offset": 2048, 00:35:04.637 "data_size": 63488 00:35:04.637 }, 00:35:04.637 { 00:35:04.637 "name": "BaseBdev3", 00:35:04.637 "uuid": "984bbdf1-ede6-4194-8cf7-613d98181762", 00:35:04.637 "is_configured": true, 00:35:04.637 "data_offset": 2048, 00:35:04.637 "data_size": 63488 00:35:04.637 }, 00:35:04.637 { 00:35:04.637 "name": "BaseBdev4", 00:35:04.637 "uuid": "4745acd7-5b20-4dc2-97c7-2117aee2cea9", 00:35:04.637 "is_configured": true, 00:35:04.637 "data_offset": 2048, 00:35:04.637 "data_size": 63488 00:35:04.637 } 00:35:04.637 ] 00:35:04.637 } 00:35:04.637 } 00:35:04.637 }' 00:35:04.637 01:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:04.637 01:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:35:04.637 BaseBdev2 00:35:04.637 BaseBdev3 00:35:04.637 BaseBdev4' 00:35:04.638 01:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:04.638 01:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:35:04.638 01:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:04.896 01:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:04.896 "name": "BaseBdev1", 00:35:04.896 "aliases": [ 00:35:04.896 "c7eebb7c-6ffe-4e3f-9c22-e868259926ae" 00:35:04.896 ], 00:35:04.896 "product_name": "Malloc disk", 00:35:04.896 "block_size": 512, 00:35:04.896 "num_blocks": 65536, 00:35:04.896 "uuid": "c7eebb7c-6ffe-4e3f-9c22-e868259926ae", 00:35:04.896 "assigned_rate_limits": { 00:35:04.896 "rw_ios_per_sec": 0, 00:35:04.896 "rw_mbytes_per_sec": 0, 00:35:04.896 "r_mbytes_per_sec": 0, 00:35:04.897 "w_mbytes_per_sec": 0 00:35:04.897 }, 00:35:04.897 "claimed": true, 00:35:04.897 "claim_type": "exclusive_write", 00:35:04.897 "zoned": false, 00:35:04.897 "supported_io_types": { 00:35:04.897 "read": true, 00:35:04.897 "write": true, 00:35:04.897 "unmap": true, 00:35:04.897 "flush": true, 00:35:04.897 "reset": true, 00:35:04.897 "nvme_admin": false, 00:35:04.897 "nvme_io": false, 00:35:04.897 "nvme_io_md": false, 00:35:04.897 "write_zeroes": true, 00:35:04.897 "zcopy": true, 00:35:04.897 "get_zone_info": false, 00:35:04.897 "zone_management": false, 00:35:04.897 "zone_append": false, 00:35:04.897 "compare": false, 00:35:04.897 "compare_and_write": false, 00:35:04.897 "abort": true, 00:35:04.897 "seek_hole": false, 00:35:04.897 "seek_data": false, 00:35:04.897 "copy": true, 00:35:04.897 "nvme_iov_md": false 00:35:04.897 }, 00:35:04.897 "memory_domains": [ 00:35:04.897 { 00:35:04.897 "dma_device_id": "system", 00:35:04.897 "dma_device_type": 1 00:35:04.897 }, 00:35:04.897 { 00:35:04.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:04.897 "dma_device_type": 2 00:35:04.897 } 00:35:04.897 ], 00:35:04.897 "driver_specific": {} 00:35:04.897 }' 00:35:04.897 01:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:04.897 01:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 
00:35:04.897 01:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:04.897 01:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:04.897 01:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:05.156 01:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:05.156 01:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:05.156 01:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:05.156 01:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:05.156 01:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:05.156 01:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:05.156 01:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:05.156 01:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:05.156 01:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:05.156 01:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:35:05.415 01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:05.415 "name": "BaseBdev2", 00:35:05.415 "aliases": [ 00:35:05.415 "d3b28c98-302d-42d8-93f2-0c9e51efa85c" 00:35:05.415 ], 00:35:05.415 "product_name": "Malloc disk", 00:35:05.415 "block_size": 512, 00:35:05.415 "num_blocks": 65536, 00:35:05.415 "uuid": "d3b28c98-302d-42d8-93f2-0c9e51efa85c", 00:35:05.415 "assigned_rate_limits": { 00:35:05.415 "rw_ios_per_sec": 0, 00:35:05.415 "rw_mbytes_per_sec": 0, 00:35:05.415 "r_mbytes_per_sec": 0, 00:35:05.415 "w_mbytes_per_sec": 0 00:35:05.415 }, 00:35:05.415 "claimed": true, 00:35:05.415 "claim_type": "exclusive_write", 00:35:05.415 "zoned": false, 00:35:05.415 "supported_io_types": { 00:35:05.415 "read": true, 00:35:05.415 "write": true, 00:35:05.415 "unmap": true, 00:35:05.415 "flush": true, 00:35:05.415 "reset": true, 00:35:05.415 "nvme_admin": false, 00:35:05.415 "nvme_io": false, 00:35:05.415 "nvme_io_md": false, 00:35:05.415 "write_zeroes": true, 00:35:05.415 "zcopy": true, 00:35:05.415 "get_zone_info": false, 00:35:05.415 "zone_management": false, 00:35:05.415 "zone_append": false, 00:35:05.415 "compare": false, 00:35:05.415 "compare_and_write": false, 00:35:05.415 "abort": true, 00:35:05.415 "seek_hole": false, 00:35:05.415 "seek_data": false, 00:35:05.415 "copy": true, 00:35:05.415 "nvme_iov_md": false 00:35:05.416 }, 00:35:05.416 "memory_domains": [ 00:35:05.416 { 00:35:05.416 "dma_device_id": "system", 00:35:05.416 "dma_device_type": 1 00:35:05.416 }, 00:35:05.416 { 00:35:05.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:05.416 "dma_device_type": 2 00:35:05.416 } 00:35:05.416 ], 00:35:05.416 "driver_specific": {} 00:35:05.416 }' 00:35:05.416 01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:05.416 01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:05.674 01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:35:05.674 01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:05.674 01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:05.674 01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:05.674 01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:05.674 01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:05.674 01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:05.674 01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:05.674 01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:05.674 01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:05.933 01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:05.933 01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:35:05.933 01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:05.933 01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:05.933 "name": "BaseBdev3", 00:35:05.933 "aliases": [ 00:35:05.933 "984bbdf1-ede6-4194-8cf7-613d98181762" 00:35:05.933 ], 00:35:05.933 "product_name": "Malloc disk", 00:35:05.933 "block_size": 512, 00:35:05.933 "num_blocks": 65536, 00:35:05.933 "uuid": "984bbdf1-ede6-4194-8cf7-613d98181762", 00:35:05.933 "assigned_rate_limits": { 00:35:05.933 "rw_ios_per_sec": 0, 00:35:05.933 "rw_mbytes_per_sec": 0, 00:35:05.933 "r_mbytes_per_sec": 0, 00:35:05.933 "w_mbytes_per_sec": 0 00:35:05.933 }, 00:35:05.933 "claimed": true, 00:35:05.933 "claim_type": "exclusive_write", 00:35:05.933 "zoned": false, 00:35:05.933 "supported_io_types": { 00:35:05.933 "read": true, 00:35:05.933 "write": true, 00:35:05.933 "unmap": true, 00:35:05.933 "flush": true, 00:35:05.933 "reset": true, 00:35:05.933 "nvme_admin": false, 00:35:05.933 "nvme_io": false, 00:35:05.933 "nvme_io_md": false, 00:35:05.933 "write_zeroes": true, 00:35:05.933 "zcopy": true, 00:35:05.933 "get_zone_info": false, 00:35:05.933 "zone_management": false, 00:35:05.933 "zone_append": false, 00:35:05.933 "compare": false, 00:35:05.933 "compare_and_write": false, 00:35:05.933 "abort": true, 00:35:05.933 "seek_hole": false, 00:35:05.933 "seek_data": false, 00:35:05.933 "copy": true, 00:35:05.933 "nvme_iov_md": false 00:35:05.933 }, 00:35:05.933 "memory_domains": [ 00:35:05.933 { 00:35:05.933 "dma_device_id": "system", 00:35:05.933 "dma_device_type": 1 00:35:05.933 }, 00:35:05.933 { 00:35:05.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:05.933 "dma_device_type": 2 00:35:05.933 } 00:35:05.933 ], 00:35:05.933 "driver_specific": {} 00:35:05.933 }' 00:35:05.933 01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:05.933 01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:05.933 01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:05.933 01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:06.192 
01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:06.192 01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:06.192 01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:06.192 01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:06.192 01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:06.192 01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:06.192 01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:06.192 01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:06.192 01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:06.192 01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:35:06.192 01:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:06.451 01:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:06.451 "name": "BaseBdev4", 00:35:06.451 "aliases": [ 00:35:06.451 "4745acd7-5b20-4dc2-97c7-2117aee2cea9" 00:35:06.451 ], 00:35:06.451 "product_name": "Malloc disk", 00:35:06.451 "block_size": 512, 00:35:06.451 "num_blocks": 65536, 00:35:06.451 "uuid": "4745acd7-5b20-4dc2-97c7-2117aee2cea9", 00:35:06.451 "assigned_rate_limits": { 00:35:06.451 "rw_ios_per_sec": 0, 00:35:06.451 "rw_mbytes_per_sec": 0, 00:35:06.451 "r_mbytes_per_sec": 0, 00:35:06.451 "w_mbytes_per_sec": 0 00:35:06.451 }, 00:35:06.451 "claimed": true, 00:35:06.451 "claim_type": "exclusive_write", 00:35:06.451 "zoned": false, 00:35:06.451 "supported_io_types": { 00:35:06.451 "read": true, 00:35:06.451 "write": true, 00:35:06.451 "unmap": true, 00:35:06.451 "flush": true, 00:35:06.451 "reset": true, 00:35:06.451 "nvme_admin": false, 00:35:06.451 "nvme_io": false, 00:35:06.451 "nvme_io_md": false, 00:35:06.451 "write_zeroes": true, 00:35:06.451 "zcopy": true, 00:35:06.451 "get_zone_info": false, 00:35:06.451 "zone_management": false, 00:35:06.451 "zone_append": false, 00:35:06.451 "compare": false, 00:35:06.451 "compare_and_write": false, 00:35:06.451 "abort": true, 00:35:06.451 "seek_hole": false, 00:35:06.451 "seek_data": false, 00:35:06.451 "copy": true, 00:35:06.451 "nvme_iov_md": false 00:35:06.451 }, 00:35:06.451 "memory_domains": [ 00:35:06.451 { 00:35:06.451 "dma_device_id": "system", 00:35:06.451 "dma_device_type": 1 00:35:06.451 }, 00:35:06.451 { 00:35:06.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:06.451 "dma_device_type": 2 00:35:06.451 } 00:35:06.451 ], 00:35:06.451 "driver_specific": {} 00:35:06.451 }' 00:35:06.451 01:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:06.710 01:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:06.710 01:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:06.710 01:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:06.710 01:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:06.710 01:01:29 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:06.710 01:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:06.710 01:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:06.710 01:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:06.710 01:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:06.969 01:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:06.969 01:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:06.969 01:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:35:07.228 [2024-07-25 01:01:29.649110] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:07.228 01:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:35:07.228 01:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:35:07.228 01:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:35:07.228 01:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:35:07.228 01:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:35:07.228 01:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:35:07.228 01:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:07.228 01:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:07.228 01:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:07.228 01:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:07.228 01:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:07.228 01:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:07.228 01:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:07.228 01:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:07.228 01:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:07.228 01:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:07.228 01:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:07.487 01:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:07.487 "name": "Existed_Raid", 00:35:07.487 "uuid": "66fa9443-6dfd-44ec-9b8b-8cb4bd95673e", 00:35:07.487 "strip_size_kb": 64, 00:35:07.487 "state": "online", 00:35:07.487 "raid_level": "raid5f", 00:35:07.487 "superblock": true, 00:35:07.487 "num_base_bdevs": 4, 00:35:07.487 "num_base_bdevs_discovered": 3, 00:35:07.487 
"num_base_bdevs_operational": 3, 00:35:07.487 "base_bdevs_list": [ 00:35:07.487 { 00:35:07.487 "name": null, 00:35:07.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:07.487 "is_configured": false, 00:35:07.487 "data_offset": 2048, 00:35:07.487 "data_size": 63488 00:35:07.487 }, 00:35:07.487 { 00:35:07.487 "name": "BaseBdev2", 00:35:07.487 "uuid": "d3b28c98-302d-42d8-93f2-0c9e51efa85c", 00:35:07.487 "is_configured": true, 00:35:07.487 "data_offset": 2048, 00:35:07.487 "data_size": 63488 00:35:07.487 }, 00:35:07.487 { 00:35:07.487 "name": "BaseBdev3", 00:35:07.487 "uuid": "984bbdf1-ede6-4194-8cf7-613d98181762", 00:35:07.487 "is_configured": true, 00:35:07.487 "data_offset": 2048, 00:35:07.487 "data_size": 63488 00:35:07.487 }, 00:35:07.487 { 00:35:07.487 "name": "BaseBdev4", 00:35:07.487 "uuid": "4745acd7-5b20-4dc2-97c7-2117aee2cea9", 00:35:07.487 "is_configured": true, 00:35:07.487 "data_offset": 2048, 00:35:07.487 "data_size": 63488 00:35:07.487 } 00:35:07.487 ] 00:35:07.487 }' 00:35:07.487 01:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:07.488 01:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:08.054 01:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:35:08.054 01:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:35:08.054 01:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:08.054 01:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:35:08.313 01:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:35:08.313 01:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:08.313 01:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:35:08.572 [2024-07-25 01:01:31.014732] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:08.572 [2024-07-25 01:01:31.015025] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:08.572 [2024-07-25 01:01:31.112657] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:08.572 01:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:35:08.572 01:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:35:08.572 01:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:08.572 01:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:35:08.830 01:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:35:08.830 01:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:08.830 01:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:35:09.089 [2024-07-25 01:01:31.536814] 
bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:35:09.089 01:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:35:09.089 01:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:35:09.089 01:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:09.089 01:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:35:09.348 01:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:35:09.348 01:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:09.348 01:01:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:35:09.607 [2024-07-25 01:01:32.127309] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:35:09.607 [2024-07-25 01:01:32.127501] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:35:09.607 01:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:35:09.607 01:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:35:09.607 01:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:09.607 01:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:35:09.867 01:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:35:09.867 01:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:35:09.867 01:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:35:09.867 01:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:35:09.867 01:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:35:09.867 01:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:35:10.126 BaseBdev2 00:35:10.126 01:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:35:10.126 01:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:35:10.126 01:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:35:10.126 01:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:35:10.126 01:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:35:10.126 01:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:35:10.126 01:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:10.385 01:01:32 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:10.644 [ 00:35:10.644 { 00:35:10.644 "name": "BaseBdev2", 00:35:10.644 "aliases": [ 00:35:10.644 "1739667b-5e9c-46cb-b61a-80c67c7b2d42" 00:35:10.644 ], 00:35:10.644 "product_name": "Malloc disk", 00:35:10.644 "block_size": 512, 00:35:10.644 "num_blocks": 65536, 00:35:10.644 "uuid": "1739667b-5e9c-46cb-b61a-80c67c7b2d42", 00:35:10.644 "assigned_rate_limits": { 00:35:10.644 "rw_ios_per_sec": 0, 00:35:10.644 "rw_mbytes_per_sec": 0, 00:35:10.644 "r_mbytes_per_sec": 0, 00:35:10.644 "w_mbytes_per_sec": 0 00:35:10.644 }, 00:35:10.644 "claimed": false, 00:35:10.644 "zoned": false, 00:35:10.644 "supported_io_types": { 00:35:10.644 "read": true, 00:35:10.644 "write": true, 00:35:10.644 "unmap": true, 00:35:10.644 "flush": true, 00:35:10.645 "reset": true, 00:35:10.645 "nvme_admin": false, 00:35:10.645 "nvme_io": false, 00:35:10.645 "nvme_io_md": false, 00:35:10.645 "write_zeroes": true, 00:35:10.645 "zcopy": true, 00:35:10.645 "get_zone_info": false, 00:35:10.645 "zone_management": false, 00:35:10.645 "zone_append": false, 00:35:10.645 "compare": false, 00:35:10.645 "compare_and_write": false, 00:35:10.645 "abort": true, 00:35:10.645 "seek_hole": false, 00:35:10.645 "seek_data": false, 00:35:10.645 "copy": true, 00:35:10.645 "nvme_iov_md": false 00:35:10.645 }, 00:35:10.645 "memory_domains": [ 00:35:10.645 { 00:35:10.645 "dma_device_id": "system", 00:35:10.645 "dma_device_type": 1 00:35:10.645 }, 00:35:10.645 { 00:35:10.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:10.645 "dma_device_type": 2 00:35:10.645 } 00:35:10.645 ], 00:35:10.645 "driver_specific": {} 00:35:10.645 } 00:35:10.645 ] 00:35:10.645 01:01:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:35:10.645 01:01:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:35:10.645 01:01:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:35:10.645 01:01:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:35:10.903 BaseBdev3 00:35:10.903 01:01:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:35:10.903 01:01:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:35:10.903 01:01:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:35:10.903 01:01:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:35:10.903 01:01:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:35:10.903 01:01:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:35:10.903 01:01:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:11.162 01:01:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:35:11.162 [ 00:35:11.162 { 00:35:11.163 "name": "BaseBdev3", 00:35:11.163 "aliases": [ 00:35:11.163 "7c5d6f9a-80ad-41e7-82b0-ba53825c11c4" 
00:35:11.163 ], 00:35:11.163 "product_name": "Malloc disk", 00:35:11.163 "block_size": 512, 00:35:11.163 "num_blocks": 65536, 00:35:11.163 "uuid": "7c5d6f9a-80ad-41e7-82b0-ba53825c11c4", 00:35:11.163 "assigned_rate_limits": { 00:35:11.163 "rw_ios_per_sec": 0, 00:35:11.163 "rw_mbytes_per_sec": 0, 00:35:11.163 "r_mbytes_per_sec": 0, 00:35:11.163 "w_mbytes_per_sec": 0 00:35:11.163 }, 00:35:11.163 "claimed": false, 00:35:11.163 "zoned": false, 00:35:11.163 "supported_io_types": { 00:35:11.163 "read": true, 00:35:11.163 "write": true, 00:35:11.163 "unmap": true, 00:35:11.163 "flush": true, 00:35:11.163 "reset": true, 00:35:11.163 "nvme_admin": false, 00:35:11.163 "nvme_io": false, 00:35:11.163 "nvme_io_md": false, 00:35:11.163 "write_zeroes": true, 00:35:11.163 "zcopy": true, 00:35:11.163 "get_zone_info": false, 00:35:11.163 "zone_management": false, 00:35:11.163 "zone_append": false, 00:35:11.163 "compare": false, 00:35:11.163 "compare_and_write": false, 00:35:11.163 "abort": true, 00:35:11.163 "seek_hole": false, 00:35:11.163 "seek_data": false, 00:35:11.163 "copy": true, 00:35:11.163 "nvme_iov_md": false 00:35:11.163 }, 00:35:11.163 "memory_domains": [ 00:35:11.163 { 00:35:11.163 "dma_device_id": "system", 00:35:11.163 "dma_device_type": 1 00:35:11.163 }, 00:35:11.163 { 00:35:11.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:11.163 "dma_device_type": 2 00:35:11.163 } 00:35:11.163 ], 00:35:11.163 "driver_specific": {} 00:35:11.163 } 00:35:11.163 ] 00:35:11.163 01:01:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:35:11.163 01:01:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:35:11.163 01:01:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:35:11.163 01:01:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:35:11.449 BaseBdev4 00:35:11.449 01:01:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:35:11.449 01:01:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:35:11.449 01:01:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:35:11.449 01:01:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:35:11.449 01:01:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:35:11.449 01:01:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:35:11.449 01:01:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:11.708 01:01:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:35:11.968 [ 00:35:11.968 { 00:35:11.968 "name": "BaseBdev4", 00:35:11.968 "aliases": [ 00:35:11.968 "26bc6e16-4c9a-42c9-9118-2276d45614be" 00:35:11.968 ], 00:35:11.968 "product_name": "Malloc disk", 00:35:11.968 "block_size": 512, 00:35:11.968 "num_blocks": 65536, 00:35:11.968 "uuid": "26bc6e16-4c9a-42c9-9118-2276d45614be", 00:35:11.968 "assigned_rate_limits": { 00:35:11.968 "rw_ios_per_sec": 0, 00:35:11.968 
"rw_mbytes_per_sec": 0, 00:35:11.968 "r_mbytes_per_sec": 0, 00:35:11.968 "w_mbytes_per_sec": 0 00:35:11.968 }, 00:35:11.968 "claimed": false, 00:35:11.968 "zoned": false, 00:35:11.968 "supported_io_types": { 00:35:11.968 "read": true, 00:35:11.968 "write": true, 00:35:11.968 "unmap": true, 00:35:11.968 "flush": true, 00:35:11.968 "reset": true, 00:35:11.968 "nvme_admin": false, 00:35:11.968 "nvme_io": false, 00:35:11.968 "nvme_io_md": false, 00:35:11.968 "write_zeroes": true, 00:35:11.968 "zcopy": true, 00:35:11.968 "get_zone_info": false, 00:35:11.968 "zone_management": false, 00:35:11.968 "zone_append": false, 00:35:11.968 "compare": false, 00:35:11.968 "compare_and_write": false, 00:35:11.968 "abort": true, 00:35:11.968 "seek_hole": false, 00:35:11.968 "seek_data": false, 00:35:11.968 "copy": true, 00:35:11.968 "nvme_iov_md": false 00:35:11.968 }, 00:35:11.968 "memory_domains": [ 00:35:11.968 { 00:35:11.968 "dma_device_id": "system", 00:35:11.968 "dma_device_type": 1 00:35:11.968 }, 00:35:11.968 { 00:35:11.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:11.968 "dma_device_type": 2 00:35:11.968 } 00:35:11.968 ], 00:35:11.968 "driver_specific": {} 00:35:11.968 } 00:35:11.968 ] 00:35:11.968 01:01:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:35:11.968 01:01:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:35:11.968 01:01:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:35:11.968 01:01:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:35:11.968 [2024-07-25 01:01:34.594779] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:11.968 [2024-07-25 01:01:34.595037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:11.968 [2024-07-25 01:01:34.595131] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:11.968 [2024-07-25 01:01:34.597091] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:11.968 [2024-07-25 01:01:34.597266] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:11.968 01:01:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:11.968 01:01:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:11.968 01:01:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:11.968 01:01:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:11.968 01:01:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:11.968 01:01:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:11.968 01:01:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:11.968 01:01:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:11.968 01:01:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:11.968 
01:01:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:12.228 01:01:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:12.228 01:01:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:12.228 01:01:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:12.228 "name": "Existed_Raid", 00:35:12.228 "uuid": "372a3e4f-6839-49d6-954f-c74962b76099", 00:35:12.228 "strip_size_kb": 64, 00:35:12.228 "state": "configuring", 00:35:12.228 "raid_level": "raid5f", 00:35:12.228 "superblock": true, 00:35:12.228 "num_base_bdevs": 4, 00:35:12.228 "num_base_bdevs_discovered": 3, 00:35:12.228 "num_base_bdevs_operational": 4, 00:35:12.228 "base_bdevs_list": [ 00:35:12.228 { 00:35:12.228 "name": "BaseBdev1", 00:35:12.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:12.228 "is_configured": false, 00:35:12.228 "data_offset": 0, 00:35:12.228 "data_size": 0 00:35:12.228 }, 00:35:12.228 { 00:35:12.228 "name": "BaseBdev2", 00:35:12.228 "uuid": "1739667b-5e9c-46cb-b61a-80c67c7b2d42", 00:35:12.228 "is_configured": true, 00:35:12.228 "data_offset": 2048, 00:35:12.228 "data_size": 63488 00:35:12.228 }, 00:35:12.228 { 00:35:12.228 "name": "BaseBdev3", 00:35:12.228 "uuid": "7c5d6f9a-80ad-41e7-82b0-ba53825c11c4", 00:35:12.228 "is_configured": true, 00:35:12.228 "data_offset": 2048, 00:35:12.228 "data_size": 63488 00:35:12.228 }, 00:35:12.228 { 00:35:12.228 "name": "BaseBdev4", 00:35:12.228 "uuid": "26bc6e16-4c9a-42c9-9118-2276d45614be", 00:35:12.228 "is_configured": true, 00:35:12.228 "data_offset": 2048, 00:35:12.228 "data_size": 63488 00:35:12.228 } 00:35:12.228 ] 00:35:12.228 }' 00:35:12.228 01:01:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:12.228 01:01:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:12.796 01:01:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:35:13.055 [2024-07-25 01:01:35.666938] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:13.055 01:01:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:13.055 01:01:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:13.055 01:01:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:13.055 01:01:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:13.055 01:01:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:13.055 01:01:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:13.055 01:01:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:13.055 01:01:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:13.055 01:01:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:13.055 01:01:35 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:35:13.056 01:01:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:13.056 01:01:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:13.315 01:01:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:13.315 "name": "Existed_Raid", 00:35:13.315 "uuid": "372a3e4f-6839-49d6-954f-c74962b76099", 00:35:13.315 "strip_size_kb": 64, 00:35:13.315 "state": "configuring", 00:35:13.315 "raid_level": "raid5f", 00:35:13.315 "superblock": true, 00:35:13.315 "num_base_bdevs": 4, 00:35:13.315 "num_base_bdevs_discovered": 2, 00:35:13.315 "num_base_bdevs_operational": 4, 00:35:13.315 "base_bdevs_list": [ 00:35:13.315 { 00:35:13.315 "name": "BaseBdev1", 00:35:13.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:13.315 "is_configured": false, 00:35:13.315 "data_offset": 0, 00:35:13.315 "data_size": 0 00:35:13.315 }, 00:35:13.315 { 00:35:13.315 "name": null, 00:35:13.315 "uuid": "1739667b-5e9c-46cb-b61a-80c67c7b2d42", 00:35:13.315 "is_configured": false, 00:35:13.315 "data_offset": 2048, 00:35:13.315 "data_size": 63488 00:35:13.315 }, 00:35:13.315 { 00:35:13.315 "name": "BaseBdev3", 00:35:13.315 "uuid": "7c5d6f9a-80ad-41e7-82b0-ba53825c11c4", 00:35:13.315 "is_configured": true, 00:35:13.315 "data_offset": 2048, 00:35:13.315 "data_size": 63488 00:35:13.315 }, 00:35:13.315 { 00:35:13.315 "name": "BaseBdev4", 00:35:13.315 "uuid": "26bc6e16-4c9a-42c9-9118-2276d45614be", 00:35:13.315 "is_configured": true, 00:35:13.315 "data_offset": 2048, 00:35:13.315 "data_size": 63488 00:35:13.315 } 00:35:13.315 ] 00:35:13.315 }' 00:35:13.315 01:01:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:13.315 01:01:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.883 01:01:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:13.883 01:01:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:35:14.142 01:01:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:35:14.142 01:01:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:35:14.401 [2024-07-25 01:01:36.915002] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:14.401 BaseBdev1 00:35:14.401 01:01:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:35:14.401 01:01:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:35:14.401 01:01:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:35:14.401 01:01:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:35:14.401 01:01:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:35:14.401 01:01:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:35:14.401 01:01:36 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:14.660 01:01:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:14.919 [ 00:35:14.920 { 00:35:14.920 "name": "BaseBdev1", 00:35:14.920 "aliases": [ 00:35:14.920 "b47cf837-8df4-4f5e-8fb4-4e0f042ba2c0" 00:35:14.920 ], 00:35:14.920 "product_name": "Malloc disk", 00:35:14.920 "block_size": 512, 00:35:14.920 "num_blocks": 65536, 00:35:14.920 "uuid": "b47cf837-8df4-4f5e-8fb4-4e0f042ba2c0", 00:35:14.920 "assigned_rate_limits": { 00:35:14.920 "rw_ios_per_sec": 0, 00:35:14.920 "rw_mbytes_per_sec": 0, 00:35:14.920 "r_mbytes_per_sec": 0, 00:35:14.920 "w_mbytes_per_sec": 0 00:35:14.920 }, 00:35:14.920 "claimed": true, 00:35:14.920 "claim_type": "exclusive_write", 00:35:14.920 "zoned": false, 00:35:14.920 "supported_io_types": { 00:35:14.920 "read": true, 00:35:14.920 "write": true, 00:35:14.920 "unmap": true, 00:35:14.920 "flush": true, 00:35:14.920 "reset": true, 00:35:14.920 "nvme_admin": false, 00:35:14.920 "nvme_io": false, 00:35:14.920 "nvme_io_md": false, 00:35:14.920 "write_zeroes": true, 00:35:14.920 "zcopy": true, 00:35:14.920 "get_zone_info": false, 00:35:14.920 "zone_management": false, 00:35:14.920 "zone_append": false, 00:35:14.920 "compare": false, 00:35:14.920 "compare_and_write": false, 00:35:14.920 "abort": true, 00:35:14.920 "seek_hole": false, 00:35:14.920 "seek_data": false, 00:35:14.920 "copy": true, 00:35:14.920 "nvme_iov_md": false 00:35:14.920 }, 00:35:14.920 "memory_domains": [ 00:35:14.920 { 00:35:14.920 "dma_device_id": "system", 00:35:14.920 "dma_device_type": 1 00:35:14.920 }, 00:35:14.920 { 00:35:14.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:14.920 "dma_device_type": 2 00:35:14.920 } 00:35:14.920 ], 00:35:14.920 "driver_specific": {} 00:35:14.920 } 00:35:14.920 ] 00:35:14.920 01:01:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:35:14.920 01:01:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:14.920 01:01:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:14.920 01:01:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:14.920 01:01:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:14.920 01:01:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:14.920 01:01:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:14.920 01:01:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:14.920 01:01:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:14.920 01:01:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:14.920 01:01:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:14.920 01:01:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:35:14.920 01:01:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:15.179 01:01:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:15.179 "name": "Existed_Raid", 00:35:15.179 "uuid": "372a3e4f-6839-49d6-954f-c74962b76099", 00:35:15.179 "strip_size_kb": 64, 00:35:15.179 "state": "configuring", 00:35:15.179 "raid_level": "raid5f", 00:35:15.179 "superblock": true, 00:35:15.179 "num_base_bdevs": 4, 00:35:15.179 "num_base_bdevs_discovered": 3, 00:35:15.179 "num_base_bdevs_operational": 4, 00:35:15.179 "base_bdevs_list": [ 00:35:15.179 { 00:35:15.179 "name": "BaseBdev1", 00:35:15.179 "uuid": "b47cf837-8df4-4f5e-8fb4-4e0f042ba2c0", 00:35:15.179 "is_configured": true, 00:35:15.179 "data_offset": 2048, 00:35:15.179 "data_size": 63488 00:35:15.179 }, 00:35:15.179 { 00:35:15.179 "name": null, 00:35:15.179 "uuid": "1739667b-5e9c-46cb-b61a-80c67c7b2d42", 00:35:15.179 "is_configured": false, 00:35:15.179 "data_offset": 2048, 00:35:15.179 "data_size": 63488 00:35:15.179 }, 00:35:15.179 { 00:35:15.179 "name": "BaseBdev3", 00:35:15.179 "uuid": "7c5d6f9a-80ad-41e7-82b0-ba53825c11c4", 00:35:15.179 "is_configured": true, 00:35:15.179 "data_offset": 2048, 00:35:15.179 "data_size": 63488 00:35:15.179 }, 00:35:15.179 { 00:35:15.179 "name": "BaseBdev4", 00:35:15.179 "uuid": "26bc6e16-4c9a-42c9-9118-2276d45614be", 00:35:15.179 "is_configured": true, 00:35:15.179 "data_offset": 2048, 00:35:15.180 "data_size": 63488 00:35:15.180 } 00:35:15.180 ] 00:35:15.180 }' 00:35:15.180 01:01:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:15.180 01:01:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:15.748 01:01:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:15.748 01:01:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:35:15.748 01:01:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:35:15.748 01:01:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:35:16.007 [2024-07-25 01:01:38.523312] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:35:16.007 01:01:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:16.007 01:01:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:16.007 01:01:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:16.007 01:01:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:16.007 01:01:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:16.007 01:01:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:16.007 01:01:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:16.007 01:01:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:16.007 01:01:38 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:16.007 01:01:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:16.007 01:01:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:16.007 01:01:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:16.266 01:01:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:16.266 "name": "Existed_Raid", 00:35:16.266 "uuid": "372a3e4f-6839-49d6-954f-c74962b76099", 00:35:16.266 "strip_size_kb": 64, 00:35:16.266 "state": "configuring", 00:35:16.266 "raid_level": "raid5f", 00:35:16.266 "superblock": true, 00:35:16.266 "num_base_bdevs": 4, 00:35:16.266 "num_base_bdevs_discovered": 2, 00:35:16.266 "num_base_bdevs_operational": 4, 00:35:16.266 "base_bdevs_list": [ 00:35:16.266 { 00:35:16.266 "name": "BaseBdev1", 00:35:16.266 "uuid": "b47cf837-8df4-4f5e-8fb4-4e0f042ba2c0", 00:35:16.266 "is_configured": true, 00:35:16.266 "data_offset": 2048, 00:35:16.266 "data_size": 63488 00:35:16.266 }, 00:35:16.266 { 00:35:16.267 "name": null, 00:35:16.267 "uuid": "1739667b-5e9c-46cb-b61a-80c67c7b2d42", 00:35:16.267 "is_configured": false, 00:35:16.267 "data_offset": 2048, 00:35:16.267 "data_size": 63488 00:35:16.267 }, 00:35:16.267 { 00:35:16.267 "name": null, 00:35:16.267 "uuid": "7c5d6f9a-80ad-41e7-82b0-ba53825c11c4", 00:35:16.267 "is_configured": false, 00:35:16.267 "data_offset": 2048, 00:35:16.267 "data_size": 63488 00:35:16.267 }, 00:35:16.267 { 00:35:16.267 "name": "BaseBdev4", 00:35:16.267 "uuid": "26bc6e16-4c9a-42c9-9118-2276d45614be", 00:35:16.267 "is_configured": true, 00:35:16.267 "data_offset": 2048, 00:35:16.267 "data_size": 63488 00:35:16.267 } 00:35:16.267 ] 00:35:16.267 }' 00:35:16.267 01:01:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:16.267 01:01:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:16.835 01:01:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:16.835 01:01:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:35:17.094 01:01:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:35:17.094 01:01:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:35:17.353 [2024-07-25 01:01:39.915621] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:17.353 01:01:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:17.353 01:01:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:17.353 01:01:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:17.353 01:01:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:17.353 01:01:39 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:17.353 01:01:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:17.353 01:01:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:17.353 01:01:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:17.353 01:01:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:17.353 01:01:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:17.353 01:01:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:17.353 01:01:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:17.612 01:01:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:17.612 "name": "Existed_Raid", 00:35:17.612 "uuid": "372a3e4f-6839-49d6-954f-c74962b76099", 00:35:17.612 "strip_size_kb": 64, 00:35:17.612 "state": "configuring", 00:35:17.612 "raid_level": "raid5f", 00:35:17.612 "superblock": true, 00:35:17.612 "num_base_bdevs": 4, 00:35:17.612 "num_base_bdevs_discovered": 3, 00:35:17.612 "num_base_bdevs_operational": 4, 00:35:17.612 "base_bdevs_list": [ 00:35:17.612 { 00:35:17.612 "name": "BaseBdev1", 00:35:17.612 "uuid": "b47cf837-8df4-4f5e-8fb4-4e0f042ba2c0", 00:35:17.612 "is_configured": true, 00:35:17.612 "data_offset": 2048, 00:35:17.612 "data_size": 63488 00:35:17.612 }, 00:35:17.612 { 00:35:17.612 "name": null, 00:35:17.612 "uuid": "1739667b-5e9c-46cb-b61a-80c67c7b2d42", 00:35:17.612 "is_configured": false, 00:35:17.612 "data_offset": 2048, 00:35:17.613 "data_size": 63488 00:35:17.613 }, 00:35:17.613 { 00:35:17.613 "name": "BaseBdev3", 00:35:17.613 "uuid": "7c5d6f9a-80ad-41e7-82b0-ba53825c11c4", 00:35:17.613 "is_configured": true, 00:35:17.613 "data_offset": 2048, 00:35:17.613 "data_size": 63488 00:35:17.613 }, 00:35:17.613 { 00:35:17.613 "name": "BaseBdev4", 00:35:17.613 "uuid": "26bc6e16-4c9a-42c9-9118-2276d45614be", 00:35:17.613 "is_configured": true, 00:35:17.613 "data_offset": 2048, 00:35:17.613 "data_size": 63488 00:35:17.613 } 00:35:17.613 ] 00:35:17.613 }' 00:35:17.613 01:01:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:17.613 01:01:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:18.195 01:01:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:18.195 01:01:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:35:18.452 01:01:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:35:18.452 01:01:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:35:18.710 [2024-07-25 01:01:41.107845] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:18.710 01:01:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:18.710 01:01:41 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:18.710 01:01:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:18.710 01:01:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:18.710 01:01:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:18.710 01:01:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:18.710 01:01:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:18.710 01:01:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:18.710 01:01:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:18.710 01:01:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:18.710 01:01:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:18.710 01:01:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:18.968 01:01:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:18.968 "name": "Existed_Raid", 00:35:18.968 "uuid": "372a3e4f-6839-49d6-954f-c74962b76099", 00:35:18.968 "strip_size_kb": 64, 00:35:18.968 "state": "configuring", 00:35:18.968 "raid_level": "raid5f", 00:35:18.968 "superblock": true, 00:35:18.968 "num_base_bdevs": 4, 00:35:18.968 "num_base_bdevs_discovered": 2, 00:35:18.968 "num_base_bdevs_operational": 4, 00:35:18.968 "base_bdevs_list": [ 00:35:18.968 { 00:35:18.968 "name": null, 00:35:18.968 "uuid": "b47cf837-8df4-4f5e-8fb4-4e0f042ba2c0", 00:35:18.968 "is_configured": false, 00:35:18.968 "data_offset": 2048, 00:35:18.968 "data_size": 63488 00:35:18.968 }, 00:35:18.968 { 00:35:18.968 "name": null, 00:35:18.968 "uuid": "1739667b-5e9c-46cb-b61a-80c67c7b2d42", 00:35:18.968 "is_configured": false, 00:35:18.968 "data_offset": 2048, 00:35:18.968 "data_size": 63488 00:35:18.968 }, 00:35:18.968 { 00:35:18.968 "name": "BaseBdev3", 00:35:18.968 "uuid": "7c5d6f9a-80ad-41e7-82b0-ba53825c11c4", 00:35:18.968 "is_configured": true, 00:35:18.968 "data_offset": 2048, 00:35:18.968 "data_size": 63488 00:35:18.968 }, 00:35:18.968 { 00:35:18.968 "name": "BaseBdev4", 00:35:18.968 "uuid": "26bc6e16-4c9a-42c9-9118-2276d45614be", 00:35:18.968 "is_configured": true, 00:35:18.968 "data_offset": 2048, 00:35:18.968 "data_size": 63488 00:35:18.968 } 00:35:18.968 ] 00:35:18.968 }' 00:35:18.968 01:01:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:18.968 01:01:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:19.534 01:01:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:19.534 01:01:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:35:19.793 01:01:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:35:19.793 01:01:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:35:19.793 [2024-07-25 01:01:42.380534] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:19.793 01:01:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:19.793 01:01:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:19.793 01:01:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:19.793 01:01:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:19.793 01:01:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:19.793 01:01:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:19.793 01:01:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:19.793 01:01:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:19.793 01:01:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:19.793 01:01:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:19.793 01:01:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:19.793 01:01:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:20.052 01:01:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:20.052 "name": "Existed_Raid", 00:35:20.052 "uuid": "372a3e4f-6839-49d6-954f-c74962b76099", 00:35:20.052 "strip_size_kb": 64, 00:35:20.052 "state": "configuring", 00:35:20.052 "raid_level": "raid5f", 00:35:20.052 "superblock": true, 00:35:20.052 "num_base_bdevs": 4, 00:35:20.052 "num_base_bdevs_discovered": 3, 00:35:20.052 "num_base_bdevs_operational": 4, 00:35:20.052 "base_bdevs_list": [ 00:35:20.052 { 00:35:20.052 "name": null, 00:35:20.052 "uuid": "b47cf837-8df4-4f5e-8fb4-4e0f042ba2c0", 00:35:20.052 "is_configured": false, 00:35:20.052 "data_offset": 2048, 00:35:20.052 "data_size": 63488 00:35:20.052 }, 00:35:20.052 { 00:35:20.052 "name": "BaseBdev2", 00:35:20.052 "uuid": "1739667b-5e9c-46cb-b61a-80c67c7b2d42", 00:35:20.052 "is_configured": true, 00:35:20.052 "data_offset": 2048, 00:35:20.052 "data_size": 63488 00:35:20.052 }, 00:35:20.052 { 00:35:20.052 "name": "BaseBdev3", 00:35:20.052 "uuid": "7c5d6f9a-80ad-41e7-82b0-ba53825c11c4", 00:35:20.052 "is_configured": true, 00:35:20.052 "data_offset": 2048, 00:35:20.052 "data_size": 63488 00:35:20.052 }, 00:35:20.052 { 00:35:20.052 "name": "BaseBdev4", 00:35:20.052 "uuid": "26bc6e16-4c9a-42c9-9118-2276d45614be", 00:35:20.052 "is_configured": true, 00:35:20.052 "data_offset": 2048, 00:35:20.052 "data_size": 63488 00:35:20.052 } 00:35:20.052 ] 00:35:20.052 }' 00:35:20.052 01:01:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:20.052 01:01:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:20.621 01:01:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:20.621 01:01:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:35:20.880 01:01:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:35:20.880 01:01:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:20.880 01:01:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:35:21.139 01:01:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u b47cf837-8df4-4f5e-8fb4-4e0f042ba2c0 00:35:21.398 [2024-07-25 01:01:43.964214] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:35:21.398 [2024-07-25 01:01:43.964680] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:35:21.398 [2024-07-25 01:01:43.964803] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:35:21.398 [2024-07-25 01:01:43.964938] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:35:21.398 NewBaseBdev 00:35:21.398 [2024-07-25 01:01:43.971845] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:35:21.398 [2024-07-25 01:01:43.971990] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009380 00:35:21.398 [2024-07-25 01:01:43.972261] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:21.398 01:01:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:35:21.398 01:01:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:35:21.398 01:01:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:35:21.398 01:01:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:35:21.398 01:01:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:35:21.398 01:01:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:35:21.398 01:01:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:21.656 01:01:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:35:21.915 [ 00:35:21.915 { 00:35:21.915 "name": "NewBaseBdev", 00:35:21.915 "aliases": [ 00:35:21.915 "b47cf837-8df4-4f5e-8fb4-4e0f042ba2c0" 00:35:21.915 ], 00:35:21.915 "product_name": "Malloc disk", 00:35:21.915 "block_size": 512, 00:35:21.915 "num_blocks": 65536, 00:35:21.915 "uuid": "b47cf837-8df4-4f5e-8fb4-4e0f042ba2c0", 00:35:21.915 "assigned_rate_limits": { 00:35:21.915 "rw_ios_per_sec": 0, 00:35:21.915 "rw_mbytes_per_sec": 0, 00:35:21.915 "r_mbytes_per_sec": 0, 00:35:21.915 "w_mbytes_per_sec": 0 00:35:21.915 }, 00:35:21.915 "claimed": true, 00:35:21.915 "claim_type": "exclusive_write", 
00:35:21.915 "zoned": false, 00:35:21.915 "supported_io_types": { 00:35:21.915 "read": true, 00:35:21.915 "write": true, 00:35:21.915 "unmap": true, 00:35:21.915 "flush": true, 00:35:21.915 "reset": true, 00:35:21.915 "nvme_admin": false, 00:35:21.915 "nvme_io": false, 00:35:21.915 "nvme_io_md": false, 00:35:21.915 "write_zeroes": true, 00:35:21.915 "zcopy": true, 00:35:21.915 "get_zone_info": false, 00:35:21.915 "zone_management": false, 00:35:21.915 "zone_append": false, 00:35:21.915 "compare": false, 00:35:21.915 "compare_and_write": false, 00:35:21.915 "abort": true, 00:35:21.915 "seek_hole": false, 00:35:21.915 "seek_data": false, 00:35:21.915 "copy": true, 00:35:21.915 "nvme_iov_md": false 00:35:21.915 }, 00:35:21.915 "memory_domains": [ 00:35:21.915 { 00:35:21.915 "dma_device_id": "system", 00:35:21.915 "dma_device_type": 1 00:35:21.915 }, 00:35:21.915 { 00:35:21.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:21.915 "dma_device_type": 2 00:35:21.915 } 00:35:21.915 ], 00:35:21.915 "driver_specific": {} 00:35:21.915 } 00:35:21.915 ] 00:35:21.915 01:01:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:35:21.915 01:01:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:35:21.915 01:01:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:21.915 01:01:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:21.915 01:01:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:21.915 01:01:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:21.915 01:01:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:21.915 01:01:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:21.915 01:01:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:21.915 01:01:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:21.915 01:01:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:21.915 01:01:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:21.916 01:01:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:22.175 01:01:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:22.175 "name": "Existed_Raid", 00:35:22.175 "uuid": "372a3e4f-6839-49d6-954f-c74962b76099", 00:35:22.175 "strip_size_kb": 64, 00:35:22.175 "state": "online", 00:35:22.175 "raid_level": "raid5f", 00:35:22.175 "superblock": true, 00:35:22.175 "num_base_bdevs": 4, 00:35:22.175 "num_base_bdevs_discovered": 4, 00:35:22.175 "num_base_bdevs_operational": 4, 00:35:22.175 "base_bdevs_list": [ 00:35:22.175 { 00:35:22.175 "name": "NewBaseBdev", 00:35:22.175 "uuid": "b47cf837-8df4-4f5e-8fb4-4e0f042ba2c0", 00:35:22.175 "is_configured": true, 00:35:22.175 "data_offset": 2048, 00:35:22.175 "data_size": 63488 00:35:22.175 }, 00:35:22.175 { 00:35:22.175 "name": "BaseBdev2", 00:35:22.175 "uuid": "1739667b-5e9c-46cb-b61a-80c67c7b2d42", 00:35:22.175 
"is_configured": true, 00:35:22.175 "data_offset": 2048, 00:35:22.175 "data_size": 63488 00:35:22.175 }, 00:35:22.175 { 00:35:22.175 "name": "BaseBdev3", 00:35:22.175 "uuid": "7c5d6f9a-80ad-41e7-82b0-ba53825c11c4", 00:35:22.175 "is_configured": true, 00:35:22.175 "data_offset": 2048, 00:35:22.175 "data_size": 63488 00:35:22.175 }, 00:35:22.175 { 00:35:22.175 "name": "BaseBdev4", 00:35:22.175 "uuid": "26bc6e16-4c9a-42c9-9118-2276d45614be", 00:35:22.175 "is_configured": true, 00:35:22.175 "data_offset": 2048, 00:35:22.175 "data_size": 63488 00:35:22.175 } 00:35:22.175 ] 00:35:22.175 }' 00:35:22.175 01:01:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:22.175 01:01:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:22.743 01:01:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:35:22.743 01:01:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:35:22.743 01:01:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:35:22.743 01:01:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:35:22.743 01:01:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:35:22.743 01:01:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:35:22.743 01:01:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:35:22.743 01:01:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:35:23.002 [2024-07-25 01:01:45.470086] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:23.002 01:01:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:35:23.002 "name": "Existed_Raid", 00:35:23.002 "aliases": [ 00:35:23.002 "372a3e4f-6839-49d6-954f-c74962b76099" 00:35:23.002 ], 00:35:23.002 "product_name": "Raid Volume", 00:35:23.002 "block_size": 512, 00:35:23.002 "num_blocks": 190464, 00:35:23.002 "uuid": "372a3e4f-6839-49d6-954f-c74962b76099", 00:35:23.002 "assigned_rate_limits": { 00:35:23.002 "rw_ios_per_sec": 0, 00:35:23.002 "rw_mbytes_per_sec": 0, 00:35:23.002 "r_mbytes_per_sec": 0, 00:35:23.002 "w_mbytes_per_sec": 0 00:35:23.002 }, 00:35:23.002 "claimed": false, 00:35:23.002 "zoned": false, 00:35:23.002 "supported_io_types": { 00:35:23.002 "read": true, 00:35:23.002 "write": true, 00:35:23.002 "unmap": false, 00:35:23.002 "flush": false, 00:35:23.002 "reset": true, 00:35:23.002 "nvme_admin": false, 00:35:23.002 "nvme_io": false, 00:35:23.002 "nvme_io_md": false, 00:35:23.002 "write_zeroes": true, 00:35:23.002 "zcopy": false, 00:35:23.002 "get_zone_info": false, 00:35:23.002 "zone_management": false, 00:35:23.002 "zone_append": false, 00:35:23.002 "compare": false, 00:35:23.002 "compare_and_write": false, 00:35:23.002 "abort": false, 00:35:23.002 "seek_hole": false, 00:35:23.002 "seek_data": false, 00:35:23.002 "copy": false, 00:35:23.002 "nvme_iov_md": false 00:35:23.002 }, 00:35:23.002 "driver_specific": { 00:35:23.002 "raid": { 00:35:23.002 "uuid": "372a3e4f-6839-49d6-954f-c74962b76099", 00:35:23.002 "strip_size_kb": 64, 00:35:23.002 "state": "online", 00:35:23.002 "raid_level": "raid5f", 00:35:23.002 "superblock": true, 
00:35:23.002 "num_base_bdevs": 4, 00:35:23.002 "num_base_bdevs_discovered": 4, 00:35:23.002 "num_base_bdevs_operational": 4, 00:35:23.002 "base_bdevs_list": [ 00:35:23.002 { 00:35:23.002 "name": "NewBaseBdev", 00:35:23.002 "uuid": "b47cf837-8df4-4f5e-8fb4-4e0f042ba2c0", 00:35:23.002 "is_configured": true, 00:35:23.002 "data_offset": 2048, 00:35:23.002 "data_size": 63488 00:35:23.002 }, 00:35:23.002 { 00:35:23.002 "name": "BaseBdev2", 00:35:23.002 "uuid": "1739667b-5e9c-46cb-b61a-80c67c7b2d42", 00:35:23.002 "is_configured": true, 00:35:23.002 "data_offset": 2048, 00:35:23.002 "data_size": 63488 00:35:23.002 }, 00:35:23.002 { 00:35:23.002 "name": "BaseBdev3", 00:35:23.002 "uuid": "7c5d6f9a-80ad-41e7-82b0-ba53825c11c4", 00:35:23.002 "is_configured": true, 00:35:23.002 "data_offset": 2048, 00:35:23.002 "data_size": 63488 00:35:23.002 }, 00:35:23.002 { 00:35:23.002 "name": "BaseBdev4", 00:35:23.002 "uuid": "26bc6e16-4c9a-42c9-9118-2276d45614be", 00:35:23.002 "is_configured": true, 00:35:23.002 "data_offset": 2048, 00:35:23.002 "data_size": 63488 00:35:23.002 } 00:35:23.002 ] 00:35:23.002 } 00:35:23.002 } 00:35:23.002 }' 00:35:23.002 01:01:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:23.002 01:01:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:35:23.002 BaseBdev2 00:35:23.002 BaseBdev3 00:35:23.002 BaseBdev4' 00:35:23.002 01:01:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:23.002 01:01:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:35:23.002 01:01:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:23.261 01:01:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:23.261 "name": "NewBaseBdev", 00:35:23.261 "aliases": [ 00:35:23.261 "b47cf837-8df4-4f5e-8fb4-4e0f042ba2c0" 00:35:23.261 ], 00:35:23.261 "product_name": "Malloc disk", 00:35:23.261 "block_size": 512, 00:35:23.261 "num_blocks": 65536, 00:35:23.261 "uuid": "b47cf837-8df4-4f5e-8fb4-4e0f042ba2c0", 00:35:23.261 "assigned_rate_limits": { 00:35:23.261 "rw_ios_per_sec": 0, 00:35:23.261 "rw_mbytes_per_sec": 0, 00:35:23.261 "r_mbytes_per_sec": 0, 00:35:23.261 "w_mbytes_per_sec": 0 00:35:23.261 }, 00:35:23.261 "claimed": true, 00:35:23.261 "claim_type": "exclusive_write", 00:35:23.261 "zoned": false, 00:35:23.261 "supported_io_types": { 00:35:23.261 "read": true, 00:35:23.261 "write": true, 00:35:23.261 "unmap": true, 00:35:23.261 "flush": true, 00:35:23.261 "reset": true, 00:35:23.261 "nvme_admin": false, 00:35:23.261 "nvme_io": false, 00:35:23.261 "nvme_io_md": false, 00:35:23.261 "write_zeroes": true, 00:35:23.261 "zcopy": true, 00:35:23.261 "get_zone_info": false, 00:35:23.261 "zone_management": false, 00:35:23.261 "zone_append": false, 00:35:23.261 "compare": false, 00:35:23.261 "compare_and_write": false, 00:35:23.261 "abort": true, 00:35:23.261 "seek_hole": false, 00:35:23.261 "seek_data": false, 00:35:23.261 "copy": true, 00:35:23.261 "nvme_iov_md": false 00:35:23.261 }, 00:35:23.261 "memory_domains": [ 00:35:23.261 { 00:35:23.261 "dma_device_id": "system", 00:35:23.261 "dma_device_type": 1 00:35:23.261 }, 00:35:23.261 { 00:35:23.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:23.261 
"dma_device_type": 2 00:35:23.261 } 00:35:23.261 ], 00:35:23.261 "driver_specific": {} 00:35:23.261 }' 00:35:23.261 01:01:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:23.261 01:01:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:23.261 01:01:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:23.261 01:01:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:23.261 01:01:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:23.261 01:01:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:23.261 01:01:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:23.261 01:01:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:23.520 01:01:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:23.520 01:01:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:23.520 01:01:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:23.520 01:01:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:23.520 01:01:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:23.520 01:01:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:35:23.520 01:01:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:23.779 01:01:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:23.779 "name": "BaseBdev2", 00:35:23.779 "aliases": [ 00:35:23.779 "1739667b-5e9c-46cb-b61a-80c67c7b2d42" 00:35:23.779 ], 00:35:23.779 "product_name": "Malloc disk", 00:35:23.779 "block_size": 512, 00:35:23.779 "num_blocks": 65536, 00:35:23.779 "uuid": "1739667b-5e9c-46cb-b61a-80c67c7b2d42", 00:35:23.779 "assigned_rate_limits": { 00:35:23.779 "rw_ios_per_sec": 0, 00:35:23.779 "rw_mbytes_per_sec": 0, 00:35:23.779 "r_mbytes_per_sec": 0, 00:35:23.779 "w_mbytes_per_sec": 0 00:35:23.779 }, 00:35:23.779 "claimed": true, 00:35:23.780 "claim_type": "exclusive_write", 00:35:23.780 "zoned": false, 00:35:23.780 "supported_io_types": { 00:35:23.780 "read": true, 00:35:23.780 "write": true, 00:35:23.780 "unmap": true, 00:35:23.780 "flush": true, 00:35:23.780 "reset": true, 00:35:23.780 "nvme_admin": false, 00:35:23.780 "nvme_io": false, 00:35:23.780 "nvme_io_md": false, 00:35:23.780 "write_zeroes": true, 00:35:23.780 "zcopy": true, 00:35:23.780 "get_zone_info": false, 00:35:23.780 "zone_management": false, 00:35:23.780 "zone_append": false, 00:35:23.780 "compare": false, 00:35:23.780 "compare_and_write": false, 00:35:23.780 "abort": true, 00:35:23.780 "seek_hole": false, 00:35:23.780 "seek_data": false, 00:35:23.780 "copy": true, 00:35:23.780 "nvme_iov_md": false 00:35:23.780 }, 00:35:23.780 "memory_domains": [ 00:35:23.780 { 00:35:23.780 "dma_device_id": "system", 00:35:23.780 "dma_device_type": 1 00:35:23.780 }, 00:35:23.780 { 00:35:23.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:23.780 "dma_device_type": 2 00:35:23.780 } 00:35:23.780 ], 00:35:23.780 "driver_specific": {} 00:35:23.780 }' 00:35:23.780 
01:01:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:23.780 01:01:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:23.780 01:01:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:23.780 01:01:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:23.780 01:01:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:23.780 01:01:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:23.780 01:01:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:24.039 01:01:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:24.039 01:01:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:24.039 01:01:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:24.039 01:01:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:24.039 01:01:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:24.039 01:01:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:24.039 01:01:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:35:24.039 01:01:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:24.298 01:01:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:24.298 "name": "BaseBdev3", 00:35:24.298 "aliases": [ 00:35:24.298 "7c5d6f9a-80ad-41e7-82b0-ba53825c11c4" 00:35:24.298 ], 00:35:24.298 "product_name": "Malloc disk", 00:35:24.298 "block_size": 512, 00:35:24.298 "num_blocks": 65536, 00:35:24.298 "uuid": "7c5d6f9a-80ad-41e7-82b0-ba53825c11c4", 00:35:24.298 "assigned_rate_limits": { 00:35:24.298 "rw_ios_per_sec": 0, 00:35:24.298 "rw_mbytes_per_sec": 0, 00:35:24.298 "r_mbytes_per_sec": 0, 00:35:24.298 "w_mbytes_per_sec": 0 00:35:24.298 }, 00:35:24.298 "claimed": true, 00:35:24.298 "claim_type": "exclusive_write", 00:35:24.298 "zoned": false, 00:35:24.298 "supported_io_types": { 00:35:24.298 "read": true, 00:35:24.298 "write": true, 00:35:24.298 "unmap": true, 00:35:24.298 "flush": true, 00:35:24.298 "reset": true, 00:35:24.298 "nvme_admin": false, 00:35:24.298 "nvme_io": false, 00:35:24.298 "nvme_io_md": false, 00:35:24.298 "write_zeroes": true, 00:35:24.298 "zcopy": true, 00:35:24.298 "get_zone_info": false, 00:35:24.298 "zone_management": false, 00:35:24.298 "zone_append": false, 00:35:24.298 "compare": false, 00:35:24.298 "compare_and_write": false, 00:35:24.298 "abort": true, 00:35:24.298 "seek_hole": false, 00:35:24.298 "seek_data": false, 00:35:24.298 "copy": true, 00:35:24.298 "nvme_iov_md": false 00:35:24.298 }, 00:35:24.298 "memory_domains": [ 00:35:24.298 { 00:35:24.298 "dma_device_id": "system", 00:35:24.298 "dma_device_type": 1 00:35:24.298 }, 00:35:24.298 { 00:35:24.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:24.298 "dma_device_type": 2 00:35:24.298 } 00:35:24.298 ], 00:35:24.298 "driver_specific": {} 00:35:24.298 }' 00:35:24.298 01:01:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:24.298 01:01:46 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:24.298 01:01:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:24.298 01:01:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:24.298 01:01:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:24.298 01:01:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:24.298 01:01:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:24.298 01:01:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:24.557 01:01:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:24.557 01:01:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:24.557 01:01:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:24.557 01:01:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:24.557 01:01:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:24.557 01:01:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:35:24.557 01:01:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:24.830 01:01:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:24.830 "name": "BaseBdev4", 00:35:24.830 "aliases": [ 00:35:24.830 "26bc6e16-4c9a-42c9-9118-2276d45614be" 00:35:24.830 ], 00:35:24.830 "product_name": "Malloc disk", 00:35:24.830 "block_size": 512, 00:35:24.830 "num_blocks": 65536, 00:35:24.830 "uuid": "26bc6e16-4c9a-42c9-9118-2276d45614be", 00:35:24.830 "assigned_rate_limits": { 00:35:24.830 "rw_ios_per_sec": 0, 00:35:24.830 "rw_mbytes_per_sec": 0, 00:35:24.830 "r_mbytes_per_sec": 0, 00:35:24.830 "w_mbytes_per_sec": 0 00:35:24.830 }, 00:35:24.830 "claimed": true, 00:35:24.830 "claim_type": "exclusive_write", 00:35:24.830 "zoned": false, 00:35:24.830 "supported_io_types": { 00:35:24.830 "read": true, 00:35:24.830 "write": true, 00:35:24.830 "unmap": true, 00:35:24.830 "flush": true, 00:35:24.830 "reset": true, 00:35:24.830 "nvme_admin": false, 00:35:24.830 "nvme_io": false, 00:35:24.830 "nvme_io_md": false, 00:35:24.830 "write_zeroes": true, 00:35:24.830 "zcopy": true, 00:35:24.830 "get_zone_info": false, 00:35:24.830 "zone_management": false, 00:35:24.830 "zone_append": false, 00:35:24.830 "compare": false, 00:35:24.830 "compare_and_write": false, 00:35:24.830 "abort": true, 00:35:24.830 "seek_hole": false, 00:35:24.830 "seek_data": false, 00:35:24.830 "copy": true, 00:35:24.830 "nvme_iov_md": false 00:35:24.830 }, 00:35:24.830 "memory_domains": [ 00:35:24.830 { 00:35:24.830 "dma_device_id": "system", 00:35:24.830 "dma_device_type": 1 00:35:24.830 }, 00:35:24.830 { 00:35:24.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:24.830 "dma_device_type": 2 00:35:24.830 } 00:35:24.830 ], 00:35:24.830 "driver_specific": {} 00:35:24.830 }' 00:35:24.830 01:01:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:24.830 01:01:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:24.830 01:01:47 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:24.830 01:01:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:24.830 01:01:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:25.119 01:01:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:25.119 01:01:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:25.119 01:01:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:25.119 01:01:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:25.119 01:01:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:25.119 01:01:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:25.119 01:01:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:25.119 01:01:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:35:25.377 [2024-07-25 01:01:47.906401] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:25.377 [2024-07-25 01:01:47.906560] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:25.377 [2024-07-25 01:01:47.906743] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:25.377 [2024-07-25 01:01:47.907026] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:25.377 [2024-07-25 01:01:47.907115] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name Existed_Raid, state offline 00:35:25.377 01:01:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 155592 00:35:25.377 01:01:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 155592 ']' 00:35:25.377 01:01:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 155592 00:35:25.377 01:01:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:35:25.377 01:01:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:25.377 01:01:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 155592 00:35:25.377 01:01:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:25.377 01:01:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:25.377 01:01:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 155592' 00:35:25.377 killing process with pid 155592 00:35:25.378 01:01:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 155592 00:35:25.378 [2024-07-25 01:01:47.955966] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:25.378 01:01:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 155592 00:35:25.946 [2024-07-25 01:01:48.370101] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:27.323 ************************************ 
00:35:27.323 END TEST raid5f_state_function_test_sb 00:35:27.323 ************************************ 00:35:27.323 01:01:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:35:27.323 00:35:27.323 real 0m32.201s 00:35:27.323 user 0m57.955s 00:35:27.323 sys 0m4.934s 00:35:27.323 01:01:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:27.323 01:01:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:27.323 01:01:49 bdev_raid -- bdev/bdev_raid.sh@888 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:35:27.323 01:01:49 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:35:27.323 01:01:49 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:27.323 01:01:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:27.323 ************************************ 00:35:27.323 START TEST raid5f_superblock_test 00:35:27.323 ************************************ 00:35:27.323 01:01:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid5f 4 00:35:27.323 01:01:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid5f 00:35:27.323 01:01:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:35:27.323 01:01:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:35:27.323 01:01:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:35:27.323 01:01:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:35:27.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:35:27.323 01:01:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:35:27.323 01:01:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:35:27.323 01:01:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:35:27.323 01:01:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:35:27.323 01:01:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:35:27.323 01:01:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:35:27.323 01:01:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:35:27.323 01:01:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:35:27.323 01:01:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid5f '!=' raid1 ']' 00:35:27.323 01:01:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:35:27.323 01:01:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:35:27.323 01:01:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=156670 00:35:27.323 01:01:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 156670 /var/tmp/spdk-raid.sock 00:35:27.323 01:01:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 156670 ']' 00:35:27.323 01:01:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:35:27.323 01:01:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:27.323 01:01:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:27.323 01:01:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:27.323 01:01:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:27.323 01:01:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:27.323 [2024-07-25 01:01:49.879864] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
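(A condensed sketch of the RPC sequence this raid5f_superblock_test trace drives against the bdev_svc app started above, assuming it is already listening on /var/tmp/spdk-raid.sock; the $rpc shell variable and the for-loop are conveniences introduced here for brevity and are not part of the original script, whose per-bdev calls appear individually further down in the trace.)

# shorthand for the rpc.py invocation used throughout the trace (convenience variable, not in the original)
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2 3 4; do
    # 32 MiB malloc bdev with 512-byte blocks, wrapped in a passthru bdev with a fixed UUID
    $rpc bdev_malloc_create 32 512 -b malloc$i
    $rpc bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
done
# assemble the raid5f volume with a 64k strip size and an on-disk superblock (-s)
$rpc bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
# verify it came up online with 4 of 4 base bdevs discovered, as the trace checks below
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

(Teardown later in the trace mirrors this with bdev_raid_delete raid_bdev1 followed by bdev_passthru_delete pt1..pt4.)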
00:35:27.323 [2024-07-25 01:01:49.880930] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156670 ] 00:35:27.581 [2024-07-25 01:01:50.064642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:27.839 [2024-07-25 01:01:50.304620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:28.097 [2024-07-25 01:01:50.511001] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:28.356 01:01:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:28.356 01:01:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:35:28.356 01:01:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:35:28.356 01:01:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:35:28.356 01:01:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:35:28.356 01:01:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:35:28.356 01:01:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:35:28.356 01:01:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:28.356 01:01:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:35:28.356 01:01:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:28.356 01:01:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:35:28.356 malloc1 00:35:28.356 01:01:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:28.614 [2024-07-25 01:01:51.172134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:28.614 [2024-07-25 01:01:51.172398] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:28.614 [2024-07-25 01:01:51.172485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:35:28.614 [2024-07-25 01:01:51.172767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:28.614 [2024-07-25 01:01:51.175185] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:28.614 [2024-07-25 01:01:51.175352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:28.614 pt1 00:35:28.614 01:01:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:35:28.614 01:01:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:35:28.614 01:01:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:35:28.614 01:01:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:35:28.614 01:01:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:35:28.614 01:01:51 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:28.614 01:01:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:35:28.614 01:01:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:28.614 01:01:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:35:28.873 malloc2 00:35:28.873 01:01:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:29.131 [2024-07-25 01:01:51.594314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:29.131 [2024-07-25 01:01:51.594590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:29.131 [2024-07-25 01:01:51.594668] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:35:29.131 [2024-07-25 01:01:51.594931] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:29.131 [2024-07-25 01:01:51.597263] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:29.131 [2024-07-25 01:01:51.597433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:29.131 pt2 00:35:29.132 01:01:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:35:29.132 01:01:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:35:29.132 01:01:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:35:29.132 01:01:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:35:29.132 01:01:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:35:29.132 01:01:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:29.132 01:01:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:35:29.132 01:01:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:29.132 01:01:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:35:29.390 malloc3 00:35:29.390 01:01:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:35:29.390 [2024-07-25 01:01:51.992155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:35:29.390 [2024-07-25 01:01:51.992416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:29.390 [2024-07-25 01:01:51.992494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:35:29.390 [2024-07-25 01:01:51.992588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:29.390 [2024-07-25 01:01:51.994895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:29.390 [2024-07-25 01:01:51.995073] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:35:29.390 pt3 00:35:29.390 01:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:35:29.390 01:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:35:29.390 01:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:35:29.390 01:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:35:29.390 01:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:35:29.390 01:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:29.390 01:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:35:29.390 01:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:29.390 01:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:35:29.649 malloc4 00:35:29.649 01:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:35:29.908 [2024-07-25 01:01:52.392097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:35:29.908 [2024-07-25 01:01:52.392345] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:29.908 [2024-07-25 01:01:52.392431] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:35:29.908 [2024-07-25 01:01:52.392553] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:29.908 [2024-07-25 01:01:52.394899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:29.908 [2024-07-25 01:01:52.395057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:35:29.908 pt4 00:35:29.908 01:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:35:29.908 01:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:35:29.908 01:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:35:30.167 [2024-07-25 01:01:52.576155] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:30.167 [2024-07-25 01:01:52.578296] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:30.167 [2024-07-25 01:01:52.578486] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:35:30.167 [2024-07-25 01:01:52.578615] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:35:30.167 [2024-07-25 01:01:52.578960] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:35:30.167 [2024-07-25 01:01:52.579072] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:35:30.167 [2024-07-25 01:01:52.579273] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:35:30.167 [2024-07-25 01:01:52.587109] bdev_raid.c:1750:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x616000009680 00:35:30.167 [2024-07-25 01:01:52.587236] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:35:30.168 [2024-07-25 01:01:52.587517] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:30.168 01:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:35:30.168 01:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:30.168 01:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:30.168 01:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:30.168 01:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:30.168 01:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:30.168 01:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:30.168 01:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:30.168 01:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:30.168 01:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:30.168 01:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:30.168 01:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:30.168 01:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:30.168 "name": "raid_bdev1", 00:35:30.168 "uuid": "7af6ef91-ae11-402b-ab92-001346d7129b", 00:35:30.168 "strip_size_kb": 64, 00:35:30.168 "state": "online", 00:35:30.168 "raid_level": "raid5f", 00:35:30.168 "superblock": true, 00:35:30.168 "num_base_bdevs": 4, 00:35:30.168 "num_base_bdevs_discovered": 4, 00:35:30.168 "num_base_bdevs_operational": 4, 00:35:30.168 "base_bdevs_list": [ 00:35:30.168 { 00:35:30.168 "name": "pt1", 00:35:30.168 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:30.168 "is_configured": true, 00:35:30.168 "data_offset": 2048, 00:35:30.168 "data_size": 63488 00:35:30.168 }, 00:35:30.168 { 00:35:30.168 "name": "pt2", 00:35:30.168 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:30.168 "is_configured": true, 00:35:30.168 "data_offset": 2048, 00:35:30.168 "data_size": 63488 00:35:30.168 }, 00:35:30.168 { 00:35:30.168 "name": "pt3", 00:35:30.168 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:30.168 "is_configured": true, 00:35:30.168 "data_offset": 2048, 00:35:30.168 "data_size": 63488 00:35:30.168 }, 00:35:30.168 { 00:35:30.168 "name": "pt4", 00:35:30.168 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:30.168 "is_configured": true, 00:35:30.168 "data_offset": 2048, 00:35:30.168 "data_size": 63488 00:35:30.168 } 00:35:30.168 ] 00:35:30.168 }' 00:35:30.168 01:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:30.168 01:01:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:30.736 01:01:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:35:30.736 01:01:53 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:35:30.736 01:01:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:35:30.736 01:01:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:35:30.736 01:01:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:35:30.736 01:01:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:35:30.736 01:01:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:35:30.736 01:01:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:30.994 [2024-07-25 01:01:53.601287] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:30.994 01:01:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:35:30.994 "name": "raid_bdev1", 00:35:30.994 "aliases": [ 00:35:30.994 "7af6ef91-ae11-402b-ab92-001346d7129b" 00:35:30.994 ], 00:35:30.994 "product_name": "Raid Volume", 00:35:30.994 "block_size": 512, 00:35:30.994 "num_blocks": 190464, 00:35:30.994 "uuid": "7af6ef91-ae11-402b-ab92-001346d7129b", 00:35:30.994 "assigned_rate_limits": { 00:35:30.994 "rw_ios_per_sec": 0, 00:35:30.994 "rw_mbytes_per_sec": 0, 00:35:30.994 "r_mbytes_per_sec": 0, 00:35:30.994 "w_mbytes_per_sec": 0 00:35:30.994 }, 00:35:30.994 "claimed": false, 00:35:30.994 "zoned": false, 00:35:30.994 "supported_io_types": { 00:35:30.994 "read": true, 00:35:30.994 "write": true, 00:35:30.995 "unmap": false, 00:35:30.995 "flush": false, 00:35:30.995 "reset": true, 00:35:30.995 "nvme_admin": false, 00:35:30.995 "nvme_io": false, 00:35:30.995 "nvme_io_md": false, 00:35:30.995 "write_zeroes": true, 00:35:30.995 "zcopy": false, 00:35:30.995 "get_zone_info": false, 00:35:30.995 "zone_management": false, 00:35:30.995 "zone_append": false, 00:35:30.995 "compare": false, 00:35:30.995 "compare_and_write": false, 00:35:30.995 "abort": false, 00:35:30.995 "seek_hole": false, 00:35:30.995 "seek_data": false, 00:35:30.995 "copy": false, 00:35:30.995 "nvme_iov_md": false 00:35:30.995 }, 00:35:30.995 "driver_specific": { 00:35:30.995 "raid": { 00:35:30.995 "uuid": "7af6ef91-ae11-402b-ab92-001346d7129b", 00:35:30.995 "strip_size_kb": 64, 00:35:30.995 "state": "online", 00:35:30.995 "raid_level": "raid5f", 00:35:30.995 "superblock": true, 00:35:30.995 "num_base_bdevs": 4, 00:35:30.995 "num_base_bdevs_discovered": 4, 00:35:30.995 "num_base_bdevs_operational": 4, 00:35:30.995 "base_bdevs_list": [ 00:35:30.995 { 00:35:30.995 "name": "pt1", 00:35:30.995 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:30.995 "is_configured": true, 00:35:30.995 "data_offset": 2048, 00:35:30.995 "data_size": 63488 00:35:30.995 }, 00:35:30.995 { 00:35:30.995 "name": "pt2", 00:35:30.995 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:30.995 "is_configured": true, 00:35:30.995 "data_offset": 2048, 00:35:30.995 "data_size": 63488 00:35:30.995 }, 00:35:30.995 { 00:35:30.995 "name": "pt3", 00:35:30.995 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:30.995 "is_configured": true, 00:35:30.995 "data_offset": 2048, 00:35:30.995 "data_size": 63488 00:35:30.995 }, 00:35:30.995 { 00:35:30.995 "name": "pt4", 00:35:30.995 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:30.995 "is_configured": true, 00:35:30.995 "data_offset": 2048, 00:35:30.995 "data_size": 63488 00:35:30.995 } 00:35:30.995 ] 
00:35:30.995 } 00:35:30.995 } 00:35:30.995 }' 00:35:30.995 01:01:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:31.253 01:01:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:35:31.253 pt2 00:35:31.253 pt3 00:35:31.253 pt4' 00:35:31.253 01:01:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:31.253 01:01:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:35:31.253 01:01:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:31.511 01:01:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:31.511 "name": "pt1", 00:35:31.511 "aliases": [ 00:35:31.511 "00000000-0000-0000-0000-000000000001" 00:35:31.511 ], 00:35:31.511 "product_name": "passthru", 00:35:31.511 "block_size": 512, 00:35:31.511 "num_blocks": 65536, 00:35:31.511 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:31.511 "assigned_rate_limits": { 00:35:31.511 "rw_ios_per_sec": 0, 00:35:31.511 "rw_mbytes_per_sec": 0, 00:35:31.511 "r_mbytes_per_sec": 0, 00:35:31.511 "w_mbytes_per_sec": 0 00:35:31.511 }, 00:35:31.511 "claimed": true, 00:35:31.511 "claim_type": "exclusive_write", 00:35:31.511 "zoned": false, 00:35:31.511 "supported_io_types": { 00:35:31.511 "read": true, 00:35:31.511 "write": true, 00:35:31.511 "unmap": true, 00:35:31.511 "flush": true, 00:35:31.511 "reset": true, 00:35:31.511 "nvme_admin": false, 00:35:31.511 "nvme_io": false, 00:35:31.511 "nvme_io_md": false, 00:35:31.511 "write_zeroes": true, 00:35:31.511 "zcopy": true, 00:35:31.511 "get_zone_info": false, 00:35:31.511 "zone_management": false, 00:35:31.511 "zone_append": false, 00:35:31.511 "compare": false, 00:35:31.511 "compare_and_write": false, 00:35:31.511 "abort": true, 00:35:31.511 "seek_hole": false, 00:35:31.511 "seek_data": false, 00:35:31.511 "copy": true, 00:35:31.511 "nvme_iov_md": false 00:35:31.511 }, 00:35:31.511 "memory_domains": [ 00:35:31.511 { 00:35:31.511 "dma_device_id": "system", 00:35:31.511 "dma_device_type": 1 00:35:31.511 }, 00:35:31.511 { 00:35:31.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:31.511 "dma_device_type": 2 00:35:31.511 } 00:35:31.511 ], 00:35:31.511 "driver_specific": { 00:35:31.511 "passthru": { 00:35:31.511 "name": "pt1", 00:35:31.511 "base_bdev_name": "malloc1" 00:35:31.511 } 00:35:31.511 } 00:35:31.511 }' 00:35:31.511 01:01:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:31.511 01:01:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:31.511 01:01:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:31.511 01:01:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:31.511 01:01:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:31.511 01:01:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:31.511 01:01:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:31.511 01:01:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:31.780 01:01:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:31.780 01:01:54 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:31.780 01:01:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:31.780 01:01:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:31.780 01:01:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:31.780 01:01:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:35:31.780 01:01:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:31.780 01:01:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:31.780 "name": "pt2", 00:35:31.780 "aliases": [ 00:35:31.780 "00000000-0000-0000-0000-000000000002" 00:35:31.780 ], 00:35:31.780 "product_name": "passthru", 00:35:31.780 "block_size": 512, 00:35:31.780 "num_blocks": 65536, 00:35:31.780 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:31.780 "assigned_rate_limits": { 00:35:31.780 "rw_ios_per_sec": 0, 00:35:31.780 "rw_mbytes_per_sec": 0, 00:35:31.780 "r_mbytes_per_sec": 0, 00:35:31.780 "w_mbytes_per_sec": 0 00:35:31.780 }, 00:35:31.780 "claimed": true, 00:35:31.780 "claim_type": "exclusive_write", 00:35:31.780 "zoned": false, 00:35:31.780 "supported_io_types": { 00:35:31.780 "read": true, 00:35:31.780 "write": true, 00:35:31.780 "unmap": true, 00:35:31.780 "flush": true, 00:35:31.780 "reset": true, 00:35:31.780 "nvme_admin": false, 00:35:31.780 "nvme_io": false, 00:35:31.780 "nvme_io_md": false, 00:35:31.780 "write_zeroes": true, 00:35:31.780 "zcopy": true, 00:35:31.780 "get_zone_info": false, 00:35:31.780 "zone_management": false, 00:35:31.780 "zone_append": false, 00:35:31.780 "compare": false, 00:35:31.780 "compare_and_write": false, 00:35:31.780 "abort": true, 00:35:31.780 "seek_hole": false, 00:35:31.780 "seek_data": false, 00:35:31.780 "copy": true, 00:35:31.780 "nvme_iov_md": false 00:35:31.780 }, 00:35:31.780 "memory_domains": [ 00:35:31.780 { 00:35:31.780 "dma_device_id": "system", 00:35:31.780 "dma_device_type": 1 00:35:31.780 }, 00:35:31.780 { 00:35:31.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:31.780 "dma_device_type": 2 00:35:31.780 } 00:35:31.780 ], 00:35:31.780 "driver_specific": { 00:35:31.780 "passthru": { 00:35:31.780 "name": "pt2", 00:35:31.780 "base_bdev_name": "malloc2" 00:35:31.780 } 00:35:31.780 } 00:35:31.780 }' 00:35:31.780 01:01:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:32.053 01:01:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:32.053 01:01:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:32.053 01:01:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:32.053 01:01:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:32.053 01:01:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:32.053 01:01:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:32.053 01:01:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:32.053 01:01:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:32.053 01:01:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:32.053 01:01:54 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:32.312 01:01:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:32.312 01:01:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:32.312 01:01:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:35:32.312 01:01:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:32.312 01:01:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:32.312 "name": "pt3", 00:35:32.312 "aliases": [ 00:35:32.312 "00000000-0000-0000-0000-000000000003" 00:35:32.312 ], 00:35:32.312 "product_name": "passthru", 00:35:32.312 "block_size": 512, 00:35:32.312 "num_blocks": 65536, 00:35:32.312 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:32.312 "assigned_rate_limits": { 00:35:32.312 "rw_ios_per_sec": 0, 00:35:32.312 "rw_mbytes_per_sec": 0, 00:35:32.312 "r_mbytes_per_sec": 0, 00:35:32.312 "w_mbytes_per_sec": 0 00:35:32.312 }, 00:35:32.312 "claimed": true, 00:35:32.312 "claim_type": "exclusive_write", 00:35:32.312 "zoned": false, 00:35:32.312 "supported_io_types": { 00:35:32.312 "read": true, 00:35:32.312 "write": true, 00:35:32.312 "unmap": true, 00:35:32.312 "flush": true, 00:35:32.312 "reset": true, 00:35:32.312 "nvme_admin": false, 00:35:32.312 "nvme_io": false, 00:35:32.312 "nvme_io_md": false, 00:35:32.312 "write_zeroes": true, 00:35:32.312 "zcopy": true, 00:35:32.312 "get_zone_info": false, 00:35:32.312 "zone_management": false, 00:35:32.312 "zone_append": false, 00:35:32.312 "compare": false, 00:35:32.312 "compare_and_write": false, 00:35:32.312 "abort": true, 00:35:32.312 "seek_hole": false, 00:35:32.312 "seek_data": false, 00:35:32.312 "copy": true, 00:35:32.312 "nvme_iov_md": false 00:35:32.312 }, 00:35:32.312 "memory_domains": [ 00:35:32.312 { 00:35:32.312 "dma_device_id": "system", 00:35:32.312 "dma_device_type": 1 00:35:32.312 }, 00:35:32.312 { 00:35:32.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:32.312 "dma_device_type": 2 00:35:32.312 } 00:35:32.312 ], 00:35:32.312 "driver_specific": { 00:35:32.312 "passthru": { 00:35:32.312 "name": "pt3", 00:35:32.312 "base_bdev_name": "malloc3" 00:35:32.312 } 00:35:32.312 } 00:35:32.312 }' 00:35:32.312 01:01:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:32.312 01:01:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:32.571 01:01:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:32.571 01:01:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:32.571 01:01:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:32.571 01:01:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:32.572 01:01:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:32.572 01:01:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:32.572 01:01:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:32.572 01:01:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:32.572 01:01:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:32.830 01:01:55 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:32.830 01:01:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:32.830 01:01:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:32.830 01:01:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:35:33.089 01:01:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:33.089 "name": "pt4", 00:35:33.089 "aliases": [ 00:35:33.089 "00000000-0000-0000-0000-000000000004" 00:35:33.090 ], 00:35:33.090 "product_name": "passthru", 00:35:33.090 "block_size": 512, 00:35:33.090 "num_blocks": 65536, 00:35:33.090 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:33.090 "assigned_rate_limits": { 00:35:33.090 "rw_ios_per_sec": 0, 00:35:33.090 "rw_mbytes_per_sec": 0, 00:35:33.090 "r_mbytes_per_sec": 0, 00:35:33.090 "w_mbytes_per_sec": 0 00:35:33.090 }, 00:35:33.090 "claimed": true, 00:35:33.090 "claim_type": "exclusive_write", 00:35:33.090 "zoned": false, 00:35:33.090 "supported_io_types": { 00:35:33.090 "read": true, 00:35:33.090 "write": true, 00:35:33.090 "unmap": true, 00:35:33.090 "flush": true, 00:35:33.090 "reset": true, 00:35:33.090 "nvme_admin": false, 00:35:33.090 "nvme_io": false, 00:35:33.090 "nvme_io_md": false, 00:35:33.090 "write_zeroes": true, 00:35:33.090 "zcopy": true, 00:35:33.090 "get_zone_info": false, 00:35:33.090 "zone_management": false, 00:35:33.090 "zone_append": false, 00:35:33.090 "compare": false, 00:35:33.090 "compare_and_write": false, 00:35:33.090 "abort": true, 00:35:33.090 "seek_hole": false, 00:35:33.090 "seek_data": false, 00:35:33.090 "copy": true, 00:35:33.090 "nvme_iov_md": false 00:35:33.090 }, 00:35:33.090 "memory_domains": [ 00:35:33.090 { 00:35:33.090 "dma_device_id": "system", 00:35:33.090 "dma_device_type": 1 00:35:33.090 }, 00:35:33.090 { 00:35:33.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:33.090 "dma_device_type": 2 00:35:33.090 } 00:35:33.090 ], 00:35:33.090 "driver_specific": { 00:35:33.090 "passthru": { 00:35:33.090 "name": "pt4", 00:35:33.090 "base_bdev_name": "malloc4" 00:35:33.090 } 00:35:33.090 } 00:35:33.090 }' 00:35:33.090 01:01:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:33.090 01:01:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:33.090 01:01:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:33.090 01:01:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:33.090 01:01:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:33.090 01:01:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:33.090 01:01:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:33.090 01:01:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:33.348 01:01:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:33.348 01:01:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:33.348 01:01:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:33.348 01:01:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:33.348 01:01:55 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:35:33.348 01:01:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:33.348 [2024-07-25 01:01:55.997754] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:33.607 01:01:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=7af6ef91-ae11-402b-ab92-001346d7129b 00:35:33.607 01:01:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 7af6ef91-ae11-402b-ab92-001346d7129b ']' 00:35:33.607 01:01:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:33.607 [2024-07-25 01:01:56.253652] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:33.607 [2024-07-25 01:01:56.253855] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:33.607 [2024-07-25 01:01:56.254063] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:33.608 [2024-07-25 01:01:56.254254] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:33.608 [2024-07-25 01:01:56.254342] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:35:33.866 01:01:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:35:33.866 01:01:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:33.866 01:01:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:35:33.866 01:01:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:35:33.866 01:01:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:35:33.866 01:01:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:35:34.125 01:01:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:35:34.125 01:01:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:35:34.384 01:01:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:35:34.384 01:01:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:35:34.643 01:01:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:35:34.643 01:01:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:35:34.902 01:01:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:35:34.903 01:01:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:35:34.903 01:01:57 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:35:34.903 01:01:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:35:34.903 01:01:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:35:34.903 01:01:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:35:34.903 01:01:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:34.903 01:01:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:34.903 01:01:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:34.903 01:01:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:34.903 01:01:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:34.903 01:01:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:34.903 01:01:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:34.903 01:01:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:35:34.903 01:01:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:35:35.162 [2024-07-25 01:01:57.705847] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:35:35.162 [2024-07-25 01:01:57.707936] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:35:35.162 [2024-07-25 01:01:57.708117] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:35:35.162 [2024-07-25 01:01:57.708181] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:35:35.162 [2024-07-25 01:01:57.708307] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:35:35.162 [2024-07-25 01:01:57.708434] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:35:35.162 [2024-07-25 01:01:57.708596] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:35:35.162 [2024-07-25 01:01:57.708753] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:35:35.162 [2024-07-25 01:01:57.708869] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:35.162 [2024-07-25 01:01:57.708909] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:35:35.162 request: 00:35:35.162 { 00:35:35.162 "name": "raid_bdev1", 00:35:35.162 "raid_level": "raid5f", 
00:35:35.162 "base_bdevs": [ 00:35:35.162 "malloc1", 00:35:35.162 "malloc2", 00:35:35.162 "malloc3", 00:35:35.162 "malloc4" 00:35:35.162 ], 00:35:35.162 "strip_size_kb": 64, 00:35:35.162 "superblock": false, 00:35:35.162 "method": "bdev_raid_create", 00:35:35.162 "req_id": 1 00:35:35.162 } 00:35:35.162 Got JSON-RPC error response 00:35:35.162 response: 00:35:35.162 { 00:35:35.162 "code": -17, 00:35:35.162 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:35:35.162 } 00:35:35.162 01:01:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:35:35.162 01:01:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:35.162 01:01:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:35.162 01:01:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:35.162 01:01:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:35:35.162 01:01:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:35.422 01:01:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:35:35.422 01:01:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:35:35.422 01:01:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:35.681 [2024-07-25 01:01:58.145932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:35.681 [2024-07-25 01:01:58.146206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:35.681 [2024-07-25 01:01:58.146287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:35:35.681 [2024-07-25 01:01:58.146404] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:35.681 [2024-07-25 01:01:58.148641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:35.681 [2024-07-25 01:01:58.148803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:35.681 [2024-07-25 01:01:58.149000] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:35:35.681 [2024-07-25 01:01:58.149170] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:35.681 pt1 00:35:35.681 01:01:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:35:35.681 01:01:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:35.681 01:01:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:35.681 01:01:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:35.681 01:01:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:35.681 01:01:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:35.681 01:01:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:35.681 01:01:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:35.681 01:01:58 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:35.681 01:01:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:35.681 01:01:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:35.681 01:01:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:35.940 01:01:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:35.940 "name": "raid_bdev1", 00:35:35.940 "uuid": "7af6ef91-ae11-402b-ab92-001346d7129b", 00:35:35.940 "strip_size_kb": 64, 00:35:35.940 "state": "configuring", 00:35:35.940 "raid_level": "raid5f", 00:35:35.940 "superblock": true, 00:35:35.940 "num_base_bdevs": 4, 00:35:35.940 "num_base_bdevs_discovered": 1, 00:35:35.940 "num_base_bdevs_operational": 4, 00:35:35.940 "base_bdevs_list": [ 00:35:35.940 { 00:35:35.940 "name": "pt1", 00:35:35.941 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:35.941 "is_configured": true, 00:35:35.941 "data_offset": 2048, 00:35:35.941 "data_size": 63488 00:35:35.941 }, 00:35:35.941 { 00:35:35.941 "name": null, 00:35:35.941 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:35.941 "is_configured": false, 00:35:35.941 "data_offset": 2048, 00:35:35.941 "data_size": 63488 00:35:35.941 }, 00:35:35.941 { 00:35:35.941 "name": null, 00:35:35.941 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:35.941 "is_configured": false, 00:35:35.941 "data_offset": 2048, 00:35:35.941 "data_size": 63488 00:35:35.941 }, 00:35:35.941 { 00:35:35.941 "name": null, 00:35:35.941 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:35.941 "is_configured": false, 00:35:35.941 "data_offset": 2048, 00:35:35.941 "data_size": 63488 00:35:35.941 } 00:35:35.941 ] 00:35:35.941 }' 00:35:35.941 01:01:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:35.941 01:01:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:36.200 01:01:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:35:36.200 01:01:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:36.458 [2024-07-25 01:01:58.974099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:36.458 [2024-07-25 01:01:58.974342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:36.458 [2024-07-25 01:01:58.974418] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:35:36.458 [2024-07-25 01:01:58.974527] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:36.458 [2024-07-25 01:01:58.975002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:36.458 [2024-07-25 01:01:58.975142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:36.458 [2024-07-25 01:01:58.975338] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:36.458 [2024-07-25 01:01:58.975432] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:36.458 pt2 00:35:36.458 01:01:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:35:36.718 [2024-07-25 01:01:59.158192] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:35:36.718 01:01:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:35:36.718 01:01:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:36.718 01:01:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:36.718 01:01:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:36.718 01:01:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:36.718 01:01:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:36.718 01:01:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:36.718 01:01:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:36.718 01:01:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:36.718 01:01:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:36.718 01:01:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:36.718 01:01:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:36.718 01:01:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:36.718 "name": "raid_bdev1", 00:35:36.718 "uuid": "7af6ef91-ae11-402b-ab92-001346d7129b", 00:35:36.718 "strip_size_kb": 64, 00:35:36.718 "state": "configuring", 00:35:36.718 "raid_level": "raid5f", 00:35:36.718 "superblock": true, 00:35:36.718 "num_base_bdevs": 4, 00:35:36.718 "num_base_bdevs_discovered": 1, 00:35:36.718 "num_base_bdevs_operational": 4, 00:35:36.718 "base_bdevs_list": [ 00:35:36.718 { 00:35:36.718 "name": "pt1", 00:35:36.718 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:36.718 "is_configured": true, 00:35:36.718 "data_offset": 2048, 00:35:36.718 "data_size": 63488 00:35:36.718 }, 00:35:36.718 { 00:35:36.718 "name": null, 00:35:36.718 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:36.718 "is_configured": false, 00:35:36.718 "data_offset": 2048, 00:35:36.718 "data_size": 63488 00:35:36.718 }, 00:35:36.718 { 00:35:36.718 "name": null, 00:35:36.718 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:36.718 "is_configured": false, 00:35:36.718 "data_offset": 2048, 00:35:36.718 "data_size": 63488 00:35:36.718 }, 00:35:36.718 { 00:35:36.718 "name": null, 00:35:36.718 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:36.718 "is_configured": false, 00:35:36.718 "data_offset": 2048, 00:35:36.718 "data_size": 63488 00:35:36.718 } 00:35:36.718 ] 00:35:36.718 }' 00:35:36.718 01:01:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:36.718 01:01:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:37.655 01:01:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:35:37.655 01:01:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:35:37.655 01:01:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:37.655 [2024-07-25 01:02:00.126339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:37.655 [2024-07-25 01:02:00.126593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:37.655 [2024-07-25 01:02:00.126662] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:35:37.655 [2024-07-25 01:02:00.126772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:37.655 [2024-07-25 01:02:00.127228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:37.655 [2024-07-25 01:02:00.127371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:37.655 [2024-07-25 01:02:00.127554] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:37.655 [2024-07-25 01:02:00.127675] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:37.655 pt2 00:35:37.655 01:02:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:35:37.655 01:02:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:35:37.655 01:02:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:35:37.914 [2024-07-25 01:02:00.362395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:35:37.914 [2024-07-25 01:02:00.362594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:37.914 [2024-07-25 01:02:00.362694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:35:37.914 [2024-07-25 01:02:00.362810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:37.914 [2024-07-25 01:02:00.363325] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:37.914 [2024-07-25 01:02:00.363472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:35:37.914 [2024-07-25 01:02:00.363661] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:35:37.914 [2024-07-25 01:02:00.363778] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:35:37.914 pt3 00:35:37.914 01:02:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:35:37.914 01:02:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:35:37.914 01:02:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:35:37.914 [2024-07-25 01:02:00.550413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:35:37.914 [2024-07-25 01:02:00.550614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:37.914 [2024-07-25 01:02:00.550673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:35:37.914 [2024-07-25 01:02:00.550778] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:37.914 [2024-07-25 01:02:00.551294] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:37.914 [2024-07-25 01:02:00.551442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:35:37.914 [2024-07-25 01:02:00.551620] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:35:37.914 [2024-07-25 01:02:00.551723] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:35:37.914 [2024-07-25 01:02:00.551891] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:35:37.914 [2024-07-25 01:02:00.552058] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:35:37.914 [2024-07-25 01:02:00.552197] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:35:37.914 [2024-07-25 01:02:00.559635] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:35:37.914 [2024-07-25 01:02:00.559753] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:35:37.914 [2024-07-25 01:02:00.559993] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:37.914 pt4 00:35:38.173 01:02:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:35:38.173 01:02:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:35:38.173 01:02:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:35:38.173 01:02:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:38.173 01:02:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:38.173 01:02:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:38.173 01:02:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:38.173 01:02:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:38.173 01:02:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:38.173 01:02:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:38.173 01:02:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:38.173 01:02:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:38.173 01:02:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:38.173 01:02:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:38.432 01:02:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:38.432 "name": "raid_bdev1", 00:35:38.433 "uuid": "7af6ef91-ae11-402b-ab92-001346d7129b", 00:35:38.433 "strip_size_kb": 64, 00:35:38.433 "state": "online", 00:35:38.433 "raid_level": "raid5f", 00:35:38.433 "superblock": true, 00:35:38.433 "num_base_bdevs": 4, 00:35:38.433 "num_base_bdevs_discovered": 4, 00:35:38.433 "num_base_bdevs_operational": 4, 00:35:38.433 "base_bdevs_list": [ 00:35:38.433 { 00:35:38.433 "name": "pt1", 00:35:38.433 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:38.433 "is_configured": true, 00:35:38.433 "data_offset": 2048, 00:35:38.433 "data_size": 
63488 00:35:38.433 }, 00:35:38.433 { 00:35:38.433 "name": "pt2", 00:35:38.433 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:38.433 "is_configured": true, 00:35:38.433 "data_offset": 2048, 00:35:38.433 "data_size": 63488 00:35:38.433 }, 00:35:38.433 { 00:35:38.433 "name": "pt3", 00:35:38.433 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:38.433 "is_configured": true, 00:35:38.433 "data_offset": 2048, 00:35:38.433 "data_size": 63488 00:35:38.433 }, 00:35:38.433 { 00:35:38.433 "name": "pt4", 00:35:38.433 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:38.433 "is_configured": true, 00:35:38.433 "data_offset": 2048, 00:35:38.433 "data_size": 63488 00:35:38.433 } 00:35:38.433 ] 00:35:38.433 }' 00:35:38.433 01:02:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:38.433 01:02:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:38.711 01:02:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:35:38.711 01:02:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:35:38.711 01:02:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:35:38.711 01:02:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:35:38.711 01:02:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:35:38.711 01:02:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:35:38.992 01:02:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:38.992 01:02:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:35:38.992 [2024-07-25 01:02:01.530351] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:38.992 01:02:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:35:38.992 "name": "raid_bdev1", 00:35:38.992 "aliases": [ 00:35:38.992 "7af6ef91-ae11-402b-ab92-001346d7129b" 00:35:38.992 ], 00:35:38.992 "product_name": "Raid Volume", 00:35:38.992 "block_size": 512, 00:35:38.992 "num_blocks": 190464, 00:35:38.992 "uuid": "7af6ef91-ae11-402b-ab92-001346d7129b", 00:35:38.992 "assigned_rate_limits": { 00:35:38.992 "rw_ios_per_sec": 0, 00:35:38.992 "rw_mbytes_per_sec": 0, 00:35:38.992 "r_mbytes_per_sec": 0, 00:35:38.992 "w_mbytes_per_sec": 0 00:35:38.992 }, 00:35:38.992 "claimed": false, 00:35:38.992 "zoned": false, 00:35:38.992 "supported_io_types": { 00:35:38.992 "read": true, 00:35:38.992 "write": true, 00:35:38.992 "unmap": false, 00:35:38.992 "flush": false, 00:35:38.992 "reset": true, 00:35:38.992 "nvme_admin": false, 00:35:38.992 "nvme_io": false, 00:35:38.992 "nvme_io_md": false, 00:35:38.992 "write_zeroes": true, 00:35:38.992 "zcopy": false, 00:35:38.992 "get_zone_info": false, 00:35:38.992 "zone_management": false, 00:35:38.992 "zone_append": false, 00:35:38.992 "compare": false, 00:35:38.992 "compare_and_write": false, 00:35:38.992 "abort": false, 00:35:38.992 "seek_hole": false, 00:35:38.992 "seek_data": false, 00:35:38.992 "copy": false, 00:35:38.992 "nvme_iov_md": false 00:35:38.992 }, 00:35:38.992 "driver_specific": { 00:35:38.992 "raid": { 00:35:38.992 "uuid": "7af6ef91-ae11-402b-ab92-001346d7129b", 00:35:38.992 "strip_size_kb": 64, 00:35:38.992 "state": "online", 00:35:38.992 "raid_level": "raid5f", 
00:35:38.992 "superblock": true, 00:35:38.992 "num_base_bdevs": 4, 00:35:38.992 "num_base_bdevs_discovered": 4, 00:35:38.992 "num_base_bdevs_operational": 4, 00:35:38.992 "base_bdevs_list": [ 00:35:38.992 { 00:35:38.992 "name": "pt1", 00:35:38.992 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:38.992 "is_configured": true, 00:35:38.992 "data_offset": 2048, 00:35:38.992 "data_size": 63488 00:35:38.992 }, 00:35:38.992 { 00:35:38.992 "name": "pt2", 00:35:38.992 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:38.992 "is_configured": true, 00:35:38.992 "data_offset": 2048, 00:35:38.992 "data_size": 63488 00:35:38.992 }, 00:35:38.992 { 00:35:38.992 "name": "pt3", 00:35:38.992 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:38.992 "is_configured": true, 00:35:38.992 "data_offset": 2048, 00:35:38.992 "data_size": 63488 00:35:38.992 }, 00:35:38.992 { 00:35:38.992 "name": "pt4", 00:35:38.992 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:38.992 "is_configured": true, 00:35:38.992 "data_offset": 2048, 00:35:38.992 "data_size": 63488 00:35:38.992 } 00:35:38.992 ] 00:35:38.992 } 00:35:38.992 } 00:35:38.992 }' 00:35:38.992 01:02:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:38.992 01:02:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:35:38.992 pt2 00:35:38.992 pt3 00:35:38.992 pt4' 00:35:38.992 01:02:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:38.992 01:02:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:38.993 01:02:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:35:39.251 01:02:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:39.251 "name": "pt1", 00:35:39.251 "aliases": [ 00:35:39.251 "00000000-0000-0000-0000-000000000001" 00:35:39.251 ], 00:35:39.251 "product_name": "passthru", 00:35:39.251 "block_size": 512, 00:35:39.251 "num_blocks": 65536, 00:35:39.251 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:39.251 "assigned_rate_limits": { 00:35:39.251 "rw_ios_per_sec": 0, 00:35:39.251 "rw_mbytes_per_sec": 0, 00:35:39.251 "r_mbytes_per_sec": 0, 00:35:39.251 "w_mbytes_per_sec": 0 00:35:39.251 }, 00:35:39.251 "claimed": true, 00:35:39.251 "claim_type": "exclusive_write", 00:35:39.251 "zoned": false, 00:35:39.251 "supported_io_types": { 00:35:39.251 "read": true, 00:35:39.251 "write": true, 00:35:39.251 "unmap": true, 00:35:39.251 "flush": true, 00:35:39.251 "reset": true, 00:35:39.251 "nvme_admin": false, 00:35:39.251 "nvme_io": false, 00:35:39.251 "nvme_io_md": false, 00:35:39.251 "write_zeroes": true, 00:35:39.251 "zcopy": true, 00:35:39.251 "get_zone_info": false, 00:35:39.251 "zone_management": false, 00:35:39.251 "zone_append": false, 00:35:39.251 "compare": false, 00:35:39.251 "compare_and_write": false, 00:35:39.251 "abort": true, 00:35:39.251 "seek_hole": false, 00:35:39.251 "seek_data": false, 00:35:39.251 "copy": true, 00:35:39.251 "nvme_iov_md": false 00:35:39.251 }, 00:35:39.251 "memory_domains": [ 00:35:39.251 { 00:35:39.251 "dma_device_id": "system", 00:35:39.251 "dma_device_type": 1 00:35:39.251 }, 00:35:39.251 { 00:35:39.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:39.251 "dma_device_type": 2 00:35:39.251 } 00:35:39.251 ], 00:35:39.251 "driver_specific": { 
00:35:39.251 "passthru": { 00:35:39.251 "name": "pt1", 00:35:39.251 "base_bdev_name": "malloc1" 00:35:39.251 } 00:35:39.251 } 00:35:39.251 }' 00:35:39.251 01:02:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:39.510 01:02:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:39.510 01:02:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:39.510 01:02:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:39.510 01:02:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:39.510 01:02:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:39.510 01:02:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:39.510 01:02:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:39.510 01:02:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:39.510 01:02:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:39.510 01:02:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:39.769 01:02:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:39.769 01:02:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:39.769 01:02:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:35:39.769 01:02:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:40.028 01:02:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:40.028 "name": "pt2", 00:35:40.028 "aliases": [ 00:35:40.028 "00000000-0000-0000-0000-000000000002" 00:35:40.028 ], 00:35:40.028 "product_name": "passthru", 00:35:40.028 "block_size": 512, 00:35:40.028 "num_blocks": 65536, 00:35:40.028 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:40.028 "assigned_rate_limits": { 00:35:40.028 "rw_ios_per_sec": 0, 00:35:40.028 "rw_mbytes_per_sec": 0, 00:35:40.028 "r_mbytes_per_sec": 0, 00:35:40.028 "w_mbytes_per_sec": 0 00:35:40.028 }, 00:35:40.028 "claimed": true, 00:35:40.028 "claim_type": "exclusive_write", 00:35:40.028 "zoned": false, 00:35:40.028 "supported_io_types": { 00:35:40.028 "read": true, 00:35:40.028 "write": true, 00:35:40.028 "unmap": true, 00:35:40.028 "flush": true, 00:35:40.028 "reset": true, 00:35:40.028 "nvme_admin": false, 00:35:40.028 "nvme_io": false, 00:35:40.028 "nvme_io_md": false, 00:35:40.028 "write_zeroes": true, 00:35:40.028 "zcopy": true, 00:35:40.028 "get_zone_info": false, 00:35:40.028 "zone_management": false, 00:35:40.028 "zone_append": false, 00:35:40.028 "compare": false, 00:35:40.028 "compare_and_write": false, 00:35:40.028 "abort": true, 00:35:40.028 "seek_hole": false, 00:35:40.028 "seek_data": false, 00:35:40.028 "copy": true, 00:35:40.028 "nvme_iov_md": false 00:35:40.028 }, 00:35:40.028 "memory_domains": [ 00:35:40.028 { 00:35:40.028 "dma_device_id": "system", 00:35:40.028 "dma_device_type": 1 00:35:40.028 }, 00:35:40.028 { 00:35:40.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:40.028 "dma_device_type": 2 00:35:40.028 } 00:35:40.028 ], 00:35:40.028 "driver_specific": { 00:35:40.028 "passthru": { 00:35:40.028 "name": "pt2", 00:35:40.028 "base_bdev_name": "malloc2" 00:35:40.028 } 
00:35:40.028 } 00:35:40.028 }' 00:35:40.028 01:02:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:40.028 01:02:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:40.028 01:02:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:40.028 01:02:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:40.028 01:02:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:40.028 01:02:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:40.028 01:02:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:40.028 01:02:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:40.287 01:02:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:40.287 01:02:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:40.287 01:02:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:40.287 01:02:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:40.287 01:02:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:40.287 01:02:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:35:40.287 01:02:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:40.545 01:02:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:40.545 "name": "pt3", 00:35:40.545 "aliases": [ 00:35:40.545 "00000000-0000-0000-0000-000000000003" 00:35:40.545 ], 00:35:40.545 "product_name": "passthru", 00:35:40.545 "block_size": 512, 00:35:40.545 "num_blocks": 65536, 00:35:40.545 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:40.545 "assigned_rate_limits": { 00:35:40.545 "rw_ios_per_sec": 0, 00:35:40.545 "rw_mbytes_per_sec": 0, 00:35:40.545 "r_mbytes_per_sec": 0, 00:35:40.545 "w_mbytes_per_sec": 0 00:35:40.545 }, 00:35:40.545 "claimed": true, 00:35:40.545 "claim_type": "exclusive_write", 00:35:40.545 "zoned": false, 00:35:40.545 "supported_io_types": { 00:35:40.545 "read": true, 00:35:40.545 "write": true, 00:35:40.545 "unmap": true, 00:35:40.545 "flush": true, 00:35:40.546 "reset": true, 00:35:40.546 "nvme_admin": false, 00:35:40.546 "nvme_io": false, 00:35:40.546 "nvme_io_md": false, 00:35:40.546 "write_zeroes": true, 00:35:40.546 "zcopy": true, 00:35:40.546 "get_zone_info": false, 00:35:40.546 "zone_management": false, 00:35:40.546 "zone_append": false, 00:35:40.546 "compare": false, 00:35:40.546 "compare_and_write": false, 00:35:40.546 "abort": true, 00:35:40.546 "seek_hole": false, 00:35:40.546 "seek_data": false, 00:35:40.546 "copy": true, 00:35:40.546 "nvme_iov_md": false 00:35:40.546 }, 00:35:40.546 "memory_domains": [ 00:35:40.546 { 00:35:40.546 "dma_device_id": "system", 00:35:40.546 "dma_device_type": 1 00:35:40.546 }, 00:35:40.546 { 00:35:40.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:40.546 "dma_device_type": 2 00:35:40.546 } 00:35:40.546 ], 00:35:40.546 "driver_specific": { 00:35:40.546 "passthru": { 00:35:40.546 "name": "pt3", 00:35:40.546 "base_bdev_name": "malloc3" 00:35:40.546 } 00:35:40.546 } 00:35:40.546 }' 00:35:40.546 01:02:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:35:40.546 01:02:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:40.546 01:02:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:40.546 01:02:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:40.546 01:02:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:40.546 01:02:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:40.546 01:02:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:40.815 01:02:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:40.815 01:02:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:40.815 01:02:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:40.815 01:02:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:40.815 01:02:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:40.815 01:02:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:40.815 01:02:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:40.815 01:02:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:35:41.074 01:02:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:41.074 "name": "pt4", 00:35:41.074 "aliases": [ 00:35:41.074 "00000000-0000-0000-0000-000000000004" 00:35:41.074 ], 00:35:41.074 "product_name": "passthru", 00:35:41.074 "block_size": 512, 00:35:41.074 "num_blocks": 65536, 00:35:41.074 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:41.074 "assigned_rate_limits": { 00:35:41.074 "rw_ios_per_sec": 0, 00:35:41.074 "rw_mbytes_per_sec": 0, 00:35:41.074 "r_mbytes_per_sec": 0, 00:35:41.074 "w_mbytes_per_sec": 0 00:35:41.074 }, 00:35:41.074 "claimed": true, 00:35:41.074 "claim_type": "exclusive_write", 00:35:41.074 "zoned": false, 00:35:41.074 "supported_io_types": { 00:35:41.074 "read": true, 00:35:41.074 "write": true, 00:35:41.074 "unmap": true, 00:35:41.074 "flush": true, 00:35:41.074 "reset": true, 00:35:41.074 "nvme_admin": false, 00:35:41.074 "nvme_io": false, 00:35:41.074 "nvme_io_md": false, 00:35:41.074 "write_zeroes": true, 00:35:41.074 "zcopy": true, 00:35:41.074 "get_zone_info": false, 00:35:41.074 "zone_management": false, 00:35:41.074 "zone_append": false, 00:35:41.074 "compare": false, 00:35:41.074 "compare_and_write": false, 00:35:41.074 "abort": true, 00:35:41.074 "seek_hole": false, 00:35:41.074 "seek_data": false, 00:35:41.074 "copy": true, 00:35:41.074 "nvme_iov_md": false 00:35:41.074 }, 00:35:41.074 "memory_domains": [ 00:35:41.074 { 00:35:41.074 "dma_device_id": "system", 00:35:41.074 "dma_device_type": 1 00:35:41.074 }, 00:35:41.074 { 00:35:41.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:41.074 "dma_device_type": 2 00:35:41.074 } 00:35:41.074 ], 00:35:41.074 "driver_specific": { 00:35:41.074 "passthru": { 00:35:41.074 "name": "pt4", 00:35:41.074 "base_bdev_name": "malloc4" 00:35:41.074 } 00:35:41.074 } 00:35:41.074 }' 00:35:41.074 01:02:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:41.074 01:02:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
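The repeated jq checks traced above (bdev_raid.sh@205 through @208) compare individual bdev properties — block_size, md_size, md_interleave, dif_type — against expected values for each passthru base bdev. A minimal sketch of that pattern, using only the rpc.py path and socket that appear in this log; the check_bdev_property helper name is illustrative and not part of the SPDK test scripts:

    #!/usr/bin/env bash
    # Sketch only: fetch one bdev's JSON over the SPDK RPC socket and
    # compare selected fields, mirroring the jq checks traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    check_bdev_property() {
        local bdev=$1 field=$2 expected=$3
        local actual
        # bdev_get_bdevs -b <name> returns a one-element JSON array
        actual=$("$rpc" -s "$sock" bdev_get_bdevs -b "$bdev" | jq -r ".[].$field")
        [[ "$actual" == "$expected" ]] || echo "$bdev: $field is $actual, expected $expected"
    }

    # Values observed in the log: passthru bdevs report block_size 512 and no metadata/DIF
    check_bdev_property pt1 block_size 512
    check_bdev_property pt1 md_size null
    check_bdev_property pt1 dif_type null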
00:35:41.074 01:02:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:41.075 01:02:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:41.332 01:02:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:41.332 01:02:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:41.332 01:02:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:41.332 01:02:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:41.332 01:02:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:41.332 01:02:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:41.332 01:02:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:41.332 01:02:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:41.332 01:02:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:35:41.332 01:02:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:41.591 [2024-07-25 01:02:04.163129] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:41.591 01:02:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 7af6ef91-ae11-402b-ab92-001346d7129b '!=' 7af6ef91-ae11-402b-ab92-001346d7129b ']' 00:35:41.591 01:02:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid5f 00:35:41.591 01:02:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:35:41.591 01:02:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:35:41.591 01:02:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:35:41.850 [2024-07-25 01:02:04.439056] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:35:41.850 01:02:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:41.850 01:02:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:41.850 01:02:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:41.850 01:02:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:41.850 01:02:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:41.850 01:02:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:41.850 01:02:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:41.850 01:02:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:41.850 01:02:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:41.850 01:02:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:41.850 01:02:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:41.850 01:02:04 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:42.109 01:02:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:42.109 "name": "raid_bdev1", 00:35:42.109 "uuid": "7af6ef91-ae11-402b-ab92-001346d7129b", 00:35:42.109 "strip_size_kb": 64, 00:35:42.109 "state": "online", 00:35:42.109 "raid_level": "raid5f", 00:35:42.109 "superblock": true, 00:35:42.109 "num_base_bdevs": 4, 00:35:42.109 "num_base_bdevs_discovered": 3, 00:35:42.109 "num_base_bdevs_operational": 3, 00:35:42.109 "base_bdevs_list": [ 00:35:42.109 { 00:35:42.109 "name": null, 00:35:42.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:42.109 "is_configured": false, 00:35:42.109 "data_offset": 2048, 00:35:42.109 "data_size": 63488 00:35:42.109 }, 00:35:42.109 { 00:35:42.109 "name": "pt2", 00:35:42.109 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:42.109 "is_configured": true, 00:35:42.109 "data_offset": 2048, 00:35:42.109 "data_size": 63488 00:35:42.109 }, 00:35:42.109 { 00:35:42.109 "name": "pt3", 00:35:42.109 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:42.109 "is_configured": true, 00:35:42.109 "data_offset": 2048, 00:35:42.109 "data_size": 63488 00:35:42.109 }, 00:35:42.109 { 00:35:42.109 "name": "pt4", 00:35:42.109 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:42.109 "is_configured": true, 00:35:42.109 "data_offset": 2048, 00:35:42.109 "data_size": 63488 00:35:42.109 } 00:35:42.109 ] 00:35:42.109 }' 00:35:42.109 01:02:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:42.109 01:02:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:42.677 01:02:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:42.936 [2024-07-25 01:02:05.519239] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:42.936 [2024-07-25 01:02:05.519398] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:42.936 [2024-07-25 01:02:05.519596] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:42.936 [2024-07-25 01:02:05.519695] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:42.936 [2024-07-25 01:02:05.519886] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:35:42.936 01:02:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:42.936 01:02:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:35:43.193 01:02:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:35:43.193 01:02:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:35:43.193 01:02:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:35:43.193 01:02:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:35:43.193 01:02:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:35:43.450 01:02:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:35:43.450 01:02:05 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:35:43.450 01:02:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:35:43.709 01:02:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:35:43.709 01:02:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:35:43.709 01:02:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:35:43.709 01:02:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:35:43.709 01:02:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:35:43.709 01:02:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:35:43.709 01:02:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:35:43.709 01:02:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:43.967 [2024-07-25 01:02:06.475377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:43.967 [2024-07-25 01:02:06.475629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:43.967 [2024-07-25 01:02:06.475691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:35:43.967 [2024-07-25 01:02:06.475804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:43.967 [2024-07-25 01:02:06.478171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:43.967 [2024-07-25 01:02:06.478353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:43.967 [2024-07-25 01:02:06.478589] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:43.967 [2024-07-25 01:02:06.478718] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:43.967 pt2 00:35:43.967 01:02:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:35:43.967 01:02:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:43.967 01:02:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:43.967 01:02:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:43.967 01:02:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:43.967 01:02:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:43.967 01:02:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:43.967 01:02:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:43.967 01:02:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:43.967 01:02:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:43.967 01:02:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:35:43.967 01:02:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:44.226 01:02:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:44.226 "name": "raid_bdev1", 00:35:44.226 "uuid": "7af6ef91-ae11-402b-ab92-001346d7129b", 00:35:44.226 "strip_size_kb": 64, 00:35:44.226 "state": "configuring", 00:35:44.226 "raid_level": "raid5f", 00:35:44.226 "superblock": true, 00:35:44.226 "num_base_bdevs": 4, 00:35:44.226 "num_base_bdevs_discovered": 1, 00:35:44.226 "num_base_bdevs_operational": 3, 00:35:44.226 "base_bdevs_list": [ 00:35:44.226 { 00:35:44.226 "name": null, 00:35:44.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:44.226 "is_configured": false, 00:35:44.226 "data_offset": 2048, 00:35:44.226 "data_size": 63488 00:35:44.226 }, 00:35:44.226 { 00:35:44.226 "name": "pt2", 00:35:44.226 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:44.226 "is_configured": true, 00:35:44.226 "data_offset": 2048, 00:35:44.226 "data_size": 63488 00:35:44.226 }, 00:35:44.226 { 00:35:44.226 "name": null, 00:35:44.226 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:44.226 "is_configured": false, 00:35:44.226 "data_offset": 2048, 00:35:44.226 "data_size": 63488 00:35:44.226 }, 00:35:44.226 { 00:35:44.226 "name": null, 00:35:44.226 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:44.226 "is_configured": false, 00:35:44.226 "data_offset": 2048, 00:35:44.226 "data_size": 63488 00:35:44.226 } 00:35:44.226 ] 00:35:44.226 }' 00:35:44.226 01:02:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:44.226 01:02:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:44.793 01:02:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:35:44.793 01:02:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:35:44.793 01:02:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:35:45.052 [2024-07-25 01:02:07.487572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:35:45.052 [2024-07-25 01:02:07.487846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:45.052 [2024-07-25 01:02:07.487924] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:35:45.052 [2024-07-25 01:02:07.488035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:45.052 [2024-07-25 01:02:07.488511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:45.052 [2024-07-25 01:02:07.488657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:35:45.052 [2024-07-25 01:02:07.488866] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:35:45.052 [2024-07-25 01:02:07.488971] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:35:45.052 pt3 00:35:45.052 01:02:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:35:45.052 01:02:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:45.052 01:02:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:35:45.052 01:02:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:45.052 01:02:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:45.052 01:02:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:45.052 01:02:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:45.052 01:02:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:45.052 01:02:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:45.052 01:02:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:45.052 01:02:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:45.053 01:02:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:45.311 01:02:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:45.311 "name": "raid_bdev1", 00:35:45.311 "uuid": "7af6ef91-ae11-402b-ab92-001346d7129b", 00:35:45.311 "strip_size_kb": 64, 00:35:45.311 "state": "configuring", 00:35:45.311 "raid_level": "raid5f", 00:35:45.311 "superblock": true, 00:35:45.311 "num_base_bdevs": 4, 00:35:45.311 "num_base_bdevs_discovered": 2, 00:35:45.311 "num_base_bdevs_operational": 3, 00:35:45.311 "base_bdevs_list": [ 00:35:45.311 { 00:35:45.311 "name": null, 00:35:45.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:45.311 "is_configured": false, 00:35:45.311 "data_offset": 2048, 00:35:45.311 "data_size": 63488 00:35:45.311 }, 00:35:45.311 { 00:35:45.311 "name": "pt2", 00:35:45.311 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:45.311 "is_configured": true, 00:35:45.311 "data_offset": 2048, 00:35:45.311 "data_size": 63488 00:35:45.311 }, 00:35:45.311 { 00:35:45.311 "name": "pt3", 00:35:45.311 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:45.311 "is_configured": true, 00:35:45.311 "data_offset": 2048, 00:35:45.311 "data_size": 63488 00:35:45.311 }, 00:35:45.311 { 00:35:45.311 "name": null, 00:35:45.311 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:45.311 "is_configured": false, 00:35:45.311 "data_offset": 2048, 00:35:45.311 "data_size": 63488 00:35:45.311 } 00:35:45.311 ] 00:35:45.311 }' 00:35:45.311 01:02:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:45.311 01:02:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:45.916 01:02:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:35:45.916 01:02:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:35:45.916 01:02:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:35:45.916 01:02:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:35:46.174 [2024-07-25 01:02:08.607817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:35:46.174 [2024-07-25 01:02:08.608079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:46.174 [2024-07-25 01:02:08.608151] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:35:46.174 [2024-07-25 01:02:08.608247] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:46.174 [2024-07-25 01:02:08.608750] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:46.174 [2024-07-25 01:02:08.608900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:35:46.174 [2024-07-25 01:02:08.609094] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:35:46.174 [2024-07-25 01:02:08.609193] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:35:46.174 [2024-07-25 01:02:08.609358] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:35:46.174 [2024-07-25 01:02:08.609499] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:35:46.174 [2024-07-25 01:02:08.609615] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:35:46.174 [2024-07-25 01:02:08.616883] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:35:46.174 [2024-07-25 01:02:08.617011] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:35:46.174 [2024-07-25 01:02:08.617383] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:46.174 pt4 00:35:46.174 01:02:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:46.174 01:02:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:46.174 01:02:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:46.175 01:02:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:46.175 01:02:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:46.175 01:02:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:46.175 01:02:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:46.175 01:02:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:46.175 01:02:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:46.175 01:02:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:46.175 01:02:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:46.175 01:02:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:46.433 01:02:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:46.433 "name": "raid_bdev1", 00:35:46.433 "uuid": "7af6ef91-ae11-402b-ab92-001346d7129b", 00:35:46.433 "strip_size_kb": 64, 00:35:46.433 "state": "online", 00:35:46.433 "raid_level": "raid5f", 00:35:46.433 "superblock": true, 00:35:46.433 "num_base_bdevs": 4, 00:35:46.433 "num_base_bdevs_discovered": 3, 00:35:46.433 "num_base_bdevs_operational": 3, 00:35:46.433 "base_bdevs_list": [ 00:35:46.433 { 00:35:46.433 "name": null, 00:35:46.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:46.433 "is_configured": false, 00:35:46.433 
"data_offset": 2048, 00:35:46.433 "data_size": 63488 00:35:46.433 }, 00:35:46.433 { 00:35:46.433 "name": "pt2", 00:35:46.433 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:46.433 "is_configured": true, 00:35:46.433 "data_offset": 2048, 00:35:46.433 "data_size": 63488 00:35:46.433 }, 00:35:46.433 { 00:35:46.433 "name": "pt3", 00:35:46.433 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:46.433 "is_configured": true, 00:35:46.433 "data_offset": 2048, 00:35:46.433 "data_size": 63488 00:35:46.433 }, 00:35:46.433 { 00:35:46.433 "name": "pt4", 00:35:46.433 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:46.433 "is_configured": true, 00:35:46.433 "data_offset": 2048, 00:35:46.433 "data_size": 63488 00:35:46.433 } 00:35:46.433 ] 00:35:46.433 }' 00:35:46.433 01:02:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:46.433 01:02:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:47.001 01:02:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:47.260 [2024-07-25 01:02:09.704028] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:47.260 [2024-07-25 01:02:09.704232] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:47.260 [2024-07-25 01:02:09.704425] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:47.260 [2024-07-25 01:02:09.704522] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:47.260 [2024-07-25 01:02:09.704729] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:35:47.260 01:02:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:47.260 01:02:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:35:47.519 01:02:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:35:47.519 01:02:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:35:47.519 01:02:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:35:47.520 01:02:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:35:47.520 01:02:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:35:47.520 01:02:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:47.780 [2024-07-25 01:02:10.376150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:47.780 [2024-07-25 01:02:10.376426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:47.780 [2024-07-25 01:02:10.376494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:35:47.780 [2024-07-25 01:02:10.376609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:47.780 [2024-07-25 01:02:10.378958] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:47.780 [2024-07-25 01:02:10.379127] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:47.780 [2024-07-25 01:02:10.379329] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:35:47.780 [2024-07-25 01:02:10.379488] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:47.780 [2024-07-25 01:02:10.379670] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:35:47.780 [2024-07-25 01:02:10.379797] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:47.780 [2024-07-25 01:02:10.379844] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cc80 name raid_bdev1, state configuring 00:35:47.780 [2024-07-25 01:02:10.379934] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:47.780 [2024-07-25 01:02:10.380224] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:35:47.780 pt1 00:35:47.780 01:02:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:35:47.780 01:02:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:35:47.780 01:02:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:47.780 01:02:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:47.780 01:02:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:47.780 01:02:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:47.780 01:02:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:47.780 01:02:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:47.780 01:02:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:47.780 01:02:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:47.780 01:02:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:47.780 01:02:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:47.780 01:02:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:48.040 01:02:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:48.040 "name": "raid_bdev1", 00:35:48.040 "uuid": "7af6ef91-ae11-402b-ab92-001346d7129b", 00:35:48.040 "strip_size_kb": 64, 00:35:48.040 "state": "configuring", 00:35:48.040 "raid_level": "raid5f", 00:35:48.040 "superblock": true, 00:35:48.040 "num_base_bdevs": 4, 00:35:48.040 "num_base_bdevs_discovered": 2, 00:35:48.040 "num_base_bdevs_operational": 3, 00:35:48.040 "base_bdevs_list": [ 00:35:48.040 { 00:35:48.040 "name": null, 00:35:48.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:48.040 "is_configured": false, 00:35:48.040 "data_offset": 2048, 00:35:48.040 "data_size": 63488 00:35:48.040 }, 00:35:48.040 { 00:35:48.040 "name": "pt2", 00:35:48.040 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:48.040 "is_configured": true, 00:35:48.040 "data_offset": 2048, 00:35:48.040 "data_size": 63488 00:35:48.040 }, 00:35:48.040 { 00:35:48.040 
"name": "pt3", 00:35:48.040 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:48.040 "is_configured": true, 00:35:48.040 "data_offset": 2048, 00:35:48.040 "data_size": 63488 00:35:48.040 }, 00:35:48.040 { 00:35:48.040 "name": null, 00:35:48.040 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:48.040 "is_configured": false, 00:35:48.040 "data_offset": 2048, 00:35:48.040 "data_size": 63488 00:35:48.040 } 00:35:48.040 ] 00:35:48.040 }' 00:35:48.040 01:02:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:48.040 01:02:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:48.608 01:02:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:35:48.608 01:02:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:35:48.867 01:02:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:35:48.867 01:02:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:35:49.127 [2024-07-25 01:02:11.640536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:35:49.127 [2024-07-25 01:02:11.640805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:49.127 [2024-07-25 01:02:11.640873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d280 00:35:49.127 [2024-07-25 01:02:11.640992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:49.127 [2024-07-25 01:02:11.641459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:49.127 [2024-07-25 01:02:11.641605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:35:49.127 [2024-07-25 01:02:11.641799] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:35:49.127 [2024-07-25 01:02:11.641903] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:35:49.127 [2024-07-25 01:02:11.642110] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000cf80 00:35:49.127 [2024-07-25 01:02:11.642200] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:35:49.127 [2024-07-25 01:02:11.642386] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:35:49.127 [2024-07-25 01:02:11.649916] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000cf80 00:35:49.127 [2024-07-25 01:02:11.650056] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000cf80 00:35:49.127 [2024-07-25 01:02:11.650451] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:49.127 pt4 00:35:49.127 01:02:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:49.127 01:02:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:49.127 01:02:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:49.127 01:02:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 
00:35:49.127 01:02:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:49.127 01:02:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:49.127 01:02:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:49.127 01:02:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:49.127 01:02:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:49.127 01:02:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:49.127 01:02:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:49.127 01:02:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:49.386 01:02:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:49.386 "name": "raid_bdev1", 00:35:49.386 "uuid": "7af6ef91-ae11-402b-ab92-001346d7129b", 00:35:49.386 "strip_size_kb": 64, 00:35:49.386 "state": "online", 00:35:49.386 "raid_level": "raid5f", 00:35:49.386 "superblock": true, 00:35:49.386 "num_base_bdevs": 4, 00:35:49.386 "num_base_bdevs_discovered": 3, 00:35:49.386 "num_base_bdevs_operational": 3, 00:35:49.386 "base_bdevs_list": [ 00:35:49.386 { 00:35:49.386 "name": null, 00:35:49.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:49.386 "is_configured": false, 00:35:49.386 "data_offset": 2048, 00:35:49.386 "data_size": 63488 00:35:49.386 }, 00:35:49.386 { 00:35:49.386 "name": "pt2", 00:35:49.386 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:49.386 "is_configured": true, 00:35:49.386 "data_offset": 2048, 00:35:49.386 "data_size": 63488 00:35:49.386 }, 00:35:49.386 { 00:35:49.386 "name": "pt3", 00:35:49.386 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:49.386 "is_configured": true, 00:35:49.386 "data_offset": 2048, 00:35:49.386 "data_size": 63488 00:35:49.386 }, 00:35:49.386 { 00:35:49.386 "name": "pt4", 00:35:49.386 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:49.386 "is_configured": true, 00:35:49.386 "data_offset": 2048, 00:35:49.386 "data_size": 63488 00:35:49.386 } 00:35:49.386 ] 00:35:49.386 }' 00:35:49.386 01:02:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:49.386 01:02:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:49.954 01:02:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:35:49.954 01:02:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:35:50.214 01:02:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:35:50.214 01:02:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:50.214 01:02:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:35:50.472 [2024-07-25 01:02:13.008935] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:50.472 01:02:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 7af6ef91-ae11-402b-ab92-001346d7129b '!=' 
7af6ef91-ae11-402b-ab92-001346d7129b ']' 00:35:50.472 01:02:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 156670 00:35:50.472 01:02:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 156670 ']' 00:35:50.472 01:02:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # kill -0 156670 00:35:50.472 01:02:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@953 -- # uname 00:35:50.472 01:02:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:50.472 01:02:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 156670 00:35:50.472 killing process with pid 156670 00:35:50.472 01:02:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:50.472 01:02:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:50.472 01:02:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 156670' 00:35:50.472 01:02:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@967 -- # kill 156670 00:35:50.472 [2024-07-25 01:02:13.057588] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:50.472 01:02:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # wait 156670 00:35:50.472 [2024-07-25 01:02:13.057660] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:50.472 [2024-07-25 01:02:13.057731] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:50.472 [2024-07-25 01:02:13.057741] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cf80 name raid_bdev1, state offline 00:35:51.039 [2024-07-25 01:02:13.482111] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:52.413 ************************************ 00:35:52.413 END TEST raid5f_superblock_test 00:35:52.413 ************************************ 00:35:52.413 01:02:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:35:52.413 00:35:52.413 real 0m25.055s 00:35:52.413 user 0m44.945s 00:35:52.413 sys 0m3.704s 00:35:52.413 01:02:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:52.413 01:02:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:52.413 01:02:14 bdev_raid -- bdev/bdev_raid.sh@889 -- # '[' true = true ']' 00:35:52.413 01:02:14 bdev_raid -- bdev/bdev_raid.sh@890 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:35:52.413 01:02:14 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:35:52.413 01:02:14 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:52.413 01:02:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:52.413 ************************************ 00:35:52.413 START TEST raid5f_rebuild_test 00:35:52.413 ************************************ 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid5f 4 false false true 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
superblock=false 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=157499 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 157499 /var/tmp/spdk-raid.sock 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 157499 ']' 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:35:52.413 01:02:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:35:52.414 01:02:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:52.414 01:02:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:52.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:52.414 01:02:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:52.414 01:02:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:52.414 [2024-07-25 01:02:15.013018] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:35:52.414 [2024-07-25 01:02:15.013378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid157499 ] 00:35:52.414 I/O size of 3145728 is greater than zero copy threshold (65536). 00:35:52.414 Zero copy mechanism will not be used. 00:35:52.672 [2024-07-25 01:02:15.174518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:52.931 [2024-07-25 01:02:15.373988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:52.931 [2024-07-25 01:02:15.561867] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:53.498 01:02:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:53.498 01:02:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:35:53.498 01:02:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:53.498 01:02:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:35:53.766 BaseBdev1_malloc 00:35:53.766 01:02:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:35:54.033 [2024-07-25 01:02:16.538583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:35:54.033 [2024-07-25 01:02:16.538923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:54.033 [2024-07-25 01:02:16.538997] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:35:54.033 [2024-07-25 01:02:16.539096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:54.033 [2024-07-25 01:02:16.541464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:54.034 [2024-07-25 01:02:16.541628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:54.034 BaseBdev1 00:35:54.034 01:02:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:54.034 01:02:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:35:54.291 BaseBdev2_malloc 00:35:54.291 01:02:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create 
-b BaseBdev2_malloc -p BaseBdev2 00:35:54.548 [2024-07-25 01:02:17.046861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:35:54.548 [2024-07-25 01:02:17.047237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:54.548 [2024-07-25 01:02:17.047325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:35:54.548 [2024-07-25 01:02:17.047456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:54.548 [2024-07-25 01:02:17.049998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:54.548 [2024-07-25 01:02:17.050165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:35:54.548 BaseBdev2 00:35:54.548 01:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:54.548 01:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:35:54.806 BaseBdev3_malloc 00:35:54.806 01:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:35:54.806 [2024-07-25 01:02:17.451251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:35:54.806 [2024-07-25 01:02:17.451555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:54.806 [2024-07-25 01:02:17.451626] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:35:54.806 [2024-07-25 01:02:17.451725] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:54.806 [2024-07-25 01:02:17.453979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:54.806 [2024-07-25 01:02:17.454146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:35:54.806 BaseBdev3 00:35:55.064 01:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:55.064 01:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:35:55.323 BaseBdev4_malloc 00:35:55.323 01:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:35:55.323 [2024-07-25 01:02:17.905811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:35:55.323 [2024-07-25 01:02:17.906172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:55.323 [2024-07-25 01:02:17.906258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:35:55.323 [2024-07-25 01:02:17.906379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:55.323 [2024-07-25 01:02:17.908705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:55.323 [2024-07-25 01:02:17.908876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:35:55.323 BaseBdev4 00:35:55.323 01:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:35:55.582 spare_malloc 00:35:55.582 01:02:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:35:55.840 spare_delay 00:35:55.841 01:02:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:35:56.099 [2024-07-25 01:02:18.646982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:56.100 [2024-07-25 01:02:18.647319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:56.100 [2024-07-25 01:02:18.647392] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:35:56.100 [2024-07-25 01:02:18.647501] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:56.100 [2024-07-25 01:02:18.649990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:56.100 [2024-07-25 01:02:18.650159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:56.100 spare 00:35:56.100 01:02:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:35:56.358 [2024-07-25 01:02:18.835122] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:56.359 [2024-07-25 01:02:18.837286] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:56.359 [2024-07-25 01:02:18.837493] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:56.359 [2024-07-25 01:02:18.837570] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:56.359 [2024-07-25 01:02:18.837760] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:35:56.359 [2024-07-25 01:02:18.837800] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:35:56.359 [2024-07-25 01:02:18.838035] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:35:56.359 [2024-07-25 01:02:18.845793] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:35:56.359 [2024-07-25 01:02:18.845924] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:35:56.359 [2024-07-25 01:02:18.846296] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:56.359 01:02:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:35:56.359 01:02:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:56.359 01:02:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:56.359 01:02:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:56.359 01:02:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:56.359 01:02:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:56.359 01:02:18 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:56.359 01:02:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:56.359 01:02:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:56.359 01:02:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:56.359 01:02:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:56.359 01:02:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:56.618 01:02:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:56.618 "name": "raid_bdev1", 00:35:56.618 "uuid": "e2fd56e0-c876-4825-a4b5-442f946b5a70", 00:35:56.618 "strip_size_kb": 64, 00:35:56.618 "state": "online", 00:35:56.618 "raid_level": "raid5f", 00:35:56.618 "superblock": false, 00:35:56.618 "num_base_bdevs": 4, 00:35:56.618 "num_base_bdevs_discovered": 4, 00:35:56.618 "num_base_bdevs_operational": 4, 00:35:56.618 "base_bdevs_list": [ 00:35:56.618 { 00:35:56.618 "name": "BaseBdev1", 00:35:56.618 "uuid": "9869172f-cb8d-5b6f-b59f-3eb2bcb7e3be", 00:35:56.618 "is_configured": true, 00:35:56.618 "data_offset": 0, 00:35:56.618 "data_size": 65536 00:35:56.618 }, 00:35:56.618 { 00:35:56.618 "name": "BaseBdev2", 00:35:56.618 "uuid": "765d51cf-7e1d-5e62-b939-230a16776fd7", 00:35:56.618 "is_configured": true, 00:35:56.618 "data_offset": 0, 00:35:56.618 "data_size": 65536 00:35:56.618 }, 00:35:56.618 { 00:35:56.618 "name": "BaseBdev3", 00:35:56.618 "uuid": "debd27f7-ada6-513e-bfd1-b0962e43e2d3", 00:35:56.618 "is_configured": true, 00:35:56.618 "data_offset": 0, 00:35:56.618 "data_size": 65536 00:35:56.618 }, 00:35:56.618 { 00:35:56.618 "name": "BaseBdev4", 00:35:56.618 "uuid": "33f73105-402a-54ff-ab46-49e75d27c9a6", 00:35:56.618 "is_configured": true, 00:35:56.618 "data_offset": 0, 00:35:56.618 "data_size": 65536 00:35:56.618 } 00:35:56.618 ] 00:35:56.618 }' 00:35:56.618 01:02:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:56.618 01:02:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.186 01:02:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:57.186 01:02:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:35:57.445 [2024-07-25 01:02:19.875153] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:57.445 01:02:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=196608 00:35:57.445 01:02:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:57.445 01:02:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:35:57.445 01:02:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:35:57.445 01:02:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:35:57.445 01:02:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:35:57.445 01:02:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:35:57.445 01:02:20 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:35:57.445 01:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:57.445 01:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:35:57.445 01:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:57.445 01:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:35:57.445 01:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:57.445 01:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:35:57.445 01:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:57.445 01:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:57.445 01:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:35:57.703 [2024-07-25 01:02:20.263131] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:35:57.703 /dev/nbd0 00:35:57.703 01:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:57.703 01:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:57.703 01:02:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:35:57.703 01:02:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:35:57.703 01:02:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:35:57.703 01:02:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:35:57.703 01:02:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:35:57.703 01:02:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:35:57.703 01:02:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:35:57.703 01:02:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:35:57.703 01:02:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:57.703 1+0 records in 00:35:57.703 1+0 records out 00:35:57.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000496854 s, 8.2 MB/s 00:35:57.703 01:02:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:57.703 01:02:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:35:57.703 01:02:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:57.703 01:02:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:35:57.703 01:02:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:35:57.704 01:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:57.704 01:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:57.704 01:02:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:35:57.704 01:02:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 
-- # write_unit_size=384 00:35:57.704 01:02:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # echo 192 00:35:57.704 01:02:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:35:58.270 512+0 records in 00:35:58.270 512+0 records out 00:35:58.270 100663296 bytes (101 MB, 96 MiB) copied, 0.478192 s, 211 MB/s 00:35:58.270 01:02:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:35:58.270 01:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:58.270 01:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:35:58.270 01:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:58.270 01:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:35:58.270 01:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:58.270 01:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:35:58.529 [2024-07-25 01:02:21.036090] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:58.529 01:02:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:58.529 01:02:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:58.529 01:02:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:58.529 01:02:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:58.529 01:02:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:58.529 01:02:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:58.529 01:02:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:35:58.529 01:02:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:35:58.529 01:02:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:35:58.787 [2024-07-25 01:02:21.302048] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:58.787 01:02:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:58.787 01:02:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:58.787 01:02:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:58.787 01:02:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:58.787 01:02:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:58.787 01:02:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:58.787 01:02:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:58.787 01:02:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:58.787 01:02:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:58.787 01:02:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:58.787 01:02:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:58.787 01:02:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:59.047 01:02:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:59.047 "name": "raid_bdev1", 00:35:59.047 "uuid": "e2fd56e0-c876-4825-a4b5-442f946b5a70", 00:35:59.047 "strip_size_kb": 64, 00:35:59.047 "state": "online", 00:35:59.047 "raid_level": "raid5f", 00:35:59.047 "superblock": false, 00:35:59.047 "num_base_bdevs": 4, 00:35:59.047 "num_base_bdevs_discovered": 3, 00:35:59.047 "num_base_bdevs_operational": 3, 00:35:59.047 "base_bdevs_list": [ 00:35:59.047 { 00:35:59.047 "name": null, 00:35:59.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:59.047 "is_configured": false, 00:35:59.047 "data_offset": 0, 00:35:59.047 "data_size": 65536 00:35:59.047 }, 00:35:59.047 { 00:35:59.047 "name": "BaseBdev2", 00:35:59.047 "uuid": "765d51cf-7e1d-5e62-b939-230a16776fd7", 00:35:59.047 "is_configured": true, 00:35:59.047 "data_offset": 0, 00:35:59.047 "data_size": 65536 00:35:59.047 }, 00:35:59.047 { 00:35:59.047 "name": "BaseBdev3", 00:35:59.047 "uuid": "debd27f7-ada6-513e-bfd1-b0962e43e2d3", 00:35:59.047 "is_configured": true, 00:35:59.047 "data_offset": 0, 00:35:59.047 "data_size": 65536 00:35:59.047 }, 00:35:59.047 { 00:35:59.047 "name": "BaseBdev4", 00:35:59.047 "uuid": "33f73105-402a-54ff-ab46-49e75d27c9a6", 00:35:59.047 "is_configured": true, 00:35:59.047 "data_offset": 0, 00:35:59.047 "data_size": 65536 00:35:59.047 } 00:35:59.047 ] 00:35:59.047 }' 00:35:59.047 01:02:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:59.047 01:02:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.613 01:02:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:35:59.613 [2024-07-25 01:02:22.254222] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:59.872 [2024-07-25 01:02:22.269915] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:35:59.872 [2024-07-25 01:02:22.280075] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:59.872 01:02:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:36:00.834 01:02:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:00.834 01:02:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:00.834 01:02:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:00.834 01:02:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:00.834 01:02:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:00.834 01:02:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:00.834 01:02:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:01.093 01:02:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:01.093 "name": 
"raid_bdev1", 00:36:01.093 "uuid": "e2fd56e0-c876-4825-a4b5-442f946b5a70", 00:36:01.093 "strip_size_kb": 64, 00:36:01.093 "state": "online", 00:36:01.093 "raid_level": "raid5f", 00:36:01.093 "superblock": false, 00:36:01.093 "num_base_bdevs": 4, 00:36:01.093 "num_base_bdevs_discovered": 4, 00:36:01.093 "num_base_bdevs_operational": 4, 00:36:01.093 "process": { 00:36:01.093 "type": "rebuild", 00:36:01.093 "target": "spare", 00:36:01.093 "progress": { 00:36:01.093 "blocks": 23040, 00:36:01.093 "percent": 11 00:36:01.093 } 00:36:01.093 }, 00:36:01.093 "base_bdevs_list": [ 00:36:01.093 { 00:36:01.093 "name": "spare", 00:36:01.093 "uuid": "637d2277-6e8c-5f0b-8496-c758e16688f5", 00:36:01.093 "is_configured": true, 00:36:01.093 "data_offset": 0, 00:36:01.093 "data_size": 65536 00:36:01.093 }, 00:36:01.093 { 00:36:01.093 "name": "BaseBdev2", 00:36:01.093 "uuid": "765d51cf-7e1d-5e62-b939-230a16776fd7", 00:36:01.093 "is_configured": true, 00:36:01.093 "data_offset": 0, 00:36:01.093 "data_size": 65536 00:36:01.093 }, 00:36:01.093 { 00:36:01.093 "name": "BaseBdev3", 00:36:01.093 "uuid": "debd27f7-ada6-513e-bfd1-b0962e43e2d3", 00:36:01.093 "is_configured": true, 00:36:01.093 "data_offset": 0, 00:36:01.093 "data_size": 65536 00:36:01.093 }, 00:36:01.093 { 00:36:01.093 "name": "BaseBdev4", 00:36:01.093 "uuid": "33f73105-402a-54ff-ab46-49e75d27c9a6", 00:36:01.093 "is_configured": true, 00:36:01.093 "data_offset": 0, 00:36:01.093 "data_size": 65536 00:36:01.093 } 00:36:01.093 ] 00:36:01.093 }' 00:36:01.093 01:02:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:01.093 01:02:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:01.093 01:02:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:01.093 01:02:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:01.093 01:02:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:36:01.353 [2024-07-25 01:02:23.881707] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:01.353 [2024-07-25 01:02:23.892658] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:01.353 [2024-07-25 01:02:23.892878] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:01.353 [2024-07-25 01:02:23.892929] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:01.353 [2024-07-25 01:02:23.893003] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:01.353 01:02:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:01.353 01:02:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:01.353 01:02:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:01.353 01:02:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:01.353 01:02:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:01.353 01:02:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:36:01.353 01:02:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:36:01.353 01:02:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:01.353 01:02:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:01.353 01:02:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:01.353 01:02:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:01.353 01:02:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:01.612 01:02:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:01.612 "name": "raid_bdev1", 00:36:01.612 "uuid": "e2fd56e0-c876-4825-a4b5-442f946b5a70", 00:36:01.612 "strip_size_kb": 64, 00:36:01.612 "state": "online", 00:36:01.612 "raid_level": "raid5f", 00:36:01.612 "superblock": false, 00:36:01.612 "num_base_bdevs": 4, 00:36:01.612 "num_base_bdevs_discovered": 3, 00:36:01.612 "num_base_bdevs_operational": 3, 00:36:01.612 "base_bdevs_list": [ 00:36:01.612 { 00:36:01.612 "name": null, 00:36:01.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:01.612 "is_configured": false, 00:36:01.612 "data_offset": 0, 00:36:01.612 "data_size": 65536 00:36:01.612 }, 00:36:01.612 { 00:36:01.612 "name": "BaseBdev2", 00:36:01.612 "uuid": "765d51cf-7e1d-5e62-b939-230a16776fd7", 00:36:01.612 "is_configured": true, 00:36:01.612 "data_offset": 0, 00:36:01.612 "data_size": 65536 00:36:01.612 }, 00:36:01.612 { 00:36:01.612 "name": "BaseBdev3", 00:36:01.612 "uuid": "debd27f7-ada6-513e-bfd1-b0962e43e2d3", 00:36:01.612 "is_configured": true, 00:36:01.612 "data_offset": 0, 00:36:01.612 "data_size": 65536 00:36:01.612 }, 00:36:01.612 { 00:36:01.612 "name": "BaseBdev4", 00:36:01.612 "uuid": "33f73105-402a-54ff-ab46-49e75d27c9a6", 00:36:01.612 "is_configured": true, 00:36:01.612 "data_offset": 0, 00:36:01.612 "data_size": 65536 00:36:01.612 } 00:36:01.612 ] 00:36:01.612 }' 00:36:01.612 01:02:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:01.612 01:02:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.179 01:02:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:02.179 01:02:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:02.180 01:02:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:02.180 01:02:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:02.180 01:02:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:02.180 01:02:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:02.180 01:02:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:02.439 01:02:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:02.439 "name": "raid_bdev1", 00:36:02.439 "uuid": "e2fd56e0-c876-4825-a4b5-442f946b5a70", 00:36:02.439 "strip_size_kb": 64, 00:36:02.439 "state": "online", 00:36:02.439 "raid_level": "raid5f", 00:36:02.439 "superblock": false, 00:36:02.439 "num_base_bdevs": 4, 00:36:02.439 "num_base_bdevs_discovered": 3, 00:36:02.439 "num_base_bdevs_operational": 3, 
00:36:02.439 "base_bdevs_list": [ 00:36:02.439 { 00:36:02.439 "name": null, 00:36:02.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:02.439 "is_configured": false, 00:36:02.439 "data_offset": 0, 00:36:02.439 "data_size": 65536 00:36:02.439 }, 00:36:02.439 { 00:36:02.439 "name": "BaseBdev2", 00:36:02.439 "uuid": "765d51cf-7e1d-5e62-b939-230a16776fd7", 00:36:02.439 "is_configured": true, 00:36:02.439 "data_offset": 0, 00:36:02.439 "data_size": 65536 00:36:02.439 }, 00:36:02.439 { 00:36:02.439 "name": "BaseBdev3", 00:36:02.439 "uuid": "debd27f7-ada6-513e-bfd1-b0962e43e2d3", 00:36:02.439 "is_configured": true, 00:36:02.439 "data_offset": 0, 00:36:02.439 "data_size": 65536 00:36:02.439 }, 00:36:02.439 { 00:36:02.439 "name": "BaseBdev4", 00:36:02.439 "uuid": "33f73105-402a-54ff-ab46-49e75d27c9a6", 00:36:02.439 "is_configured": true, 00:36:02.439 "data_offset": 0, 00:36:02.439 "data_size": 65536 00:36:02.439 } 00:36:02.439 ] 00:36:02.439 }' 00:36:02.439 01:02:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:02.439 01:02:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:02.439 01:02:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:02.439 01:02:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:02.439 01:02:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:02.698 [2024-07-25 01:02:25.250225] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:02.698 [2024-07-25 01:02:25.263828] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:36:02.698 [2024-07-25 01:02:25.272905] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:02.698 01:02:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:36:03.635 01:02:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:03.635 01:02:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:03.635 01:02:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:03.635 01:02:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:03.635 01:02:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:03.894 01:02:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:03.894 01:02:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:04.153 01:02:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:04.153 "name": "raid_bdev1", 00:36:04.153 "uuid": "e2fd56e0-c876-4825-a4b5-442f946b5a70", 00:36:04.153 "strip_size_kb": 64, 00:36:04.153 "state": "online", 00:36:04.153 "raid_level": "raid5f", 00:36:04.153 "superblock": false, 00:36:04.153 "num_base_bdevs": 4, 00:36:04.153 "num_base_bdevs_discovered": 4, 00:36:04.153 "num_base_bdevs_operational": 4, 00:36:04.153 "process": { 00:36:04.153 "type": "rebuild", 00:36:04.153 "target": "spare", 00:36:04.153 "progress": { 00:36:04.153 "blocks": 23040, 00:36:04.153 
"percent": 11 00:36:04.153 } 00:36:04.153 }, 00:36:04.153 "base_bdevs_list": [ 00:36:04.153 { 00:36:04.153 "name": "spare", 00:36:04.153 "uuid": "637d2277-6e8c-5f0b-8496-c758e16688f5", 00:36:04.153 "is_configured": true, 00:36:04.153 "data_offset": 0, 00:36:04.153 "data_size": 65536 00:36:04.153 }, 00:36:04.153 { 00:36:04.153 "name": "BaseBdev2", 00:36:04.153 "uuid": "765d51cf-7e1d-5e62-b939-230a16776fd7", 00:36:04.153 "is_configured": true, 00:36:04.153 "data_offset": 0, 00:36:04.153 "data_size": 65536 00:36:04.153 }, 00:36:04.153 { 00:36:04.153 "name": "BaseBdev3", 00:36:04.153 "uuid": "debd27f7-ada6-513e-bfd1-b0962e43e2d3", 00:36:04.153 "is_configured": true, 00:36:04.153 "data_offset": 0, 00:36:04.153 "data_size": 65536 00:36:04.153 }, 00:36:04.153 { 00:36:04.153 "name": "BaseBdev4", 00:36:04.153 "uuid": "33f73105-402a-54ff-ab46-49e75d27c9a6", 00:36:04.153 "is_configured": true, 00:36:04.153 "data_offset": 0, 00:36:04.153 "data_size": 65536 00:36:04.153 } 00:36:04.153 ] 00:36:04.153 }' 00:36:04.153 01:02:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:04.153 01:02:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:04.153 01:02:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:04.153 01:02:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:04.153 01:02:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:36:04.153 01:02:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:36:04.153 01:02:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:36:04.153 01:02:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=1223 00:36:04.153 01:02:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:04.153 01:02:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:04.153 01:02:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:04.153 01:02:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:04.153 01:02:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:04.153 01:02:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:04.153 01:02:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:04.153 01:02:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:04.413 01:02:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:04.413 "name": "raid_bdev1", 00:36:04.413 "uuid": "e2fd56e0-c876-4825-a4b5-442f946b5a70", 00:36:04.413 "strip_size_kb": 64, 00:36:04.413 "state": "online", 00:36:04.413 "raid_level": "raid5f", 00:36:04.413 "superblock": false, 00:36:04.413 "num_base_bdevs": 4, 00:36:04.413 "num_base_bdevs_discovered": 4, 00:36:04.413 "num_base_bdevs_operational": 4, 00:36:04.413 "process": { 00:36:04.413 "type": "rebuild", 00:36:04.413 "target": "spare", 00:36:04.413 "progress": { 00:36:04.413 "blocks": 28800, 00:36:04.413 "percent": 14 00:36:04.413 } 00:36:04.413 }, 00:36:04.413 "base_bdevs_list": [ 
00:36:04.413 { 00:36:04.413 "name": "spare", 00:36:04.413 "uuid": "637d2277-6e8c-5f0b-8496-c758e16688f5", 00:36:04.413 "is_configured": true, 00:36:04.413 "data_offset": 0, 00:36:04.413 "data_size": 65536 00:36:04.413 }, 00:36:04.413 { 00:36:04.413 "name": "BaseBdev2", 00:36:04.413 "uuid": "765d51cf-7e1d-5e62-b939-230a16776fd7", 00:36:04.413 "is_configured": true, 00:36:04.413 "data_offset": 0, 00:36:04.413 "data_size": 65536 00:36:04.413 }, 00:36:04.413 { 00:36:04.413 "name": "BaseBdev3", 00:36:04.413 "uuid": "debd27f7-ada6-513e-bfd1-b0962e43e2d3", 00:36:04.413 "is_configured": true, 00:36:04.413 "data_offset": 0, 00:36:04.413 "data_size": 65536 00:36:04.413 }, 00:36:04.413 { 00:36:04.413 "name": "BaseBdev4", 00:36:04.413 "uuid": "33f73105-402a-54ff-ab46-49e75d27c9a6", 00:36:04.413 "is_configured": true, 00:36:04.413 "data_offset": 0, 00:36:04.413 "data_size": 65536 00:36:04.413 } 00:36:04.413 ] 00:36:04.413 }' 00:36:04.413 01:02:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:04.413 01:02:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:04.413 01:02:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:04.413 01:02:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:04.413 01:02:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:05.350 01:02:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:05.350 01:02:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:05.350 01:02:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:05.350 01:02:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:05.350 01:02:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:05.350 01:02:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:05.350 01:02:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:05.350 01:02:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:05.609 01:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:05.609 "name": "raid_bdev1", 00:36:05.609 "uuid": "e2fd56e0-c876-4825-a4b5-442f946b5a70", 00:36:05.609 "strip_size_kb": 64, 00:36:05.609 "state": "online", 00:36:05.609 "raid_level": "raid5f", 00:36:05.609 "superblock": false, 00:36:05.609 "num_base_bdevs": 4, 00:36:05.609 "num_base_bdevs_discovered": 4, 00:36:05.609 "num_base_bdevs_operational": 4, 00:36:05.609 "process": { 00:36:05.609 "type": "rebuild", 00:36:05.609 "target": "spare", 00:36:05.609 "progress": { 00:36:05.609 "blocks": 55680, 00:36:05.609 "percent": 28 00:36:05.609 } 00:36:05.609 }, 00:36:05.609 "base_bdevs_list": [ 00:36:05.609 { 00:36:05.609 "name": "spare", 00:36:05.609 "uuid": "637d2277-6e8c-5f0b-8496-c758e16688f5", 00:36:05.609 "is_configured": true, 00:36:05.609 "data_offset": 0, 00:36:05.609 "data_size": 65536 00:36:05.609 }, 00:36:05.609 { 00:36:05.609 "name": "BaseBdev2", 00:36:05.609 "uuid": "765d51cf-7e1d-5e62-b939-230a16776fd7", 00:36:05.609 "is_configured": true, 00:36:05.609 "data_offset": 0, 00:36:05.609 
"data_size": 65536 00:36:05.609 }, 00:36:05.609 { 00:36:05.609 "name": "BaseBdev3", 00:36:05.609 "uuid": "debd27f7-ada6-513e-bfd1-b0962e43e2d3", 00:36:05.609 "is_configured": true, 00:36:05.609 "data_offset": 0, 00:36:05.609 "data_size": 65536 00:36:05.609 }, 00:36:05.609 { 00:36:05.609 "name": "BaseBdev4", 00:36:05.609 "uuid": "33f73105-402a-54ff-ab46-49e75d27c9a6", 00:36:05.609 "is_configured": true, 00:36:05.609 "data_offset": 0, 00:36:05.609 "data_size": 65536 00:36:05.609 } 00:36:05.609 ] 00:36:05.609 }' 00:36:05.609 01:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:05.609 01:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:05.868 01:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:05.868 01:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:05.868 01:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:06.804 01:02:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:06.804 01:02:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:06.804 01:02:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:06.804 01:02:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:06.804 01:02:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:06.804 01:02:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:06.804 01:02:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:06.804 01:02:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:07.063 01:02:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:07.063 "name": "raid_bdev1", 00:36:07.063 "uuid": "e2fd56e0-c876-4825-a4b5-442f946b5a70", 00:36:07.063 "strip_size_kb": 64, 00:36:07.063 "state": "online", 00:36:07.063 "raid_level": "raid5f", 00:36:07.063 "superblock": false, 00:36:07.063 "num_base_bdevs": 4, 00:36:07.063 "num_base_bdevs_discovered": 4, 00:36:07.063 "num_base_bdevs_operational": 4, 00:36:07.063 "process": { 00:36:07.063 "type": "rebuild", 00:36:07.063 "target": "spare", 00:36:07.063 "progress": { 00:36:07.063 "blocks": 80640, 00:36:07.063 "percent": 41 00:36:07.063 } 00:36:07.063 }, 00:36:07.063 "base_bdevs_list": [ 00:36:07.063 { 00:36:07.063 "name": "spare", 00:36:07.063 "uuid": "637d2277-6e8c-5f0b-8496-c758e16688f5", 00:36:07.063 "is_configured": true, 00:36:07.063 "data_offset": 0, 00:36:07.063 "data_size": 65536 00:36:07.063 }, 00:36:07.063 { 00:36:07.063 "name": "BaseBdev2", 00:36:07.063 "uuid": "765d51cf-7e1d-5e62-b939-230a16776fd7", 00:36:07.063 "is_configured": true, 00:36:07.063 "data_offset": 0, 00:36:07.063 "data_size": 65536 00:36:07.063 }, 00:36:07.063 { 00:36:07.063 "name": "BaseBdev3", 00:36:07.063 "uuid": "debd27f7-ada6-513e-bfd1-b0962e43e2d3", 00:36:07.063 "is_configured": true, 00:36:07.063 "data_offset": 0, 00:36:07.063 "data_size": 65536 00:36:07.063 }, 00:36:07.063 { 00:36:07.063 "name": "BaseBdev4", 00:36:07.063 "uuid": "33f73105-402a-54ff-ab46-49e75d27c9a6", 00:36:07.063 "is_configured": true, 00:36:07.063 
"data_offset": 0, 00:36:07.063 "data_size": 65536 00:36:07.063 } 00:36:07.063 ] 00:36:07.063 }' 00:36:07.063 01:02:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:07.063 01:02:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:07.063 01:02:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:07.063 01:02:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:07.063 01:02:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:08.008 01:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:08.008 01:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:08.008 01:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:08.008 01:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:08.008 01:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:08.008 01:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:08.009 01:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:08.009 01:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:08.274 01:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:08.274 "name": "raid_bdev1", 00:36:08.274 "uuid": "e2fd56e0-c876-4825-a4b5-442f946b5a70", 00:36:08.274 "strip_size_kb": 64, 00:36:08.274 "state": "online", 00:36:08.274 "raid_level": "raid5f", 00:36:08.274 "superblock": false, 00:36:08.274 "num_base_bdevs": 4, 00:36:08.274 "num_base_bdevs_discovered": 4, 00:36:08.274 "num_base_bdevs_operational": 4, 00:36:08.274 "process": { 00:36:08.274 "type": "rebuild", 00:36:08.274 "target": "spare", 00:36:08.274 "progress": { 00:36:08.274 "blocks": 105600, 00:36:08.274 "percent": 53 00:36:08.274 } 00:36:08.274 }, 00:36:08.274 "base_bdevs_list": [ 00:36:08.274 { 00:36:08.274 "name": "spare", 00:36:08.274 "uuid": "637d2277-6e8c-5f0b-8496-c758e16688f5", 00:36:08.274 "is_configured": true, 00:36:08.274 "data_offset": 0, 00:36:08.274 "data_size": 65536 00:36:08.274 }, 00:36:08.274 { 00:36:08.274 "name": "BaseBdev2", 00:36:08.274 "uuid": "765d51cf-7e1d-5e62-b939-230a16776fd7", 00:36:08.274 "is_configured": true, 00:36:08.274 "data_offset": 0, 00:36:08.274 "data_size": 65536 00:36:08.274 }, 00:36:08.274 { 00:36:08.274 "name": "BaseBdev3", 00:36:08.274 "uuid": "debd27f7-ada6-513e-bfd1-b0962e43e2d3", 00:36:08.274 "is_configured": true, 00:36:08.274 "data_offset": 0, 00:36:08.274 "data_size": 65536 00:36:08.274 }, 00:36:08.274 { 00:36:08.274 "name": "BaseBdev4", 00:36:08.274 "uuid": "33f73105-402a-54ff-ab46-49e75d27c9a6", 00:36:08.274 "is_configured": true, 00:36:08.275 "data_offset": 0, 00:36:08.275 "data_size": 65536 00:36:08.275 } 00:36:08.275 ] 00:36:08.275 }' 00:36:08.275 01:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:08.275 01:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:08.275 01:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.target // "none"' 00:36:08.533 01:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:08.533 01:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:09.467 01:02:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:09.467 01:02:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:09.467 01:02:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:09.467 01:02:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:09.467 01:02:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:09.467 01:02:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:09.467 01:02:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:09.468 01:02:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:09.727 01:02:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:09.727 "name": "raid_bdev1", 00:36:09.727 "uuid": "e2fd56e0-c876-4825-a4b5-442f946b5a70", 00:36:09.727 "strip_size_kb": 64, 00:36:09.727 "state": "online", 00:36:09.727 "raid_level": "raid5f", 00:36:09.727 "superblock": false, 00:36:09.727 "num_base_bdevs": 4, 00:36:09.727 "num_base_bdevs_discovered": 4, 00:36:09.727 "num_base_bdevs_operational": 4, 00:36:09.727 "process": { 00:36:09.727 "type": "rebuild", 00:36:09.727 "target": "spare", 00:36:09.727 "progress": { 00:36:09.727 "blocks": 130560, 00:36:09.727 "percent": 66 00:36:09.727 } 00:36:09.727 }, 00:36:09.727 "base_bdevs_list": [ 00:36:09.727 { 00:36:09.727 "name": "spare", 00:36:09.727 "uuid": "637d2277-6e8c-5f0b-8496-c758e16688f5", 00:36:09.727 "is_configured": true, 00:36:09.727 "data_offset": 0, 00:36:09.727 "data_size": 65536 00:36:09.727 }, 00:36:09.727 { 00:36:09.727 "name": "BaseBdev2", 00:36:09.727 "uuid": "765d51cf-7e1d-5e62-b939-230a16776fd7", 00:36:09.727 "is_configured": true, 00:36:09.727 "data_offset": 0, 00:36:09.727 "data_size": 65536 00:36:09.727 }, 00:36:09.727 { 00:36:09.727 "name": "BaseBdev3", 00:36:09.727 "uuid": "debd27f7-ada6-513e-bfd1-b0962e43e2d3", 00:36:09.727 "is_configured": true, 00:36:09.727 "data_offset": 0, 00:36:09.727 "data_size": 65536 00:36:09.727 }, 00:36:09.727 { 00:36:09.727 "name": "BaseBdev4", 00:36:09.727 "uuid": "33f73105-402a-54ff-ab46-49e75d27c9a6", 00:36:09.727 "is_configured": true, 00:36:09.727 "data_offset": 0, 00:36:09.727 "data_size": 65536 00:36:09.727 } 00:36:09.727 ] 00:36:09.727 }' 00:36:09.727 01:02:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:09.727 01:02:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:09.727 01:02:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:09.727 01:02:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:09.727 01:02:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:10.664 01:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:10.664 01:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:10.664 01:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:10.664 01:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:10.664 01:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:10.664 01:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:10.664 01:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:10.664 01:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:10.924 01:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:10.924 "name": "raid_bdev1", 00:36:10.924 "uuid": "e2fd56e0-c876-4825-a4b5-442f946b5a70", 00:36:10.924 "strip_size_kb": 64, 00:36:10.924 "state": "online", 00:36:10.924 "raid_level": "raid5f", 00:36:10.924 "superblock": false, 00:36:10.924 "num_base_bdevs": 4, 00:36:10.924 "num_base_bdevs_discovered": 4, 00:36:10.924 "num_base_bdevs_operational": 4, 00:36:10.924 "process": { 00:36:10.924 "type": "rebuild", 00:36:10.924 "target": "spare", 00:36:10.924 "progress": { 00:36:10.924 "blocks": 155520, 00:36:10.924 "percent": 79 00:36:10.924 } 00:36:10.924 }, 00:36:10.924 "base_bdevs_list": [ 00:36:10.924 { 00:36:10.924 "name": "spare", 00:36:10.924 "uuid": "637d2277-6e8c-5f0b-8496-c758e16688f5", 00:36:10.924 "is_configured": true, 00:36:10.924 "data_offset": 0, 00:36:10.924 "data_size": 65536 00:36:10.924 }, 00:36:10.924 { 00:36:10.924 "name": "BaseBdev2", 00:36:10.924 "uuid": "765d51cf-7e1d-5e62-b939-230a16776fd7", 00:36:10.924 "is_configured": true, 00:36:10.924 "data_offset": 0, 00:36:10.924 "data_size": 65536 00:36:10.924 }, 00:36:10.924 { 00:36:10.924 "name": "BaseBdev3", 00:36:10.924 "uuid": "debd27f7-ada6-513e-bfd1-b0962e43e2d3", 00:36:10.924 "is_configured": true, 00:36:10.924 "data_offset": 0, 00:36:10.924 "data_size": 65536 00:36:10.924 }, 00:36:10.924 { 00:36:10.924 "name": "BaseBdev4", 00:36:10.924 "uuid": "33f73105-402a-54ff-ab46-49e75d27c9a6", 00:36:10.924 "is_configured": true, 00:36:10.924 "data_offset": 0, 00:36:10.924 "data_size": 65536 00:36:10.924 } 00:36:10.924 ] 00:36:10.924 }' 00:36:10.924 01:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:10.924 01:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:10.924 01:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:11.182 01:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:11.182 01:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:12.119 01:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:12.119 01:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:12.119 01:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:12.119 01:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:12.119 01:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:12.119 01:02:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:12.119 01:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:12.119 01:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:12.378 01:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:12.378 "name": "raid_bdev1", 00:36:12.378 "uuid": "e2fd56e0-c876-4825-a4b5-442f946b5a70", 00:36:12.378 "strip_size_kb": 64, 00:36:12.378 "state": "online", 00:36:12.378 "raid_level": "raid5f", 00:36:12.378 "superblock": false, 00:36:12.378 "num_base_bdevs": 4, 00:36:12.378 "num_base_bdevs_discovered": 4, 00:36:12.378 "num_base_bdevs_operational": 4, 00:36:12.378 "process": { 00:36:12.378 "type": "rebuild", 00:36:12.378 "target": "spare", 00:36:12.378 "progress": { 00:36:12.378 "blocks": 182400, 00:36:12.378 "percent": 92 00:36:12.378 } 00:36:12.378 }, 00:36:12.378 "base_bdevs_list": [ 00:36:12.378 { 00:36:12.378 "name": "spare", 00:36:12.378 "uuid": "637d2277-6e8c-5f0b-8496-c758e16688f5", 00:36:12.378 "is_configured": true, 00:36:12.378 "data_offset": 0, 00:36:12.378 "data_size": 65536 00:36:12.378 }, 00:36:12.378 { 00:36:12.378 "name": "BaseBdev2", 00:36:12.378 "uuid": "765d51cf-7e1d-5e62-b939-230a16776fd7", 00:36:12.378 "is_configured": true, 00:36:12.378 "data_offset": 0, 00:36:12.378 "data_size": 65536 00:36:12.378 }, 00:36:12.378 { 00:36:12.378 "name": "BaseBdev3", 00:36:12.378 "uuid": "debd27f7-ada6-513e-bfd1-b0962e43e2d3", 00:36:12.378 "is_configured": true, 00:36:12.378 "data_offset": 0, 00:36:12.378 "data_size": 65536 00:36:12.378 }, 00:36:12.378 { 00:36:12.378 "name": "BaseBdev4", 00:36:12.378 "uuid": "33f73105-402a-54ff-ab46-49e75d27c9a6", 00:36:12.378 "is_configured": true, 00:36:12.378 "data_offset": 0, 00:36:12.378 "data_size": 65536 00:36:12.378 } 00:36:12.378 ] 00:36:12.378 }' 00:36:12.378 01:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:12.378 01:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:12.378 01:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:12.378 01:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:12.378 01:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:13.314 [2024-07-25 01:02:35.643126] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:36:13.314 [2024-07-25 01:02:35.643405] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:36:13.314 [2024-07-25 01:02:35.643605] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:13.314 01:02:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:13.314 01:02:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:13.314 01:02:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:13.314 01:02:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:13.314 01:02:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:13.314 01:02:35 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:13.573 01:02:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:13.573 01:02:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:13.573 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:13.573 "name": "raid_bdev1", 00:36:13.573 "uuid": "e2fd56e0-c876-4825-a4b5-442f946b5a70", 00:36:13.573 "strip_size_kb": 64, 00:36:13.573 "state": "online", 00:36:13.573 "raid_level": "raid5f", 00:36:13.573 "superblock": false, 00:36:13.573 "num_base_bdevs": 4, 00:36:13.573 "num_base_bdevs_discovered": 4, 00:36:13.573 "num_base_bdevs_operational": 4, 00:36:13.573 "base_bdevs_list": [ 00:36:13.573 { 00:36:13.573 "name": "spare", 00:36:13.573 "uuid": "637d2277-6e8c-5f0b-8496-c758e16688f5", 00:36:13.573 "is_configured": true, 00:36:13.573 "data_offset": 0, 00:36:13.573 "data_size": 65536 00:36:13.573 }, 00:36:13.573 { 00:36:13.573 "name": "BaseBdev2", 00:36:13.573 "uuid": "765d51cf-7e1d-5e62-b939-230a16776fd7", 00:36:13.573 "is_configured": true, 00:36:13.573 "data_offset": 0, 00:36:13.573 "data_size": 65536 00:36:13.573 }, 00:36:13.573 { 00:36:13.573 "name": "BaseBdev3", 00:36:13.573 "uuid": "debd27f7-ada6-513e-bfd1-b0962e43e2d3", 00:36:13.573 "is_configured": true, 00:36:13.573 "data_offset": 0, 00:36:13.573 "data_size": 65536 00:36:13.573 }, 00:36:13.573 { 00:36:13.573 "name": "BaseBdev4", 00:36:13.573 "uuid": "33f73105-402a-54ff-ab46-49e75d27c9a6", 00:36:13.573 "is_configured": true, 00:36:13.573 "data_offset": 0, 00:36:13.573 "data_size": 65536 00:36:13.573 } 00:36:13.573 ] 00:36:13.573 }' 00:36:13.573 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:13.832 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:36:13.832 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:13.832 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:36:13.832 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:36:13.832 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:13.832 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:13.832 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:13.832 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:13.832 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:13.832 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:13.832 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:14.090 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:14.090 "name": "raid_bdev1", 00:36:14.090 "uuid": "e2fd56e0-c876-4825-a4b5-442f946b5a70", 00:36:14.090 "strip_size_kb": 64, 00:36:14.090 "state": "online", 00:36:14.090 "raid_level": "raid5f", 00:36:14.090 "superblock": false, 00:36:14.090 "num_base_bdevs": 4, 00:36:14.090 
"num_base_bdevs_discovered": 4, 00:36:14.090 "num_base_bdevs_operational": 4, 00:36:14.090 "base_bdevs_list": [ 00:36:14.090 { 00:36:14.090 "name": "spare", 00:36:14.090 "uuid": "637d2277-6e8c-5f0b-8496-c758e16688f5", 00:36:14.090 "is_configured": true, 00:36:14.090 "data_offset": 0, 00:36:14.090 "data_size": 65536 00:36:14.090 }, 00:36:14.090 { 00:36:14.090 "name": "BaseBdev2", 00:36:14.090 "uuid": "765d51cf-7e1d-5e62-b939-230a16776fd7", 00:36:14.090 "is_configured": true, 00:36:14.090 "data_offset": 0, 00:36:14.090 "data_size": 65536 00:36:14.090 }, 00:36:14.090 { 00:36:14.090 "name": "BaseBdev3", 00:36:14.090 "uuid": "debd27f7-ada6-513e-bfd1-b0962e43e2d3", 00:36:14.090 "is_configured": true, 00:36:14.090 "data_offset": 0, 00:36:14.090 "data_size": 65536 00:36:14.090 }, 00:36:14.090 { 00:36:14.090 "name": "BaseBdev4", 00:36:14.090 "uuid": "33f73105-402a-54ff-ab46-49e75d27c9a6", 00:36:14.090 "is_configured": true, 00:36:14.090 "data_offset": 0, 00:36:14.090 "data_size": 65536 00:36:14.090 } 00:36:14.090 ] 00:36:14.090 }' 00:36:14.090 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:14.090 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:14.090 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:14.090 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:14.090 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:36:14.090 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:14.090 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:14.090 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:14.090 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:14.090 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:36:14.090 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:14.090 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:14.090 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:14.090 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:14.090 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:14.090 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:14.347 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:14.347 "name": "raid_bdev1", 00:36:14.347 "uuid": "e2fd56e0-c876-4825-a4b5-442f946b5a70", 00:36:14.347 "strip_size_kb": 64, 00:36:14.347 "state": "online", 00:36:14.347 "raid_level": "raid5f", 00:36:14.347 "superblock": false, 00:36:14.347 "num_base_bdevs": 4, 00:36:14.347 "num_base_bdevs_discovered": 4, 00:36:14.347 "num_base_bdevs_operational": 4, 00:36:14.347 "base_bdevs_list": [ 00:36:14.347 { 00:36:14.347 "name": "spare", 00:36:14.347 "uuid": "637d2277-6e8c-5f0b-8496-c758e16688f5", 00:36:14.347 "is_configured": true, 00:36:14.347 "data_offset": 0, 00:36:14.347 
"data_size": 65536 00:36:14.347 }, 00:36:14.347 { 00:36:14.347 "name": "BaseBdev2", 00:36:14.347 "uuid": "765d51cf-7e1d-5e62-b939-230a16776fd7", 00:36:14.347 "is_configured": true, 00:36:14.347 "data_offset": 0, 00:36:14.347 "data_size": 65536 00:36:14.347 }, 00:36:14.347 { 00:36:14.347 "name": "BaseBdev3", 00:36:14.347 "uuid": "debd27f7-ada6-513e-bfd1-b0962e43e2d3", 00:36:14.347 "is_configured": true, 00:36:14.348 "data_offset": 0, 00:36:14.348 "data_size": 65536 00:36:14.348 }, 00:36:14.348 { 00:36:14.348 "name": "BaseBdev4", 00:36:14.348 "uuid": "33f73105-402a-54ff-ab46-49e75d27c9a6", 00:36:14.348 "is_configured": true, 00:36:14.348 "data_offset": 0, 00:36:14.348 "data_size": 65536 00:36:14.348 } 00:36:14.348 ] 00:36:14.348 }' 00:36:14.348 01:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:14.348 01:02:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.912 01:02:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:14.912 [2024-07-25 01:02:37.559683] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:14.912 [2024-07-25 01:02:37.559879] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:14.912 [2024-07-25 01:02:37.560085] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:14.912 [2024-07-25 01:02:37.560205] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:14.912 [2024-07-25 01:02:37.560392] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:36:15.168 01:02:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:15.168 01:02:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:36:15.168 01:02:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:36:15.168 01:02:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:36:15.168 01:02:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:36:15.168 01:02:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:36:15.168 01:02:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:15.168 01:02:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:36:15.168 01:02:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:15.168 01:02:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:15.168 01:02:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:15.168 01:02:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:36:15.168 01:02:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:15.168 01:02:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:15.168 01:02:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:36:15.425 
/dev/nbd0 00:36:15.425 01:02:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:15.425 01:02:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:15.425 01:02:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:36:15.425 01:02:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:36:15.425 01:02:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:36:15.425 01:02:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:36:15.425 01:02:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:36:15.425 01:02:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:36:15.425 01:02:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:36:15.425 01:02:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:36:15.425 01:02:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:15.425 1+0 records in 00:36:15.425 1+0 records out 00:36:15.425 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498699 s, 8.2 MB/s 00:36:15.425 01:02:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:15.682 01:02:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:36:15.682 01:02:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:15.682 01:02:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:36:15.682 01:02:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:36:15.682 01:02:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:15.682 01:02:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:15.682 01:02:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:36:15.940 /dev/nbd1 00:36:15.940 01:02:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:36:15.940 01:02:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:36:15.940 01:02:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:36:15.940 01:02:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:36:15.940 01:02:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:36:15.940 01:02:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:36:15.940 01:02:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:36:15.940 01:02:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:36:15.940 01:02:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:36:15.940 01:02:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:36:15.940 01:02:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:36:15.940 1+0 records in 00:36:15.940 1+0 records out 00:36:15.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458486 s, 8.9 MB/s 00:36:15.940 01:02:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:15.940 01:02:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:36:15.940 01:02:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:15.940 01:02:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:36:15.940 01:02:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:36:15.940 01:02:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:15.940 01:02:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:15.940 01:02:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:36:15.940 01:02:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:36:15.940 01:02:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:15.940 01:02:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:15.940 01:02:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:15.940 01:02:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:36:15.940 01:02:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:15.940 01:02:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:36:16.199 01:02:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:16.199 01:02:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:16.199 01:02:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:16.199 01:02:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:16.199 01:02:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:16.199 01:02:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:16.199 01:02:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:36:16.199 01:02:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:36:16.199 01:02:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:16.199 01:02:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:36:16.457 01:02:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:36:16.457 01:02:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:36:16.457 01:02:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:36:16.457 01:02:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:16.457 01:02:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:16.457 01:02:39 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:36:16.457 01:02:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:36:16.457 01:02:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:36:16.457 01:02:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:36:16.457 01:02:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 157499 00:36:16.457 01:02:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 157499 ']' 00:36:16.457 01:02:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 157499 00:36:16.457 01:02:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:36:16.457 01:02:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:16.457 01:02:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 157499 00:36:16.457 01:02:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:16.457 01:02:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:16.457 killing process with pid 157499 00:36:16.457 01:02:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 157499' 00:36:16.457 01:02:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@967 -- # kill 157499 00:36:16.457 Received shutdown signal, test time was about 60.000000 seconds 00:36:16.457 00:36:16.457 Latency(us) 00:36:16.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:16.457 =================================================================================================================== 00:36:16.457 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:16.457 [2024-07-25 01:02:39.064249] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:16.457 01:02:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # wait 157499 00:36:17.024 [2024-07-25 01:02:39.509316] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:36:18.397 00:36:18.397 real 0m25.762s 00:36:18.397 user 0m37.001s 00:36:18.397 sys 0m3.177s 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:18.397 ************************************ 00:36:18.397 END TEST raid5f_rebuild_test 00:36:18.397 ************************************ 00:36:18.397 01:02:40 bdev_raid -- bdev/bdev_raid.sh@891 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:36:18.397 01:02:40 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:36:18.397 01:02:40 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:18.397 01:02:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:18.397 ************************************ 00:36:18.397 START TEST raid5f_rebuild_test_sb 00:36:18.397 ************************************ 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid5f 4 true false true 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:36:18.397 01:02:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=158128 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 158128 /var/tmp/spdk-raid.sock 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@829 -- # '[' -z 158128 ']' 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:18.397 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:18.398 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:18.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:18.398 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:18.398 01:02:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:18.398 [2024-07-25 01:02:40.861052] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:36:18.398 I/O size of 3145728 is greater than zero copy threshold (65536). 00:36:18.398 Zero copy mechanism will not be used. 00:36:18.398 [2024-07-25 01:02:40.861247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158128 ] 00:36:18.398 [2024-07-25 01:02:41.039230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:18.655 [2024-07-25 01:02:41.218885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:18.913 [2024-07-25 01:02:41.408203] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:19.171 01:02:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:19.171 01:02:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:36:19.171 01:02:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:36:19.171 01:02:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:36:19.429 BaseBdev1_malloc 00:36:19.429 01:02:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:36:19.686 [2024-07-25 01:02:42.117269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:36:19.686 [2024-07-25 01:02:42.117370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:19.686 [2024-07-25 01:02:42.117417] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:36:19.686 [2024-07-25 01:02:42.117437] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:19.686 [2024-07-25 01:02:42.119699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:19.686 [2024-07-25 01:02:42.119766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:19.687 BaseBdev1 00:36:19.687 01:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in 
"${base_bdevs[@]}" 00:36:19.687 01:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:36:19.944 BaseBdev2_malloc 00:36:19.944 01:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:36:20.203 [2024-07-25 01:02:42.604712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:36:20.203 [2024-07-25 01:02:42.604830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:20.203 [2024-07-25 01:02:42.604869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:36:20.203 [2024-07-25 01:02:42.604890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:20.203 [2024-07-25 01:02:42.607129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:20.203 [2024-07-25 01:02:42.607194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:36:20.203 BaseBdev2 00:36:20.203 01:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:36:20.203 01:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:36:20.203 BaseBdev3_malloc 00:36:20.203 01:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:36:20.461 [2024-07-25 01:02:43.062804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:36:20.461 [2024-07-25 01:02:43.062923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:20.461 [2024-07-25 01:02:43.062960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:36:20.461 [2024-07-25 01:02:43.062984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:20.461 [2024-07-25 01:02:43.065216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:20.461 [2024-07-25 01:02:43.065275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:36:20.461 BaseBdev3 00:36:20.461 01:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:36:20.461 01:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:36:20.893 BaseBdev4_malloc 00:36:20.893 01:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:36:20.893 [2024-07-25 01:02:43.460498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:36:20.893 [2024-07-25 01:02:43.460603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:20.893 [2024-07-25 01:02:43.460638] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:36:20.893 [2024-07-25 01:02:43.460663] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:20.893 [2024-07-25 01:02:43.462933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:20.893 [2024-07-25 01:02:43.462986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:36:20.893 BaseBdev4 00:36:20.893 01:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:36:21.150 spare_malloc 00:36:21.150 01:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:36:21.407 spare_delay 00:36:21.407 01:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:36:21.407 [2024-07-25 01:02:44.038694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:21.407 [2024-07-25 01:02:44.038786] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:21.407 [2024-07-25 01:02:44.038833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:36:21.407 [2024-07-25 01:02:44.038863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:21.407 [2024-07-25 01:02:44.041122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:21.407 [2024-07-25 01:02:44.041178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:21.407 spare 00:36:21.407 01:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:36:21.665 [2024-07-25 01:02:44.230794] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:21.665 [2024-07-25 01:02:44.232770] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:21.665 [2024-07-25 01:02:44.232858] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:21.665 [2024-07-25 01:02:44.232899] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:36:21.665 [2024-07-25 01:02:44.233104] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:36:21.665 [2024-07-25 01:02:44.233114] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:36:21.665 [2024-07-25 01:02:44.233212] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:36:21.665 [2024-07-25 01:02:44.240389] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:36:21.665 [2024-07-25 01:02:44.240414] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:36:21.665 [2024-07-25 01:02:44.240575] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:21.665 01:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:36:21.665 01:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:21.665 
01:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:21.665 01:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:21.665 01:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:21.665 01:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:36:21.665 01:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:21.665 01:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:21.665 01:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:21.665 01:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:21.665 01:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:21.665 01:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:21.923 01:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:21.923 "name": "raid_bdev1", 00:36:21.923 "uuid": "17e960e0-45c0-4d50-9d62-dc5d991acfa0", 00:36:21.923 "strip_size_kb": 64, 00:36:21.923 "state": "online", 00:36:21.923 "raid_level": "raid5f", 00:36:21.923 "superblock": true, 00:36:21.923 "num_base_bdevs": 4, 00:36:21.923 "num_base_bdevs_discovered": 4, 00:36:21.923 "num_base_bdevs_operational": 4, 00:36:21.923 "base_bdevs_list": [ 00:36:21.923 { 00:36:21.923 "name": "BaseBdev1", 00:36:21.923 "uuid": "cb9fdf27-cfa2-5842-840b-a92a4dc8861c", 00:36:21.923 "is_configured": true, 00:36:21.923 "data_offset": 2048, 00:36:21.923 "data_size": 63488 00:36:21.923 }, 00:36:21.923 { 00:36:21.923 "name": "BaseBdev2", 00:36:21.923 "uuid": "99d77757-43fe-5c21-bd82-0966054d7175", 00:36:21.923 "is_configured": true, 00:36:21.923 "data_offset": 2048, 00:36:21.923 "data_size": 63488 00:36:21.923 }, 00:36:21.923 { 00:36:21.923 "name": "BaseBdev3", 00:36:21.923 "uuid": "51bc04eb-830d-5a1c-ae10-8590026df6a4", 00:36:21.923 "is_configured": true, 00:36:21.923 "data_offset": 2048, 00:36:21.923 "data_size": 63488 00:36:21.923 }, 00:36:21.923 { 00:36:21.923 "name": "BaseBdev4", 00:36:21.923 "uuid": "f8813b36-8d0d-5425-99b1-c699836b82c7", 00:36:21.923 "is_configured": true, 00:36:21.923 "data_offset": 2048, 00:36:21.923 "data_size": 63488 00:36:21.923 } 00:36:21.923 ] 00:36:21.923 }' 00:36:21.923 01:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:21.923 01:02:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:22.489 01:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:36:22.489 01:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:22.747 [2024-07-25 01:02:45.232959] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:22.747 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=190464 00:36:22.747 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:36:22.747 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:23.005 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:36:23.005 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:36:23.005 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:36:23.005 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:36:23.005 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:36:23.005 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:23.005 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:36:23.005 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:23.005 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:36:23.006 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:23.006 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:36:23.006 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:23.006 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:23.006 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:36:23.264 [2024-07-25 01:02:45.684935] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:36:23.264 /dev/nbd0 00:36:23.264 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:23.264 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:23.264 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:36:23.264 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:36:23.264 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:36:23.264 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:36:23.264 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:36:23.264 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:36:23.264 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:36:23.264 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:36:23.264 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:23.264 1+0 records in 00:36:23.264 1+0 records out 00:36:23.264 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023831 s, 17.2 MB/s 00:36:23.264 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:23.264 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:36:23.264 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:23.264 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:36:23.264 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:36:23.264 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:23.264 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:23.264 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:36:23.264 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # write_unit_size=384 00:36:23.264 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # echo 192 00:36:23.264 01:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:36:23.832 496+0 records in 00:36:23.832 496+0 records out 00:36:23.832 97517568 bytes (98 MB, 93 MiB) copied, 0.486012 s, 201 MB/s 00:36:23.832 01:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:36:23.832 01:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:23.832 01:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:36:23.832 01:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:23.832 01:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:36:23.832 01:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:23.832 01:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:36:24.090 [2024-07-25 01:02:46.559047] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:24.090 01:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:24.090 01:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:24.090 01:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:24.090 01:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:24.090 01:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:24.090 01:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:24.090 01:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:36:24.090 01:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:36:24.091 01:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:36:24.091 [2024-07-25 01:02:46.736175] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:24.349 01:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:24.349 01:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:24.349 01:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:24.349 01:02:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:24.349 01:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:24.349 01:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:36:24.349 01:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:24.349 01:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:24.349 01:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:24.350 01:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:24.350 01:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:24.350 01:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:24.608 01:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:24.608 "name": "raid_bdev1", 00:36:24.608 "uuid": "17e960e0-45c0-4d50-9d62-dc5d991acfa0", 00:36:24.608 "strip_size_kb": 64, 00:36:24.608 "state": "online", 00:36:24.608 "raid_level": "raid5f", 00:36:24.608 "superblock": true, 00:36:24.608 "num_base_bdevs": 4, 00:36:24.608 "num_base_bdevs_discovered": 3, 00:36:24.608 "num_base_bdevs_operational": 3, 00:36:24.608 "base_bdevs_list": [ 00:36:24.608 { 00:36:24.608 "name": null, 00:36:24.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:24.608 "is_configured": false, 00:36:24.608 "data_offset": 2048, 00:36:24.608 "data_size": 63488 00:36:24.608 }, 00:36:24.608 { 00:36:24.608 "name": "BaseBdev2", 00:36:24.608 "uuid": "99d77757-43fe-5c21-bd82-0966054d7175", 00:36:24.608 "is_configured": true, 00:36:24.608 "data_offset": 2048, 00:36:24.608 "data_size": 63488 00:36:24.608 }, 00:36:24.608 { 00:36:24.608 "name": "BaseBdev3", 00:36:24.608 "uuid": "51bc04eb-830d-5a1c-ae10-8590026df6a4", 00:36:24.608 "is_configured": true, 00:36:24.608 "data_offset": 2048, 00:36:24.609 "data_size": 63488 00:36:24.609 }, 00:36:24.609 { 00:36:24.609 "name": "BaseBdev4", 00:36:24.609 "uuid": "f8813b36-8d0d-5425-99b1-c699836b82c7", 00:36:24.609 "is_configured": true, 00:36:24.609 "data_offset": 2048, 00:36:24.609 "data_size": 63488 00:36:24.609 } 00:36:24.609 ] 00:36:24.609 }' 00:36:24.609 01:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:24.609 01:02:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:25.176 01:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:25.176 [2024-07-25 01:02:47.812366] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:25.176 [2024-07-25 01:02:47.827204] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a980 00:36:25.435 [2024-07-25 01:02:47.836139] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:25.435 01:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:36:26.371 01:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:26.371 01:02:48 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:26.371 01:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:26.371 01:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:26.371 01:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:26.371 01:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:26.371 01:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:26.629 01:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:26.629 "name": "raid_bdev1", 00:36:26.629 "uuid": "17e960e0-45c0-4d50-9d62-dc5d991acfa0", 00:36:26.629 "strip_size_kb": 64, 00:36:26.629 "state": "online", 00:36:26.629 "raid_level": "raid5f", 00:36:26.629 "superblock": true, 00:36:26.629 "num_base_bdevs": 4, 00:36:26.629 "num_base_bdevs_discovered": 4, 00:36:26.629 "num_base_bdevs_operational": 4, 00:36:26.629 "process": { 00:36:26.629 "type": "rebuild", 00:36:26.629 "target": "spare", 00:36:26.629 "progress": { 00:36:26.630 "blocks": 23040, 00:36:26.630 "percent": 12 00:36:26.630 } 00:36:26.630 }, 00:36:26.630 "base_bdevs_list": [ 00:36:26.630 { 00:36:26.630 "name": "spare", 00:36:26.630 "uuid": "ec9440a0-f567-52ae-84f7-db14f8a31738", 00:36:26.630 "is_configured": true, 00:36:26.630 "data_offset": 2048, 00:36:26.630 "data_size": 63488 00:36:26.630 }, 00:36:26.630 { 00:36:26.630 "name": "BaseBdev2", 00:36:26.630 "uuid": "99d77757-43fe-5c21-bd82-0966054d7175", 00:36:26.630 "is_configured": true, 00:36:26.630 "data_offset": 2048, 00:36:26.630 "data_size": 63488 00:36:26.630 }, 00:36:26.630 { 00:36:26.630 "name": "BaseBdev3", 00:36:26.630 "uuid": "51bc04eb-830d-5a1c-ae10-8590026df6a4", 00:36:26.630 "is_configured": true, 00:36:26.630 "data_offset": 2048, 00:36:26.630 "data_size": 63488 00:36:26.630 }, 00:36:26.630 { 00:36:26.630 "name": "BaseBdev4", 00:36:26.630 "uuid": "f8813b36-8d0d-5425-99b1-c699836b82c7", 00:36:26.630 "is_configured": true, 00:36:26.630 "data_offset": 2048, 00:36:26.630 "data_size": 63488 00:36:26.630 } 00:36:26.630 ] 00:36:26.630 }' 00:36:26.630 01:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:26.630 01:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:26.630 01:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:26.630 01:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:26.630 01:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:36:26.888 [2024-07-25 01:02:49.409646] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:26.888 [2024-07-25 01:02:49.448059] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:26.889 [2024-07-25 01:02:49.448170] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:26.889 [2024-07-25 01:02:49.448189] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:26.889 [2024-07-25 01:02:49.448197] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: 
*ERROR*: Failed to remove target bdev: No such device 00:36:26.889 01:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:26.889 01:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:26.889 01:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:26.889 01:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:26.889 01:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:26.889 01:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:36:26.889 01:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:26.889 01:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:26.889 01:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:26.889 01:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:26.889 01:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:26.889 01:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:27.147 01:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:27.147 "name": "raid_bdev1", 00:36:27.147 "uuid": "17e960e0-45c0-4d50-9d62-dc5d991acfa0", 00:36:27.147 "strip_size_kb": 64, 00:36:27.147 "state": "online", 00:36:27.147 "raid_level": "raid5f", 00:36:27.147 "superblock": true, 00:36:27.147 "num_base_bdevs": 4, 00:36:27.147 "num_base_bdevs_discovered": 3, 00:36:27.147 "num_base_bdevs_operational": 3, 00:36:27.147 "base_bdevs_list": [ 00:36:27.147 { 00:36:27.147 "name": null, 00:36:27.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:27.147 "is_configured": false, 00:36:27.147 "data_offset": 2048, 00:36:27.147 "data_size": 63488 00:36:27.147 }, 00:36:27.147 { 00:36:27.147 "name": "BaseBdev2", 00:36:27.147 "uuid": "99d77757-43fe-5c21-bd82-0966054d7175", 00:36:27.147 "is_configured": true, 00:36:27.147 "data_offset": 2048, 00:36:27.147 "data_size": 63488 00:36:27.147 }, 00:36:27.147 { 00:36:27.147 "name": "BaseBdev3", 00:36:27.147 "uuid": "51bc04eb-830d-5a1c-ae10-8590026df6a4", 00:36:27.147 "is_configured": true, 00:36:27.147 "data_offset": 2048, 00:36:27.147 "data_size": 63488 00:36:27.147 }, 00:36:27.147 { 00:36:27.147 "name": "BaseBdev4", 00:36:27.147 "uuid": "f8813b36-8d0d-5425-99b1-c699836b82c7", 00:36:27.147 "is_configured": true, 00:36:27.147 "data_offset": 2048, 00:36:27.147 "data_size": 63488 00:36:27.147 } 00:36:27.147 ] 00:36:27.147 }' 00:36:27.147 01:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:27.147 01:02:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:27.714 01:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:27.715 01:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:27.715 01:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:27.715 01:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 
-- # local target=none 00:36:27.715 01:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:27.715 01:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:27.715 01:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:27.973 01:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:27.973 "name": "raid_bdev1", 00:36:27.973 "uuid": "17e960e0-45c0-4d50-9d62-dc5d991acfa0", 00:36:27.973 "strip_size_kb": 64, 00:36:27.973 "state": "online", 00:36:27.973 "raid_level": "raid5f", 00:36:27.973 "superblock": true, 00:36:27.973 "num_base_bdevs": 4, 00:36:27.973 "num_base_bdevs_discovered": 3, 00:36:27.973 "num_base_bdevs_operational": 3, 00:36:27.973 "base_bdevs_list": [ 00:36:27.973 { 00:36:27.973 "name": null, 00:36:27.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:27.973 "is_configured": false, 00:36:27.973 "data_offset": 2048, 00:36:27.973 "data_size": 63488 00:36:27.973 }, 00:36:27.973 { 00:36:27.973 "name": "BaseBdev2", 00:36:27.973 "uuid": "99d77757-43fe-5c21-bd82-0966054d7175", 00:36:27.973 "is_configured": true, 00:36:27.973 "data_offset": 2048, 00:36:27.973 "data_size": 63488 00:36:27.973 }, 00:36:27.973 { 00:36:27.973 "name": "BaseBdev3", 00:36:27.973 "uuid": "51bc04eb-830d-5a1c-ae10-8590026df6a4", 00:36:27.973 "is_configured": true, 00:36:27.973 "data_offset": 2048, 00:36:27.973 "data_size": 63488 00:36:27.973 }, 00:36:27.973 { 00:36:27.973 "name": "BaseBdev4", 00:36:27.973 "uuid": "f8813b36-8d0d-5425-99b1-c699836b82c7", 00:36:27.973 "is_configured": true, 00:36:27.973 "data_offset": 2048, 00:36:27.973 "data_size": 63488 00:36:27.973 } 00:36:27.973 ] 00:36:27.973 }' 00:36:27.974 01:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:27.974 01:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:27.974 01:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:27.974 01:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:27.974 01:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:28.232 [2024-07-25 01:02:50.636408] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:28.232 [2024-07-25 01:02:50.650141] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:36:28.232 [2024-07-25 01:02:50.659450] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:28.232 01:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:36:29.168 01:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:29.168 01:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:29.168 01:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:29.168 01:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:29.168 01:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local 
raid_bdev_info 00:36:29.168 01:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:29.168 01:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:29.427 01:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:29.427 "name": "raid_bdev1", 00:36:29.427 "uuid": "17e960e0-45c0-4d50-9d62-dc5d991acfa0", 00:36:29.427 "strip_size_kb": 64, 00:36:29.427 "state": "online", 00:36:29.427 "raid_level": "raid5f", 00:36:29.427 "superblock": true, 00:36:29.427 "num_base_bdevs": 4, 00:36:29.427 "num_base_bdevs_discovered": 4, 00:36:29.427 "num_base_bdevs_operational": 4, 00:36:29.427 "process": { 00:36:29.427 "type": "rebuild", 00:36:29.427 "target": "spare", 00:36:29.427 "progress": { 00:36:29.427 "blocks": 21120, 00:36:29.427 "percent": 11 00:36:29.427 } 00:36:29.427 }, 00:36:29.427 "base_bdevs_list": [ 00:36:29.427 { 00:36:29.427 "name": "spare", 00:36:29.427 "uuid": "ec9440a0-f567-52ae-84f7-db14f8a31738", 00:36:29.427 "is_configured": true, 00:36:29.427 "data_offset": 2048, 00:36:29.427 "data_size": 63488 00:36:29.427 }, 00:36:29.427 { 00:36:29.427 "name": "BaseBdev2", 00:36:29.427 "uuid": "99d77757-43fe-5c21-bd82-0966054d7175", 00:36:29.427 "is_configured": true, 00:36:29.427 "data_offset": 2048, 00:36:29.427 "data_size": 63488 00:36:29.427 }, 00:36:29.427 { 00:36:29.427 "name": "BaseBdev3", 00:36:29.427 "uuid": "51bc04eb-830d-5a1c-ae10-8590026df6a4", 00:36:29.427 "is_configured": true, 00:36:29.427 "data_offset": 2048, 00:36:29.427 "data_size": 63488 00:36:29.427 }, 00:36:29.427 { 00:36:29.427 "name": "BaseBdev4", 00:36:29.427 "uuid": "f8813b36-8d0d-5425-99b1-c699836b82c7", 00:36:29.427 "is_configured": true, 00:36:29.427 "data_offset": 2048, 00:36:29.427 "data_size": 63488 00:36:29.427 } 00:36:29.427 ] 00:36:29.427 }' 00:36:29.427 01:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:29.427 01:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:29.427 01:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:29.427 01:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:29.427 01:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:36:29.427 01:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:36:29.427 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:36:29.427 01:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:36:29.427 01:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:36:29.427 01:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=1248 00:36:29.427 01:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:29.427 01:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:29.427 01:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:29.427 01:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:29.427 
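[editor's note] The "line 665: [: =: unary operator expected" message captured above is the classic single-bracket pitfall: an unset or empty variable expands to nothing inside `[ ... ]`, so the test collapses to `'[' = false ']'` and `[` complains, while the surrounding `if` simply evaluates false and the script carries on (as the trace shows, execution continues at bdev_raid.sh@690). A minimal, self-contained sketch of the pitfall and the usual fixes follows; the variable name `flag` is a hypothetical stand-in, not the actual variable tested in bdev_raid.sh:

    #!/usr/bin/env bash
    # "flag" is hypothetical; in the failing run the corresponding variable was empty.
    flag=""

    # Unquoted expansion: with flag empty this becomes `[ = false ]` and prints
    # "[: =: unary operator expected" to stderr; the condition evaluates false.
    if [ $flag = false ]; then echo "never reached"; fi

    # Fix 1: quote the expansion so an empty value still yields one word.
    if [ "$flag" = false ]; then echo "flag is false"; fi

    # Fix 2: use [[ ]], which does not word-split or glob the expansion.
    if [[ $flag == false ]]; then echo "flag is false"; fi
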
01:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:29.427 01:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:29.427 01:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:29.427 01:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:29.686 01:02:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:29.686 "name": "raid_bdev1", 00:36:29.686 "uuid": "17e960e0-45c0-4d50-9d62-dc5d991acfa0", 00:36:29.686 "strip_size_kb": 64, 00:36:29.686 "state": "online", 00:36:29.686 "raid_level": "raid5f", 00:36:29.686 "superblock": true, 00:36:29.686 "num_base_bdevs": 4, 00:36:29.686 "num_base_bdevs_discovered": 4, 00:36:29.686 "num_base_bdevs_operational": 4, 00:36:29.686 "process": { 00:36:29.686 "type": "rebuild", 00:36:29.686 "target": "spare", 00:36:29.686 "progress": { 00:36:29.686 "blocks": 26880, 00:36:29.686 "percent": 14 00:36:29.686 } 00:36:29.686 }, 00:36:29.686 "base_bdevs_list": [ 00:36:29.686 { 00:36:29.686 "name": "spare", 00:36:29.686 "uuid": "ec9440a0-f567-52ae-84f7-db14f8a31738", 00:36:29.686 "is_configured": true, 00:36:29.686 "data_offset": 2048, 00:36:29.686 "data_size": 63488 00:36:29.686 }, 00:36:29.686 { 00:36:29.686 "name": "BaseBdev2", 00:36:29.686 "uuid": "99d77757-43fe-5c21-bd82-0966054d7175", 00:36:29.686 "is_configured": true, 00:36:29.686 "data_offset": 2048, 00:36:29.686 "data_size": 63488 00:36:29.686 }, 00:36:29.686 { 00:36:29.686 "name": "BaseBdev3", 00:36:29.686 "uuid": "51bc04eb-830d-5a1c-ae10-8590026df6a4", 00:36:29.686 "is_configured": true, 00:36:29.686 "data_offset": 2048, 00:36:29.686 "data_size": 63488 00:36:29.686 }, 00:36:29.686 { 00:36:29.686 "name": "BaseBdev4", 00:36:29.686 "uuid": "f8813b36-8d0d-5425-99b1-c699836b82c7", 00:36:29.686 "is_configured": true, 00:36:29.686 "data_offset": 2048, 00:36:29.686 "data_size": 63488 00:36:29.686 } 00:36:29.686 ] 00:36:29.686 }' 00:36:29.686 01:02:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:29.686 01:02:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:29.686 01:02:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:29.686 01:02:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:29.686 01:02:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:31.063 01:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:31.063 01:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:31.063 01:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:31.063 01:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:31.063 01:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:31.063 01:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:31.063 01:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:36:31.063 01:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:31.063 01:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:31.063 "name": "raid_bdev1", 00:36:31.063 "uuid": "17e960e0-45c0-4d50-9d62-dc5d991acfa0", 00:36:31.063 "strip_size_kb": 64, 00:36:31.063 "state": "online", 00:36:31.063 "raid_level": "raid5f", 00:36:31.063 "superblock": true, 00:36:31.063 "num_base_bdevs": 4, 00:36:31.063 "num_base_bdevs_discovered": 4, 00:36:31.063 "num_base_bdevs_operational": 4, 00:36:31.063 "process": { 00:36:31.063 "type": "rebuild", 00:36:31.063 "target": "spare", 00:36:31.063 "progress": { 00:36:31.063 "blocks": 53760, 00:36:31.063 "percent": 28 00:36:31.063 } 00:36:31.063 }, 00:36:31.063 "base_bdevs_list": [ 00:36:31.063 { 00:36:31.063 "name": "spare", 00:36:31.063 "uuid": "ec9440a0-f567-52ae-84f7-db14f8a31738", 00:36:31.063 "is_configured": true, 00:36:31.063 "data_offset": 2048, 00:36:31.063 "data_size": 63488 00:36:31.063 }, 00:36:31.063 { 00:36:31.063 "name": "BaseBdev2", 00:36:31.063 "uuid": "99d77757-43fe-5c21-bd82-0966054d7175", 00:36:31.063 "is_configured": true, 00:36:31.063 "data_offset": 2048, 00:36:31.063 "data_size": 63488 00:36:31.063 }, 00:36:31.063 { 00:36:31.063 "name": "BaseBdev3", 00:36:31.063 "uuid": "51bc04eb-830d-5a1c-ae10-8590026df6a4", 00:36:31.063 "is_configured": true, 00:36:31.063 "data_offset": 2048, 00:36:31.063 "data_size": 63488 00:36:31.063 }, 00:36:31.063 { 00:36:31.063 "name": "BaseBdev4", 00:36:31.063 "uuid": "f8813b36-8d0d-5425-99b1-c699836b82c7", 00:36:31.063 "is_configured": true, 00:36:31.063 "data_offset": 2048, 00:36:31.063 "data_size": 63488 00:36:31.063 } 00:36:31.063 ] 00:36:31.063 }' 00:36:31.063 01:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:31.063 01:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:31.063 01:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:31.063 01:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:31.063 01:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:31.998 01:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:31.998 01:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:31.998 01:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:31.998 01:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:31.998 01:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:31.998 01:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:31.998 01:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:31.998 01:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:32.565 01:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:32.565 "name": "raid_bdev1", 00:36:32.565 "uuid": "17e960e0-45c0-4d50-9d62-dc5d991acfa0", 00:36:32.565 
"strip_size_kb": 64, 00:36:32.565 "state": "online", 00:36:32.565 "raid_level": "raid5f", 00:36:32.565 "superblock": true, 00:36:32.565 "num_base_bdevs": 4, 00:36:32.565 "num_base_bdevs_discovered": 4, 00:36:32.565 "num_base_bdevs_operational": 4, 00:36:32.565 "process": { 00:36:32.565 "type": "rebuild", 00:36:32.565 "target": "spare", 00:36:32.565 "progress": { 00:36:32.565 "blocks": 80640, 00:36:32.566 "percent": 42 00:36:32.566 } 00:36:32.566 }, 00:36:32.566 "base_bdevs_list": [ 00:36:32.566 { 00:36:32.566 "name": "spare", 00:36:32.566 "uuid": "ec9440a0-f567-52ae-84f7-db14f8a31738", 00:36:32.566 "is_configured": true, 00:36:32.566 "data_offset": 2048, 00:36:32.566 "data_size": 63488 00:36:32.566 }, 00:36:32.566 { 00:36:32.566 "name": "BaseBdev2", 00:36:32.566 "uuid": "99d77757-43fe-5c21-bd82-0966054d7175", 00:36:32.566 "is_configured": true, 00:36:32.566 "data_offset": 2048, 00:36:32.566 "data_size": 63488 00:36:32.566 }, 00:36:32.566 { 00:36:32.566 "name": "BaseBdev3", 00:36:32.566 "uuid": "51bc04eb-830d-5a1c-ae10-8590026df6a4", 00:36:32.566 "is_configured": true, 00:36:32.566 "data_offset": 2048, 00:36:32.566 "data_size": 63488 00:36:32.566 }, 00:36:32.566 { 00:36:32.566 "name": "BaseBdev4", 00:36:32.566 "uuid": "f8813b36-8d0d-5425-99b1-c699836b82c7", 00:36:32.566 "is_configured": true, 00:36:32.566 "data_offset": 2048, 00:36:32.566 "data_size": 63488 00:36:32.566 } 00:36:32.566 ] 00:36:32.566 }' 00:36:32.566 01:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:32.566 01:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:32.566 01:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:32.566 01:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:32.566 01:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:33.502 01:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:33.503 01:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:33.503 01:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:33.503 01:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:33.503 01:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:33.503 01:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:33.503 01:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:33.503 01:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:33.762 01:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:33.762 "name": "raid_bdev1", 00:36:33.762 "uuid": "17e960e0-45c0-4d50-9d62-dc5d991acfa0", 00:36:33.762 "strip_size_kb": 64, 00:36:33.762 "state": "online", 00:36:33.762 "raid_level": "raid5f", 00:36:33.762 "superblock": true, 00:36:33.762 "num_base_bdevs": 4, 00:36:33.762 "num_base_bdevs_discovered": 4, 00:36:33.762 "num_base_bdevs_operational": 4, 00:36:33.762 "process": { 00:36:33.762 "type": "rebuild", 00:36:33.762 "target": "spare", 00:36:33.762 "progress": { 
00:36:33.762 "blocks": 105600, 00:36:33.762 "percent": 55 00:36:33.762 } 00:36:33.762 }, 00:36:33.762 "base_bdevs_list": [ 00:36:33.762 { 00:36:33.762 "name": "spare", 00:36:33.762 "uuid": "ec9440a0-f567-52ae-84f7-db14f8a31738", 00:36:33.762 "is_configured": true, 00:36:33.762 "data_offset": 2048, 00:36:33.762 "data_size": 63488 00:36:33.762 }, 00:36:33.762 { 00:36:33.762 "name": "BaseBdev2", 00:36:33.762 "uuid": "99d77757-43fe-5c21-bd82-0966054d7175", 00:36:33.762 "is_configured": true, 00:36:33.762 "data_offset": 2048, 00:36:33.762 "data_size": 63488 00:36:33.762 }, 00:36:33.762 { 00:36:33.762 "name": "BaseBdev3", 00:36:33.762 "uuid": "51bc04eb-830d-5a1c-ae10-8590026df6a4", 00:36:33.762 "is_configured": true, 00:36:33.762 "data_offset": 2048, 00:36:33.762 "data_size": 63488 00:36:33.762 }, 00:36:33.762 { 00:36:33.762 "name": "BaseBdev4", 00:36:33.762 "uuid": "f8813b36-8d0d-5425-99b1-c699836b82c7", 00:36:33.762 "is_configured": true, 00:36:33.762 "data_offset": 2048, 00:36:33.762 "data_size": 63488 00:36:33.762 } 00:36:33.762 ] 00:36:33.762 }' 00:36:33.762 01:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:33.762 01:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:33.762 01:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:33.762 01:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:33.762 01:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:34.698 01:02:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:34.698 01:02:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:34.698 01:02:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:34.698 01:02:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:34.698 01:02:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:34.698 01:02:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:34.698 01:02:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:34.698 01:02:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:34.975 01:02:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:34.975 "name": "raid_bdev1", 00:36:34.975 "uuid": "17e960e0-45c0-4d50-9d62-dc5d991acfa0", 00:36:34.975 "strip_size_kb": 64, 00:36:34.975 "state": "online", 00:36:34.975 "raid_level": "raid5f", 00:36:34.975 "superblock": true, 00:36:34.975 "num_base_bdevs": 4, 00:36:34.975 "num_base_bdevs_discovered": 4, 00:36:34.975 "num_base_bdevs_operational": 4, 00:36:34.975 "process": { 00:36:34.975 "type": "rebuild", 00:36:34.975 "target": "spare", 00:36:34.975 "progress": { 00:36:34.975 "blocks": 130560, 00:36:34.975 "percent": 68 00:36:34.975 } 00:36:34.975 }, 00:36:34.975 "base_bdevs_list": [ 00:36:34.975 { 00:36:34.975 "name": "spare", 00:36:34.975 "uuid": "ec9440a0-f567-52ae-84f7-db14f8a31738", 00:36:34.975 "is_configured": true, 00:36:34.975 "data_offset": 2048, 00:36:34.975 "data_size": 63488 00:36:34.975 }, 00:36:34.975 { 
00:36:34.975 "name": "BaseBdev2", 00:36:34.975 "uuid": "99d77757-43fe-5c21-bd82-0966054d7175", 00:36:34.975 "is_configured": true, 00:36:34.975 "data_offset": 2048, 00:36:34.975 "data_size": 63488 00:36:34.975 }, 00:36:34.975 { 00:36:34.975 "name": "BaseBdev3", 00:36:34.975 "uuid": "51bc04eb-830d-5a1c-ae10-8590026df6a4", 00:36:34.975 "is_configured": true, 00:36:34.975 "data_offset": 2048, 00:36:34.975 "data_size": 63488 00:36:34.975 }, 00:36:34.975 { 00:36:34.975 "name": "BaseBdev4", 00:36:34.975 "uuid": "f8813b36-8d0d-5425-99b1-c699836b82c7", 00:36:34.975 "is_configured": true, 00:36:34.975 "data_offset": 2048, 00:36:34.975 "data_size": 63488 00:36:34.975 } 00:36:34.975 ] 00:36:34.975 }' 00:36:34.975 01:02:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:35.244 01:02:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:35.244 01:02:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:35.244 01:02:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:35.244 01:02:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:36.181 01:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:36.181 01:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:36.181 01:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:36.181 01:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:36.181 01:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:36.181 01:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:36.181 01:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:36.181 01:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:36.440 01:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:36.440 "name": "raid_bdev1", 00:36:36.440 "uuid": "17e960e0-45c0-4d50-9d62-dc5d991acfa0", 00:36:36.440 "strip_size_kb": 64, 00:36:36.440 "state": "online", 00:36:36.440 "raid_level": "raid5f", 00:36:36.440 "superblock": true, 00:36:36.440 "num_base_bdevs": 4, 00:36:36.440 "num_base_bdevs_discovered": 4, 00:36:36.440 "num_base_bdevs_operational": 4, 00:36:36.440 "process": { 00:36:36.440 "type": "rebuild", 00:36:36.440 "target": "spare", 00:36:36.440 "progress": { 00:36:36.440 "blocks": 155520, 00:36:36.440 "percent": 81 00:36:36.440 } 00:36:36.440 }, 00:36:36.440 "base_bdevs_list": [ 00:36:36.440 { 00:36:36.440 "name": "spare", 00:36:36.440 "uuid": "ec9440a0-f567-52ae-84f7-db14f8a31738", 00:36:36.440 "is_configured": true, 00:36:36.440 "data_offset": 2048, 00:36:36.440 "data_size": 63488 00:36:36.440 }, 00:36:36.440 { 00:36:36.440 "name": "BaseBdev2", 00:36:36.440 "uuid": "99d77757-43fe-5c21-bd82-0966054d7175", 00:36:36.440 "is_configured": true, 00:36:36.440 "data_offset": 2048, 00:36:36.440 "data_size": 63488 00:36:36.440 }, 00:36:36.440 { 00:36:36.440 "name": "BaseBdev3", 00:36:36.440 "uuid": "51bc04eb-830d-5a1c-ae10-8590026df6a4", 00:36:36.440 "is_configured": true, 
00:36:36.440 "data_offset": 2048, 00:36:36.440 "data_size": 63488 00:36:36.440 }, 00:36:36.440 { 00:36:36.440 "name": "BaseBdev4", 00:36:36.440 "uuid": "f8813b36-8d0d-5425-99b1-c699836b82c7", 00:36:36.440 "is_configured": true, 00:36:36.440 "data_offset": 2048, 00:36:36.440 "data_size": 63488 00:36:36.440 } 00:36:36.440 ] 00:36:36.440 }' 00:36:36.440 01:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:36.440 01:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:36.440 01:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:36.440 01:02:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:36.440 01:02:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:37.818 01:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:37.818 01:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:37.818 01:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:37.818 01:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:37.818 01:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:37.818 01:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:37.818 01:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:37.818 01:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:37.818 01:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:37.818 "name": "raid_bdev1", 00:36:37.818 "uuid": "17e960e0-45c0-4d50-9d62-dc5d991acfa0", 00:36:37.818 "strip_size_kb": 64, 00:36:37.818 "state": "online", 00:36:37.818 "raid_level": "raid5f", 00:36:37.818 "superblock": true, 00:36:37.818 "num_base_bdevs": 4, 00:36:37.818 "num_base_bdevs_discovered": 4, 00:36:37.818 "num_base_bdevs_operational": 4, 00:36:37.818 "process": { 00:36:37.818 "type": "rebuild", 00:36:37.818 "target": "spare", 00:36:37.818 "progress": { 00:36:37.818 "blocks": 182400, 00:36:37.818 "percent": 95 00:36:37.818 } 00:36:37.818 }, 00:36:37.818 "base_bdevs_list": [ 00:36:37.818 { 00:36:37.818 "name": "spare", 00:36:37.818 "uuid": "ec9440a0-f567-52ae-84f7-db14f8a31738", 00:36:37.818 "is_configured": true, 00:36:37.818 "data_offset": 2048, 00:36:37.818 "data_size": 63488 00:36:37.818 }, 00:36:37.818 { 00:36:37.818 "name": "BaseBdev2", 00:36:37.818 "uuid": "99d77757-43fe-5c21-bd82-0966054d7175", 00:36:37.818 "is_configured": true, 00:36:37.818 "data_offset": 2048, 00:36:37.818 "data_size": 63488 00:36:37.818 }, 00:36:37.818 { 00:36:37.818 "name": "BaseBdev3", 00:36:37.818 "uuid": "51bc04eb-830d-5a1c-ae10-8590026df6a4", 00:36:37.818 "is_configured": true, 00:36:37.818 "data_offset": 2048, 00:36:37.818 "data_size": 63488 00:36:37.818 }, 00:36:37.818 { 00:36:37.818 "name": "BaseBdev4", 00:36:37.818 "uuid": "f8813b36-8d0d-5425-99b1-c699836b82c7", 00:36:37.818 "is_configured": true, 00:36:37.818 "data_offset": 2048, 00:36:37.818 "data_size": 63488 00:36:37.818 } 00:36:37.818 ] 00:36:37.818 }' 00:36:37.818 01:03:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:37.818 01:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:37.818 01:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:37.818 01:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:37.818 01:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:38.384 [2024-07-25 01:03:00.735988] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:36:38.384 [2024-07-25 01:03:00.736076] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:36:38.385 [2024-07-25 01:03:00.736211] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:38.951 01:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:38.951 01:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:38.951 01:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:38.951 01:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:38.951 01:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:38.951 01:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:38.951 01:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:38.951 01:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:39.209 01:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:39.209 "name": "raid_bdev1", 00:36:39.209 "uuid": "17e960e0-45c0-4d50-9d62-dc5d991acfa0", 00:36:39.209 "strip_size_kb": 64, 00:36:39.209 "state": "online", 00:36:39.209 "raid_level": "raid5f", 00:36:39.209 "superblock": true, 00:36:39.209 "num_base_bdevs": 4, 00:36:39.209 "num_base_bdevs_discovered": 4, 00:36:39.209 "num_base_bdevs_operational": 4, 00:36:39.209 "base_bdevs_list": [ 00:36:39.209 { 00:36:39.209 "name": "spare", 00:36:39.209 "uuid": "ec9440a0-f567-52ae-84f7-db14f8a31738", 00:36:39.209 "is_configured": true, 00:36:39.209 "data_offset": 2048, 00:36:39.209 "data_size": 63488 00:36:39.209 }, 00:36:39.209 { 00:36:39.209 "name": "BaseBdev2", 00:36:39.209 "uuid": "99d77757-43fe-5c21-bd82-0966054d7175", 00:36:39.209 "is_configured": true, 00:36:39.209 "data_offset": 2048, 00:36:39.209 "data_size": 63488 00:36:39.209 }, 00:36:39.209 { 00:36:39.209 "name": "BaseBdev3", 00:36:39.209 "uuid": "51bc04eb-830d-5a1c-ae10-8590026df6a4", 00:36:39.209 "is_configured": true, 00:36:39.209 "data_offset": 2048, 00:36:39.209 "data_size": 63488 00:36:39.209 }, 00:36:39.209 { 00:36:39.209 "name": "BaseBdev4", 00:36:39.209 "uuid": "f8813b36-8d0d-5425-99b1-c699836b82c7", 00:36:39.209 "is_configured": true, 00:36:39.209 "data_offset": 2048, 00:36:39.209 "data_size": 63488 00:36:39.209 } 00:36:39.209 ] 00:36:39.209 }' 00:36:39.209 01:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:39.209 01:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == 
\r\e\b\u\i\l\d ]] 00:36:39.209 01:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:39.209 01:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:36:39.209 01:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:36:39.210 01:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:39.210 01:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:39.210 01:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:39.210 01:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:39.210 01:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:39.210 01:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:39.210 01:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:39.468 01:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:39.468 "name": "raid_bdev1", 00:36:39.468 "uuid": "17e960e0-45c0-4d50-9d62-dc5d991acfa0", 00:36:39.468 "strip_size_kb": 64, 00:36:39.468 "state": "online", 00:36:39.468 "raid_level": "raid5f", 00:36:39.468 "superblock": true, 00:36:39.468 "num_base_bdevs": 4, 00:36:39.468 "num_base_bdevs_discovered": 4, 00:36:39.468 "num_base_bdevs_operational": 4, 00:36:39.468 "base_bdevs_list": [ 00:36:39.468 { 00:36:39.468 "name": "spare", 00:36:39.468 "uuid": "ec9440a0-f567-52ae-84f7-db14f8a31738", 00:36:39.468 "is_configured": true, 00:36:39.468 "data_offset": 2048, 00:36:39.468 "data_size": 63488 00:36:39.468 }, 00:36:39.468 { 00:36:39.468 "name": "BaseBdev2", 00:36:39.468 "uuid": "99d77757-43fe-5c21-bd82-0966054d7175", 00:36:39.468 "is_configured": true, 00:36:39.468 "data_offset": 2048, 00:36:39.468 "data_size": 63488 00:36:39.468 }, 00:36:39.468 { 00:36:39.468 "name": "BaseBdev3", 00:36:39.468 "uuid": "51bc04eb-830d-5a1c-ae10-8590026df6a4", 00:36:39.468 "is_configured": true, 00:36:39.469 "data_offset": 2048, 00:36:39.469 "data_size": 63488 00:36:39.469 }, 00:36:39.469 { 00:36:39.469 "name": "BaseBdev4", 00:36:39.469 "uuid": "f8813b36-8d0d-5425-99b1-c699836b82c7", 00:36:39.469 "is_configured": true, 00:36:39.469 "data_offset": 2048, 00:36:39.469 "data_size": 63488 00:36:39.469 } 00:36:39.469 ] 00:36:39.469 }' 00:36:39.469 01:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:39.469 01:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:39.469 01:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:39.469 01:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:39.469 01:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:36:39.469 01:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:39.469 01:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:39.469 01:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid5f 00:36:39.469 01:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:39.469 01:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:36:39.469 01:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:39.469 01:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:39.469 01:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:39.469 01:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:39.469 01:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:39.469 01:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:39.728 01:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:39.728 "name": "raid_bdev1", 00:36:39.728 "uuid": "17e960e0-45c0-4d50-9d62-dc5d991acfa0", 00:36:39.728 "strip_size_kb": 64, 00:36:39.728 "state": "online", 00:36:39.728 "raid_level": "raid5f", 00:36:39.728 "superblock": true, 00:36:39.728 "num_base_bdevs": 4, 00:36:39.728 "num_base_bdevs_discovered": 4, 00:36:39.728 "num_base_bdevs_operational": 4, 00:36:39.728 "base_bdevs_list": [ 00:36:39.728 { 00:36:39.728 "name": "spare", 00:36:39.728 "uuid": "ec9440a0-f567-52ae-84f7-db14f8a31738", 00:36:39.728 "is_configured": true, 00:36:39.728 "data_offset": 2048, 00:36:39.728 "data_size": 63488 00:36:39.728 }, 00:36:39.728 { 00:36:39.728 "name": "BaseBdev2", 00:36:39.728 "uuid": "99d77757-43fe-5c21-bd82-0966054d7175", 00:36:39.728 "is_configured": true, 00:36:39.728 "data_offset": 2048, 00:36:39.728 "data_size": 63488 00:36:39.728 }, 00:36:39.728 { 00:36:39.728 "name": "BaseBdev3", 00:36:39.728 "uuid": "51bc04eb-830d-5a1c-ae10-8590026df6a4", 00:36:39.728 "is_configured": true, 00:36:39.728 "data_offset": 2048, 00:36:39.728 "data_size": 63488 00:36:39.728 }, 00:36:39.728 { 00:36:39.728 "name": "BaseBdev4", 00:36:39.728 "uuid": "f8813b36-8d0d-5425-99b1-c699836b82c7", 00:36:39.728 "is_configured": true, 00:36:39.728 "data_offset": 2048, 00:36:39.728 "data_size": 63488 00:36:39.728 } 00:36:39.728 ] 00:36:39.728 }' 00:36:39.728 01:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:39.728 01:03:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:40.295 01:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:40.553 [2024-07-25 01:03:03.004689] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:40.553 [2024-07-25 01:03:03.004721] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:40.553 [2024-07-25 01:03:03.004811] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:40.553 [2024-07-25 01:03:03.004902] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:40.553 [2024-07-25 01:03:03.004913] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:36:40.553 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- 
# jq length 00:36:40.553 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:40.812 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:36:40.812 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:36:40.812 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:36:40.812 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:36:40.812 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:40.812 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:36:40.812 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:40.812 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:40.812 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:40.812 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:36:40.812 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:40.812 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:40.812 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:36:41.070 /dev/nbd0 00:36:41.070 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:41.070 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:41.070 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:36:41.070 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:36:41.070 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:36:41.070 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:36:41.070 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:36:41.070 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:36:41.070 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:36:41.070 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:36:41.070 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:41.070 1+0 records in 00:36:41.070 1+0 records out 00:36:41.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000176724 s, 23.2 MB/s 00:36:41.070 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:41.070 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:36:41.070 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:41.070 01:03:03 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:36:41.070 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:36:41.070 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:41.070 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:41.070 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:36:41.070 /dev/nbd1 00:36:41.328 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:36:41.328 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:36:41.328 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:36:41.328 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:36:41.328 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:36:41.328 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:36:41.328 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:36:41.328 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:36:41.328 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:36:41.328 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:36:41.328 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:41.328 1+0 records in 00:36:41.328 1+0 records out 00:36:41.328 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271566 s, 15.1 MB/s 00:36:41.328 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:41.328 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:36:41.328 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:41.328 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:36:41.328 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:36:41.328 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:41.328 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:41.328 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:36:41.328 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:36:41.328 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:41.328 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:41.328 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:41.328 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:36:41.328 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:41.328 01:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:36:41.586 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:41.586 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:41.586 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:41.586 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:41.586 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:41.586 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:41.586 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:36:41.586 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:36:41.586 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:41.586 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:36:41.845 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:36:41.845 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:36:41.845 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:36:41.845 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:41.845 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:41.845 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:36:41.845 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:36:41.845 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:36:41.845 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:36:41.845 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:36:42.103 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:36:42.362 [2024-07-25 01:03:04.823004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:42.362 [2024-07-25 01:03:04.823099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:42.362 [2024-07-25 01:03:04.823143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:36:42.362 [2024-07-25 01:03:04.823171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:42.362 [2024-07-25 01:03:04.825525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:42.362 [2024-07-25 01:03:04.825602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:42.362 [2024-07-25 01:03:04.825737] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:36:42.362 [2024-07-25 01:03:04.825792] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:42.362 [2024-07-25 01:03:04.825952] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:42.362 [2024-07-25 01:03:04.826032] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:42.362 [2024-07-25 01:03:04.826109] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:36:42.362 spare 00:36:42.362 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:36:42.362 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:42.362 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:42.362 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:42.362 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:42.362 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:36:42.362 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:42.362 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:42.362 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:42.362 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:42.362 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:42.362 01:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:42.362 [2024-07-25 01:03:04.926198] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:36:42.362 [2024-07-25 01:03:04.926225] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:36:42.362 [2024-07-25 01:03:04.926373] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049440 00:36:42.362 [2024-07-25 01:03:04.933210] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:36:42.362 [2024-07-25 01:03:04.933237] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:36:42.362 [2024-07-25 01:03:04.933400] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:42.621 01:03:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:42.621 "name": "raid_bdev1", 00:36:42.621 "uuid": "17e960e0-45c0-4d50-9d62-dc5d991acfa0", 00:36:42.621 "strip_size_kb": 64, 00:36:42.621 "state": "online", 00:36:42.621 "raid_level": "raid5f", 00:36:42.621 "superblock": true, 00:36:42.621 "num_base_bdevs": 4, 00:36:42.621 "num_base_bdevs_discovered": 4, 00:36:42.621 "num_base_bdevs_operational": 4, 00:36:42.621 "base_bdevs_list": [ 00:36:42.621 { 00:36:42.621 "name": "spare", 00:36:42.621 "uuid": "ec9440a0-f567-52ae-84f7-db14f8a31738", 00:36:42.621 "is_configured": true, 00:36:42.621 "data_offset": 2048, 00:36:42.621 "data_size": 63488 00:36:42.621 }, 00:36:42.621 { 00:36:42.621 "name": "BaseBdev2", 00:36:42.621 "uuid": "99d77757-43fe-5c21-bd82-0966054d7175", 00:36:42.621 "is_configured": true, 
00:36:42.621 "data_offset": 2048, 00:36:42.621 "data_size": 63488 00:36:42.621 }, 00:36:42.621 { 00:36:42.621 "name": "BaseBdev3", 00:36:42.621 "uuid": "51bc04eb-830d-5a1c-ae10-8590026df6a4", 00:36:42.621 "is_configured": true, 00:36:42.621 "data_offset": 2048, 00:36:42.621 "data_size": 63488 00:36:42.621 }, 00:36:42.621 { 00:36:42.621 "name": "BaseBdev4", 00:36:42.621 "uuid": "f8813b36-8d0d-5425-99b1-c699836b82c7", 00:36:42.621 "is_configured": true, 00:36:42.621 "data_offset": 2048, 00:36:42.621 "data_size": 63488 00:36:42.622 } 00:36:42.622 ] 00:36:42.622 }' 00:36:42.622 01:03:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:42.622 01:03:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:43.189 01:03:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:43.189 01:03:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:43.189 01:03:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:43.189 01:03:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:43.189 01:03:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:43.189 01:03:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:43.189 01:03:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:43.448 01:03:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:43.448 "name": "raid_bdev1", 00:36:43.448 "uuid": "17e960e0-45c0-4d50-9d62-dc5d991acfa0", 00:36:43.448 "strip_size_kb": 64, 00:36:43.448 "state": "online", 00:36:43.448 "raid_level": "raid5f", 00:36:43.448 "superblock": true, 00:36:43.448 "num_base_bdevs": 4, 00:36:43.448 "num_base_bdevs_discovered": 4, 00:36:43.448 "num_base_bdevs_operational": 4, 00:36:43.448 "base_bdevs_list": [ 00:36:43.448 { 00:36:43.448 "name": "spare", 00:36:43.448 "uuid": "ec9440a0-f567-52ae-84f7-db14f8a31738", 00:36:43.448 "is_configured": true, 00:36:43.448 "data_offset": 2048, 00:36:43.448 "data_size": 63488 00:36:43.448 }, 00:36:43.448 { 00:36:43.448 "name": "BaseBdev2", 00:36:43.448 "uuid": "99d77757-43fe-5c21-bd82-0966054d7175", 00:36:43.448 "is_configured": true, 00:36:43.449 "data_offset": 2048, 00:36:43.449 "data_size": 63488 00:36:43.449 }, 00:36:43.449 { 00:36:43.449 "name": "BaseBdev3", 00:36:43.449 "uuid": "51bc04eb-830d-5a1c-ae10-8590026df6a4", 00:36:43.449 "is_configured": true, 00:36:43.449 "data_offset": 2048, 00:36:43.449 "data_size": 63488 00:36:43.449 }, 00:36:43.449 { 00:36:43.449 "name": "BaseBdev4", 00:36:43.449 "uuid": "f8813b36-8d0d-5425-99b1-c699836b82c7", 00:36:43.449 "is_configured": true, 00:36:43.449 "data_offset": 2048, 00:36:43.449 "data_size": 63488 00:36:43.449 } 00:36:43.449 ] 00:36:43.449 }' 00:36:43.449 01:03:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:43.449 01:03:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:43.449 01:03:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:43.449 01:03:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:43.449 01:03:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:43.449 01:03:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:36:43.708 01:03:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:36:43.708 01:03:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:36:43.708 [2024-07-25 01:03:06.294209] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:43.708 01:03:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:43.708 01:03:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:43.708 01:03:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:43.708 01:03:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:43.708 01:03:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:43.708 01:03:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:36:43.708 01:03:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:43.708 01:03:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:43.708 01:03:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:43.708 01:03:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:43.708 01:03:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:43.708 01:03:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:43.969 01:03:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:43.969 "name": "raid_bdev1", 00:36:43.969 "uuid": "17e960e0-45c0-4d50-9d62-dc5d991acfa0", 00:36:43.969 "strip_size_kb": 64, 00:36:43.969 "state": "online", 00:36:43.969 "raid_level": "raid5f", 00:36:43.969 "superblock": true, 00:36:43.969 "num_base_bdevs": 4, 00:36:43.969 "num_base_bdevs_discovered": 3, 00:36:43.969 "num_base_bdevs_operational": 3, 00:36:43.969 "base_bdevs_list": [ 00:36:43.969 { 00:36:43.969 "name": null, 00:36:43.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:43.969 "is_configured": false, 00:36:43.969 "data_offset": 2048, 00:36:43.969 "data_size": 63488 00:36:43.969 }, 00:36:43.969 { 00:36:43.969 "name": "BaseBdev2", 00:36:43.969 "uuid": "99d77757-43fe-5c21-bd82-0966054d7175", 00:36:43.969 "is_configured": true, 00:36:43.969 "data_offset": 2048, 00:36:43.969 "data_size": 63488 00:36:43.969 }, 00:36:43.969 { 00:36:43.969 "name": "BaseBdev3", 00:36:43.969 "uuid": "51bc04eb-830d-5a1c-ae10-8590026df6a4", 00:36:43.969 "is_configured": true, 00:36:43.969 "data_offset": 2048, 00:36:43.969 "data_size": 63488 00:36:43.969 }, 00:36:43.970 { 00:36:43.970 "name": "BaseBdev4", 00:36:43.970 "uuid": "f8813b36-8d0d-5425-99b1-c699836b82c7", 00:36:43.970 "is_configured": true, 00:36:43.970 "data_offset": 2048, 00:36:43.970 "data_size": 63488 00:36:43.970 } 00:36:43.970 ] 00:36:43.970 }' 
00:36:43.970 01:03:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:43.970 01:03:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:44.537 01:03:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:44.796 [2024-07-25 01:03:07.434482] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:44.796 [2024-07-25 01:03:07.434662] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:36:44.796 [2024-07-25 01:03:07.434674] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:36:44.796 [2024-07-25 01:03:07.434729] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:45.055 [2024-07-25 01:03:07.448995] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000495e0 00:36:45.055 [2024-07-25 01:03:07.457849] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:45.055 01:03:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:36:45.991 01:03:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:45.991 01:03:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:45.991 01:03:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:45.991 01:03:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:45.991 01:03:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:45.991 01:03:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:45.991 01:03:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:46.250 01:03:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:46.250 "name": "raid_bdev1", 00:36:46.250 "uuid": "17e960e0-45c0-4d50-9d62-dc5d991acfa0", 00:36:46.250 "strip_size_kb": 64, 00:36:46.250 "state": "online", 00:36:46.250 "raid_level": "raid5f", 00:36:46.250 "superblock": true, 00:36:46.250 "num_base_bdevs": 4, 00:36:46.250 "num_base_bdevs_discovered": 4, 00:36:46.250 "num_base_bdevs_operational": 4, 00:36:46.250 "process": { 00:36:46.250 "type": "rebuild", 00:36:46.250 "target": "spare", 00:36:46.250 "progress": { 00:36:46.250 "blocks": 23040, 00:36:46.250 "percent": 12 00:36:46.250 } 00:36:46.250 }, 00:36:46.250 "base_bdevs_list": [ 00:36:46.250 { 00:36:46.250 "name": "spare", 00:36:46.250 "uuid": "ec9440a0-f567-52ae-84f7-db14f8a31738", 00:36:46.250 "is_configured": true, 00:36:46.250 "data_offset": 2048, 00:36:46.250 "data_size": 63488 00:36:46.250 }, 00:36:46.250 { 00:36:46.250 "name": "BaseBdev2", 00:36:46.250 "uuid": "99d77757-43fe-5c21-bd82-0966054d7175", 00:36:46.250 "is_configured": true, 00:36:46.250 "data_offset": 2048, 00:36:46.250 "data_size": 63488 00:36:46.250 }, 00:36:46.250 { 00:36:46.250 "name": "BaseBdev3", 00:36:46.250 "uuid": "51bc04eb-830d-5a1c-ae10-8590026df6a4", 00:36:46.250 "is_configured": true, 00:36:46.250 "data_offset": 2048, 00:36:46.250 "data_size": 
63488 00:36:46.250 }, 00:36:46.250 { 00:36:46.250 "name": "BaseBdev4", 00:36:46.250 "uuid": "f8813b36-8d0d-5425-99b1-c699836b82c7", 00:36:46.250 "is_configured": true, 00:36:46.250 "data_offset": 2048, 00:36:46.250 "data_size": 63488 00:36:46.250 } 00:36:46.250 ] 00:36:46.250 }' 00:36:46.250 01:03:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:46.250 01:03:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:46.250 01:03:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:46.250 01:03:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:46.250 01:03:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:36:46.509 [2024-07-25 01:03:09.043207] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:46.509 [2024-07-25 01:03:09.069059] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:46.509 [2024-07-25 01:03:09.069120] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:46.509 [2024-07-25 01:03:09.069137] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:46.509 [2024-07-25 01:03:09.069144] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:46.509 01:03:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:46.509 01:03:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:46.509 01:03:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:46.509 01:03:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:46.509 01:03:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:46.509 01:03:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:36:46.509 01:03:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:46.509 01:03:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:46.509 01:03:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:46.509 01:03:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:46.509 01:03:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:46.509 01:03:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:46.768 01:03:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:46.768 "name": "raid_bdev1", 00:36:46.768 "uuid": "17e960e0-45c0-4d50-9d62-dc5d991acfa0", 00:36:46.768 "strip_size_kb": 64, 00:36:46.768 "state": "online", 00:36:46.768 "raid_level": "raid5f", 00:36:46.768 "superblock": true, 00:36:46.768 "num_base_bdevs": 4, 00:36:46.768 "num_base_bdevs_discovered": 3, 00:36:46.768 "num_base_bdevs_operational": 3, 00:36:46.768 "base_bdevs_list": [ 00:36:46.768 { 00:36:46.768 "name": null, 00:36:46.768 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:36:46.768 "is_configured": false, 00:36:46.768 "data_offset": 2048, 00:36:46.768 "data_size": 63488 00:36:46.768 }, 00:36:46.768 { 00:36:46.768 "name": "BaseBdev2", 00:36:46.768 "uuid": "99d77757-43fe-5c21-bd82-0966054d7175", 00:36:46.768 "is_configured": true, 00:36:46.768 "data_offset": 2048, 00:36:46.768 "data_size": 63488 00:36:46.768 }, 00:36:46.768 { 00:36:46.768 "name": "BaseBdev3", 00:36:46.768 "uuid": "51bc04eb-830d-5a1c-ae10-8590026df6a4", 00:36:46.768 "is_configured": true, 00:36:46.768 "data_offset": 2048, 00:36:46.768 "data_size": 63488 00:36:46.768 }, 00:36:46.768 { 00:36:46.768 "name": "BaseBdev4", 00:36:46.768 "uuid": "f8813b36-8d0d-5425-99b1-c699836b82c7", 00:36:46.768 "is_configured": true, 00:36:46.768 "data_offset": 2048, 00:36:46.768 "data_size": 63488 00:36:46.768 } 00:36:46.768 ] 00:36:46.768 }' 00:36:46.768 01:03:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:46.768 01:03:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:47.336 01:03:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:36:47.594 [2024-07-25 01:03:10.041973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:47.594 [2024-07-25 01:03:10.042066] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:47.594 [2024-07-25 01:03:10.042110] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:36:47.594 [2024-07-25 01:03:10.042130] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:47.594 [2024-07-25 01:03:10.042636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:47.594 [2024-07-25 01:03:10.042679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:47.594 [2024-07-25 01:03:10.042794] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:36:47.594 [2024-07-25 01:03:10.042808] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:36:47.594 [2024-07-25 01:03:10.042816] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:36:47.594 [2024-07-25 01:03:10.042846] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:47.594 [2024-07-25 01:03:10.056957] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049920 00:36:47.594 spare 00:36:47.594 [2024-07-25 01:03:10.065530] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:47.594 01:03:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:36:48.530 01:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:48.530 01:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:48.530 01:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:48.530 01:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:48.530 01:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:48.530 01:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:48.530 01:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:48.788 01:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:48.788 "name": "raid_bdev1", 00:36:48.788 "uuid": "17e960e0-45c0-4d50-9d62-dc5d991acfa0", 00:36:48.788 "strip_size_kb": 64, 00:36:48.788 "state": "online", 00:36:48.788 "raid_level": "raid5f", 00:36:48.788 "superblock": true, 00:36:48.788 "num_base_bdevs": 4, 00:36:48.788 "num_base_bdevs_discovered": 4, 00:36:48.788 "num_base_bdevs_operational": 4, 00:36:48.788 "process": { 00:36:48.788 "type": "rebuild", 00:36:48.788 "target": "spare", 00:36:48.788 "progress": { 00:36:48.788 "blocks": 23040, 00:36:48.788 "percent": 12 00:36:48.788 } 00:36:48.788 }, 00:36:48.788 "base_bdevs_list": [ 00:36:48.788 { 00:36:48.788 "name": "spare", 00:36:48.788 "uuid": "ec9440a0-f567-52ae-84f7-db14f8a31738", 00:36:48.788 "is_configured": true, 00:36:48.788 "data_offset": 2048, 00:36:48.788 "data_size": 63488 00:36:48.788 }, 00:36:48.788 { 00:36:48.788 "name": "BaseBdev2", 00:36:48.788 "uuid": "99d77757-43fe-5c21-bd82-0966054d7175", 00:36:48.788 "is_configured": true, 00:36:48.788 "data_offset": 2048, 00:36:48.788 "data_size": 63488 00:36:48.788 }, 00:36:48.788 { 00:36:48.788 "name": "BaseBdev3", 00:36:48.788 "uuid": "51bc04eb-830d-5a1c-ae10-8590026df6a4", 00:36:48.788 "is_configured": true, 00:36:48.788 "data_offset": 2048, 00:36:48.788 "data_size": 63488 00:36:48.788 }, 00:36:48.788 { 00:36:48.788 "name": "BaseBdev4", 00:36:48.788 "uuid": "f8813b36-8d0d-5425-99b1-c699836b82c7", 00:36:48.788 "is_configured": true, 00:36:48.788 "data_offset": 2048, 00:36:48.788 "data_size": 63488 00:36:48.788 } 00:36:48.788 ] 00:36:48.788 }' 00:36:48.788 01:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:48.788 01:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:48.788 01:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:48.788 01:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:48.788 01:03:11 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:36:49.047 [2024-07-25 01:03:11.558965] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:49.047 [2024-07-25 01:03:11.576252] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:49.047 [2024-07-25 01:03:11.576328] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:49.047 [2024-07-25 01:03:11.576346] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:49.047 [2024-07-25 01:03:11.576363] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:49.047 01:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:49.047 01:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:49.047 01:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:49.047 01:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:49.047 01:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:49.047 01:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:36:49.047 01:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:49.047 01:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:49.047 01:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:49.047 01:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:49.047 01:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:49.047 01:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:49.306 01:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:49.306 "name": "raid_bdev1", 00:36:49.306 "uuid": "17e960e0-45c0-4d50-9d62-dc5d991acfa0", 00:36:49.306 "strip_size_kb": 64, 00:36:49.306 "state": "online", 00:36:49.306 "raid_level": "raid5f", 00:36:49.306 "superblock": true, 00:36:49.306 "num_base_bdevs": 4, 00:36:49.306 "num_base_bdevs_discovered": 3, 00:36:49.306 "num_base_bdevs_operational": 3, 00:36:49.306 "base_bdevs_list": [ 00:36:49.306 { 00:36:49.306 "name": null, 00:36:49.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:49.306 "is_configured": false, 00:36:49.306 "data_offset": 2048, 00:36:49.306 "data_size": 63488 00:36:49.306 }, 00:36:49.306 { 00:36:49.306 "name": "BaseBdev2", 00:36:49.306 "uuid": "99d77757-43fe-5c21-bd82-0966054d7175", 00:36:49.306 "is_configured": true, 00:36:49.306 "data_offset": 2048, 00:36:49.306 "data_size": 63488 00:36:49.306 }, 00:36:49.306 { 00:36:49.306 "name": "BaseBdev3", 00:36:49.306 "uuid": "51bc04eb-830d-5a1c-ae10-8590026df6a4", 00:36:49.306 "is_configured": true, 00:36:49.306 "data_offset": 2048, 00:36:49.306 "data_size": 63488 00:36:49.306 }, 00:36:49.306 { 00:36:49.306 "name": "BaseBdev4", 00:36:49.306 "uuid": "f8813b36-8d0d-5425-99b1-c699836b82c7", 00:36:49.306 "is_configured": true, 00:36:49.306 "data_offset": 2048, 00:36:49.306 
"data_size": 63488 00:36:49.306 } 00:36:49.306 ] 00:36:49.306 }' 00:36:49.306 01:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:49.306 01:03:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:49.873 01:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:49.873 01:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:49.873 01:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:49.873 01:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:49.873 01:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:49.873 01:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:49.873 01:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:50.132 01:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:50.132 "name": "raid_bdev1", 00:36:50.132 "uuid": "17e960e0-45c0-4d50-9d62-dc5d991acfa0", 00:36:50.132 "strip_size_kb": 64, 00:36:50.132 "state": "online", 00:36:50.132 "raid_level": "raid5f", 00:36:50.132 "superblock": true, 00:36:50.132 "num_base_bdevs": 4, 00:36:50.132 "num_base_bdevs_discovered": 3, 00:36:50.132 "num_base_bdevs_operational": 3, 00:36:50.132 "base_bdevs_list": [ 00:36:50.132 { 00:36:50.132 "name": null, 00:36:50.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:50.132 "is_configured": false, 00:36:50.132 "data_offset": 2048, 00:36:50.132 "data_size": 63488 00:36:50.132 }, 00:36:50.132 { 00:36:50.132 "name": "BaseBdev2", 00:36:50.132 "uuid": "99d77757-43fe-5c21-bd82-0966054d7175", 00:36:50.132 "is_configured": true, 00:36:50.132 "data_offset": 2048, 00:36:50.132 "data_size": 63488 00:36:50.132 }, 00:36:50.132 { 00:36:50.132 "name": "BaseBdev3", 00:36:50.132 "uuid": "51bc04eb-830d-5a1c-ae10-8590026df6a4", 00:36:50.132 "is_configured": true, 00:36:50.132 "data_offset": 2048, 00:36:50.132 "data_size": 63488 00:36:50.132 }, 00:36:50.132 { 00:36:50.132 "name": "BaseBdev4", 00:36:50.132 "uuid": "f8813b36-8d0d-5425-99b1-c699836b82c7", 00:36:50.132 "is_configured": true, 00:36:50.132 "data_offset": 2048, 00:36:50.132 "data_size": 63488 00:36:50.132 } 00:36:50.132 ] 00:36:50.132 }' 00:36:50.132 01:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:50.132 01:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:50.132 01:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:50.132 01:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:50.132 01:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:36:50.391 01:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:36:50.650 [2024-07-25 01:03:13.197700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 
00:36:50.650 [2024-07-25 01:03:13.197787] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:50.650 [2024-07-25 01:03:13.197830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:36:50.650 [2024-07-25 01:03:13.197850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:50.650 [2024-07-25 01:03:13.198306] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:50.650 [2024-07-25 01:03:13.198336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:50.650 [2024-07-25 01:03:13.198461] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:36:50.650 [2024-07-25 01:03:13.198474] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:36:50.650 [2024-07-25 01:03:13.198481] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:36:50.650 BaseBdev1 00:36:50.650 01:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:36:51.585 01:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:51.585 01:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:51.585 01:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:51.585 01:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:51.585 01:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:51.585 01:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:36:51.585 01:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:51.585 01:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:51.585 01:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:51.585 01:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:51.585 01:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:51.585 01:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:51.844 01:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:51.844 "name": "raid_bdev1", 00:36:51.844 "uuid": "17e960e0-45c0-4d50-9d62-dc5d991acfa0", 00:36:51.844 "strip_size_kb": 64, 00:36:51.844 "state": "online", 00:36:51.844 "raid_level": "raid5f", 00:36:51.844 "superblock": true, 00:36:51.844 "num_base_bdevs": 4, 00:36:51.844 "num_base_bdevs_discovered": 3, 00:36:51.844 "num_base_bdevs_operational": 3, 00:36:51.844 "base_bdevs_list": [ 00:36:51.844 { 00:36:51.844 "name": null, 00:36:51.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:51.844 "is_configured": false, 00:36:51.844 "data_offset": 2048, 00:36:51.844 "data_size": 63488 00:36:51.844 }, 00:36:51.844 { 00:36:51.844 "name": "BaseBdev2", 00:36:51.844 "uuid": "99d77757-43fe-5c21-bd82-0966054d7175", 00:36:51.844 "is_configured": true, 00:36:51.844 "data_offset": 2048, 00:36:51.844 "data_size": 63488 
00:36:51.844 }, 00:36:51.844 { 00:36:51.844 "name": "BaseBdev3", 00:36:51.844 "uuid": "51bc04eb-830d-5a1c-ae10-8590026df6a4", 00:36:51.844 "is_configured": true, 00:36:51.844 "data_offset": 2048, 00:36:51.844 "data_size": 63488 00:36:51.844 }, 00:36:51.844 { 00:36:51.844 "name": "BaseBdev4", 00:36:51.844 "uuid": "f8813b36-8d0d-5425-99b1-c699836b82c7", 00:36:51.844 "is_configured": true, 00:36:51.844 "data_offset": 2048, 00:36:51.844 "data_size": 63488 00:36:51.844 } 00:36:51.844 ] 00:36:51.844 }' 00:36:51.844 01:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:51.844 01:03:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:52.783 01:03:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:52.783 01:03:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:52.783 01:03:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:52.783 01:03:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:52.783 01:03:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:52.783 01:03:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:52.783 01:03:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:52.783 01:03:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:52.783 "name": "raid_bdev1", 00:36:52.783 "uuid": "17e960e0-45c0-4d50-9d62-dc5d991acfa0", 00:36:52.783 "strip_size_kb": 64, 00:36:52.783 "state": "online", 00:36:52.783 "raid_level": "raid5f", 00:36:52.783 "superblock": true, 00:36:52.783 "num_base_bdevs": 4, 00:36:52.783 "num_base_bdevs_discovered": 3, 00:36:52.783 "num_base_bdevs_operational": 3, 00:36:52.783 "base_bdevs_list": [ 00:36:52.783 { 00:36:52.783 "name": null, 00:36:52.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:52.783 "is_configured": false, 00:36:52.783 "data_offset": 2048, 00:36:52.783 "data_size": 63488 00:36:52.783 }, 00:36:52.783 { 00:36:52.783 "name": "BaseBdev2", 00:36:52.783 "uuid": "99d77757-43fe-5c21-bd82-0966054d7175", 00:36:52.783 "is_configured": true, 00:36:52.783 "data_offset": 2048, 00:36:52.783 "data_size": 63488 00:36:52.783 }, 00:36:52.783 { 00:36:52.783 "name": "BaseBdev3", 00:36:52.783 "uuid": "51bc04eb-830d-5a1c-ae10-8590026df6a4", 00:36:52.783 "is_configured": true, 00:36:52.783 "data_offset": 2048, 00:36:52.783 "data_size": 63488 00:36:52.783 }, 00:36:52.783 { 00:36:52.783 "name": "BaseBdev4", 00:36:52.783 "uuid": "f8813b36-8d0d-5425-99b1-c699836b82c7", 00:36:52.783 "is_configured": true, 00:36:52.783 "data_offset": 2048, 00:36:52.783 "data_size": 63488 00:36:52.783 } 00:36:52.783 ] 00:36:52.783 }' 00:36:52.783 01:03:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:52.783 01:03:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:52.783 01:03:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:52.784 01:03:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:52.784 01:03:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:52.784 01:03:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:36:52.784 01:03:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:52.784 01:03:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:52.784 01:03:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:52.784 01:03:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:52.784 01:03:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:52.784 01:03:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:52.784 01:03:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:52.784 01:03:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:52.784 01:03:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:36:52.784 01:03:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:53.043 [2024-07-25 01:03:15.598243] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:53.043 [2024-07-25 01:03:15.598391] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:36:53.043 [2024-07-25 01:03:15.598404] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:36:53.043 request: 00:36:53.043 { 00:36:53.043 "base_bdev": "BaseBdev1", 00:36:53.043 "raid_bdev": "raid_bdev1", 00:36:53.043 "method": "bdev_raid_add_base_bdev", 00:36:53.043 "req_id": 1 00:36:53.043 } 00:36:53.043 Got JSON-RPC error response 00:36:53.043 response: 00:36:53.043 { 00:36:53.043 "code": -22, 00:36:53.043 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:36:53.043 } 00:36:53.043 01:03:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:36:53.043 01:03:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:53.043 01:03:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:53.043 01:03:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:53.043 01:03:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:36:53.978 01:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:53.978 01:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:53.978 01:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:53.978 01:03:16 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:53.978 01:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:53.978 01:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:36:53.978 01:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:53.978 01:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:53.978 01:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:53.978 01:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:53.978 01:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:53.978 01:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:54.237 01:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:54.237 "name": "raid_bdev1", 00:36:54.237 "uuid": "17e960e0-45c0-4d50-9d62-dc5d991acfa0", 00:36:54.237 "strip_size_kb": 64, 00:36:54.237 "state": "online", 00:36:54.237 "raid_level": "raid5f", 00:36:54.237 "superblock": true, 00:36:54.237 "num_base_bdevs": 4, 00:36:54.237 "num_base_bdevs_discovered": 3, 00:36:54.237 "num_base_bdevs_operational": 3, 00:36:54.237 "base_bdevs_list": [ 00:36:54.237 { 00:36:54.237 "name": null, 00:36:54.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:54.237 "is_configured": false, 00:36:54.237 "data_offset": 2048, 00:36:54.237 "data_size": 63488 00:36:54.237 }, 00:36:54.237 { 00:36:54.237 "name": "BaseBdev2", 00:36:54.237 "uuid": "99d77757-43fe-5c21-bd82-0966054d7175", 00:36:54.238 "is_configured": true, 00:36:54.238 "data_offset": 2048, 00:36:54.238 "data_size": 63488 00:36:54.238 }, 00:36:54.238 { 00:36:54.238 "name": "BaseBdev3", 00:36:54.238 "uuid": "51bc04eb-830d-5a1c-ae10-8590026df6a4", 00:36:54.238 "is_configured": true, 00:36:54.238 "data_offset": 2048, 00:36:54.238 "data_size": 63488 00:36:54.238 }, 00:36:54.238 { 00:36:54.238 "name": "BaseBdev4", 00:36:54.238 "uuid": "f8813b36-8d0d-5425-99b1-c699836b82c7", 00:36:54.238 "is_configured": true, 00:36:54.238 "data_offset": 2048, 00:36:54.238 "data_size": 63488 00:36:54.238 } 00:36:54.238 ] 00:36:54.238 }' 00:36:54.238 01:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:54.238 01:03:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:54.805 01:03:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:54.805 01:03:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:54.805 01:03:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:54.805 01:03:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:54.805 01:03:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:54.805 01:03:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:54.805 01:03:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:55.064 01:03:17 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:55.064 "name": "raid_bdev1", 00:36:55.064 "uuid": "17e960e0-45c0-4d50-9d62-dc5d991acfa0", 00:36:55.064 "strip_size_kb": 64, 00:36:55.064 "state": "online", 00:36:55.064 "raid_level": "raid5f", 00:36:55.064 "superblock": true, 00:36:55.064 "num_base_bdevs": 4, 00:36:55.064 "num_base_bdevs_discovered": 3, 00:36:55.064 "num_base_bdevs_operational": 3, 00:36:55.064 "base_bdevs_list": [ 00:36:55.064 { 00:36:55.064 "name": null, 00:36:55.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:55.064 "is_configured": false, 00:36:55.064 "data_offset": 2048, 00:36:55.064 "data_size": 63488 00:36:55.064 }, 00:36:55.064 { 00:36:55.064 "name": "BaseBdev2", 00:36:55.064 "uuid": "99d77757-43fe-5c21-bd82-0966054d7175", 00:36:55.064 "is_configured": true, 00:36:55.064 "data_offset": 2048, 00:36:55.064 "data_size": 63488 00:36:55.064 }, 00:36:55.064 { 00:36:55.064 "name": "BaseBdev3", 00:36:55.064 "uuid": "51bc04eb-830d-5a1c-ae10-8590026df6a4", 00:36:55.064 "is_configured": true, 00:36:55.064 "data_offset": 2048, 00:36:55.064 "data_size": 63488 00:36:55.064 }, 00:36:55.064 { 00:36:55.064 "name": "BaseBdev4", 00:36:55.064 "uuid": "f8813b36-8d0d-5425-99b1-c699836b82c7", 00:36:55.064 "is_configured": true, 00:36:55.064 "data_offset": 2048, 00:36:55.064 "data_size": 63488 00:36:55.064 } 00:36:55.064 ] 00:36:55.064 }' 00:36:55.064 01:03:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:55.064 01:03:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:55.064 01:03:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:55.323 01:03:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:55.323 01:03:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 158128 00:36:55.323 01:03:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 158128 ']' 00:36:55.323 01:03:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 158128 00:36:55.323 01:03:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:36:55.323 01:03:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:55.323 01:03:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 158128 00:36:55.323 killing process with pid 158128 00:36:55.323 Received shutdown signal, test time was about 60.000000 seconds 00:36:55.323 00:36:55.323 Latency(us) 00:36:55.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:55.323 =================================================================================================================== 00:36:55.323 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:55.323 01:03:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:55.323 01:03:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:55.324 01:03:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 158128' 00:36:55.324 01:03:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 158128 00:36:55.324 01:03:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 158128 
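The killprocess block above tears down the SPDK target that hosted the raid tests (pid 158128 in this run): it verifies the pid is still alive with kill -0, checks the process name so it never signals something unexpected, sends the kill, and waits so the shutdown output (including the latency summary) is flushed before the next test starts. A rough equivalent of that helper is sketched below; the function name and argument handling are stand-ins, not the actual autotest_common.sh implementation.

    # Rough sketch of the teardown pattern in the trace above; not the real killprocess()
    # from autotest_common.sh, just the same sequence of checks.
    killprocess_sketch() {
        local pid=$1                        # e.g. 158128 in this run
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 0          # already gone, nothing to do
        if [[ $(uname) == Linux ]]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [[ $name != sudo ]] || return 1 # refuse to signal a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                         # child of this shell: let it flush shutdown output
    }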
00:36:55.324 [2024-07-25 01:03:17.748500] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:55.324 [2024-07-25 01:03:17.748603] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:55.324 [2024-07-25 01:03:17.748670] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:55.324 [2024-07-25 01:03:17.748679] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:36:55.583 [2024-07-25 01:03:18.198573] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:56.960 01:03:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:36:56.960 00:36:56.960 real 0m38.627s 00:36:56.960 user 0m57.532s 00:36:56.960 sys 0m4.856s 00:36:56.960 ************************************ 00:36:56.960 END TEST raid5f_rebuild_test_sb 00:36:56.960 ************************************ 00:36:56.960 01:03:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:56.960 01:03:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:56.960 01:03:19 bdev_raid -- bdev/bdev_raid.sh@896 -- # base_blocklen=4096 00:36:56.960 01:03:19 bdev_raid -- bdev/bdev_raid.sh@898 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:36:56.960 01:03:19 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:36:56.960 01:03:19 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:56.960 01:03:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:56.960 ************************************ 00:36:56.960 START TEST raid_state_function_test_sb_4k 00:36:56.960 ************************************ 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:36:56.960 01:03:19 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=159118 00:36:56.960 Process raid pid: 159118 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 159118' 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 159118 /var/tmp/spdk-raid.sock 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 159118 ']' 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:56.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:56.960 01:03:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:56.960 [2024-07-25 01:03:19.564969] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:36:56.960 [2024-07-25 01:03:19.565203] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:57.218 [2024-07-25 01:03:19.743540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:57.477 [2024-07-25 01:03:19.936598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:57.735 [2024-07-25 01:03:20.144202] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:57.993 01:03:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:57.993 01:03:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:36:57.993 01:03:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:36:58.253 [2024-07-25 01:03:20.726928] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:58.253 [2024-07-25 01:03:20.727025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:58.253 [2024-07-25 01:03:20.727036] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:58.253 [2024-07-25 01:03:20.727061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:58.253 01:03:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:36:58.253 01:03:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:58.253 01:03:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:58.253 01:03:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:58.253 01:03:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:58.253 01:03:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:58.253 01:03:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:58.253 01:03:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:58.253 01:03:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:58.253 01:03:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:58.253 01:03:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:58.253 01:03:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:58.512 01:03:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:58.512 "name": "Existed_Raid", 00:36:58.512 "uuid": "395d6374-d769-4519-920d-e69e63ae075f", 00:36:58.512 "strip_size_kb": 0, 00:36:58.512 "state": "configuring", 00:36:58.512 "raid_level": "raid1", 00:36:58.512 "superblock": true, 00:36:58.512 "num_base_bdevs": 2, 00:36:58.512 
"num_base_bdevs_discovered": 0, 00:36:58.512 "num_base_bdevs_operational": 2, 00:36:58.512 "base_bdevs_list": [ 00:36:58.512 { 00:36:58.512 "name": "BaseBdev1", 00:36:58.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:58.512 "is_configured": false, 00:36:58.512 "data_offset": 0, 00:36:58.512 "data_size": 0 00:36:58.512 }, 00:36:58.512 { 00:36:58.512 "name": "BaseBdev2", 00:36:58.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:58.512 "is_configured": false, 00:36:58.512 "data_offset": 0, 00:36:58.512 "data_size": 0 00:36:58.512 } 00:36:58.512 ] 00:36:58.512 }' 00:36:58.512 01:03:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:58.512 01:03:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:59.080 01:03:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:36:59.080 [2024-07-25 01:03:21.663004] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:59.080 [2024-07-25 01:03:21.663044] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:36:59.080 01:03:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:36:59.338 [2024-07-25 01:03:21.919099] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:59.338 [2024-07-25 01:03:21.919154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:59.338 [2024-07-25 01:03:21.919164] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:59.338 [2024-07-25 01:03:21.919187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:59.338 01:03:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:36:59.598 [2024-07-25 01:03:22.221611] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:59.598 BaseBdev1 00:36:59.598 01:03:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:36:59.598 01:03:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:36:59.598 01:03:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:59.598 01:03:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:36:59.598 01:03:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:59.598 01:03:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:59.598 01:03:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:59.856 01:03:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:37:00.115 [ 00:37:00.115 { 00:37:00.115 "name": "BaseBdev1", 
00:37:00.115 "aliases": [ 00:37:00.115 "9ff64058-8d1a-49e4-8db9-ca2ade54104c" 00:37:00.115 ], 00:37:00.115 "product_name": "Malloc disk", 00:37:00.115 "block_size": 4096, 00:37:00.115 "num_blocks": 8192, 00:37:00.115 "uuid": "9ff64058-8d1a-49e4-8db9-ca2ade54104c", 00:37:00.115 "assigned_rate_limits": { 00:37:00.115 "rw_ios_per_sec": 0, 00:37:00.115 "rw_mbytes_per_sec": 0, 00:37:00.115 "r_mbytes_per_sec": 0, 00:37:00.115 "w_mbytes_per_sec": 0 00:37:00.115 }, 00:37:00.115 "claimed": true, 00:37:00.115 "claim_type": "exclusive_write", 00:37:00.115 "zoned": false, 00:37:00.115 "supported_io_types": { 00:37:00.115 "read": true, 00:37:00.115 "write": true, 00:37:00.115 "unmap": true, 00:37:00.115 "flush": true, 00:37:00.115 "reset": true, 00:37:00.115 "nvme_admin": false, 00:37:00.115 "nvme_io": false, 00:37:00.115 "nvme_io_md": false, 00:37:00.115 "write_zeroes": true, 00:37:00.115 "zcopy": true, 00:37:00.115 "get_zone_info": false, 00:37:00.115 "zone_management": false, 00:37:00.115 "zone_append": false, 00:37:00.115 "compare": false, 00:37:00.115 "compare_and_write": false, 00:37:00.115 "abort": true, 00:37:00.115 "seek_hole": false, 00:37:00.115 "seek_data": false, 00:37:00.115 "copy": true, 00:37:00.115 "nvme_iov_md": false 00:37:00.115 }, 00:37:00.115 "memory_domains": [ 00:37:00.115 { 00:37:00.115 "dma_device_id": "system", 00:37:00.115 "dma_device_type": 1 00:37:00.115 }, 00:37:00.115 { 00:37:00.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:00.115 "dma_device_type": 2 00:37:00.115 } 00:37:00.115 ], 00:37:00.115 "driver_specific": {} 00:37:00.115 } 00:37:00.115 ] 00:37:00.115 01:03:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:37:00.115 01:03:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:37:00.115 01:03:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:00.115 01:03:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:00.115 01:03:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:00.115 01:03:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:00.115 01:03:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:00.115 01:03:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:00.115 01:03:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:00.115 01:03:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:00.115 01:03:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:00.115 01:03:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:00.115 01:03:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:00.374 01:03:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:00.374 "name": "Existed_Raid", 00:37:00.374 "uuid": "72094f3d-a597-4c9d-9cb7-6f9cc3f5ee09", 00:37:00.374 "strip_size_kb": 0, 00:37:00.374 "state": "configuring", 00:37:00.374 
"raid_level": "raid1", 00:37:00.374 "superblock": true, 00:37:00.374 "num_base_bdevs": 2, 00:37:00.374 "num_base_bdevs_discovered": 1, 00:37:00.374 "num_base_bdevs_operational": 2, 00:37:00.374 "base_bdevs_list": [ 00:37:00.374 { 00:37:00.374 "name": "BaseBdev1", 00:37:00.374 "uuid": "9ff64058-8d1a-49e4-8db9-ca2ade54104c", 00:37:00.374 "is_configured": true, 00:37:00.374 "data_offset": 256, 00:37:00.374 "data_size": 7936 00:37:00.374 }, 00:37:00.374 { 00:37:00.374 "name": "BaseBdev2", 00:37:00.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:00.374 "is_configured": false, 00:37:00.374 "data_offset": 0, 00:37:00.374 "data_size": 0 00:37:00.374 } 00:37:00.374 ] 00:37:00.374 }' 00:37:00.374 01:03:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:00.374 01:03:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:00.940 01:03:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:37:01.229 [2024-07-25 01:03:23.633894] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:01.229 [2024-07-25 01:03:23.633951] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:37:01.229 01:03:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:37:01.229 [2024-07-25 01:03:23.813966] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:01.229 [2024-07-25 01:03:23.815918] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:01.229 [2024-07-25 01:03:23.815994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:01.229 01:03:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:37:01.229 01:03:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:37:01.229 01:03:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:37:01.229 01:03:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:01.229 01:03:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:01.229 01:03:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:01.229 01:03:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:01.229 01:03:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:01.229 01:03:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:01.229 01:03:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:01.229 01:03:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:01.229 01:03:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:01.229 01:03:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:01.229 01:03:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:01.511 01:03:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:01.511 "name": "Existed_Raid", 00:37:01.511 "uuid": "5598807d-308a-4ec8-b8a0-da973941a66c", 00:37:01.511 "strip_size_kb": 0, 00:37:01.511 "state": "configuring", 00:37:01.511 "raid_level": "raid1", 00:37:01.511 "superblock": true, 00:37:01.511 "num_base_bdevs": 2, 00:37:01.511 "num_base_bdevs_discovered": 1, 00:37:01.511 "num_base_bdevs_operational": 2, 00:37:01.511 "base_bdevs_list": [ 00:37:01.511 { 00:37:01.511 "name": "BaseBdev1", 00:37:01.511 "uuid": "9ff64058-8d1a-49e4-8db9-ca2ade54104c", 00:37:01.511 "is_configured": true, 00:37:01.511 "data_offset": 256, 00:37:01.511 "data_size": 7936 00:37:01.511 }, 00:37:01.511 { 00:37:01.511 "name": "BaseBdev2", 00:37:01.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:01.511 "is_configured": false, 00:37:01.511 "data_offset": 0, 00:37:01.511 "data_size": 0 00:37:01.511 } 00:37:01.512 ] 00:37:01.512 }' 00:37:01.512 01:03:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:01.512 01:03:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:02.079 01:03:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:37:02.338 [2024-07-25 01:03:24.865670] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:02.338 [2024-07-25 01:03:24.865894] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:37:02.338 [2024-07-25 01:03:24.865919] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:37:02.338 [2024-07-25 01:03:24.866040] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:37:02.338 [2024-07-25 01:03:24.866397] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:37:02.338 [2024-07-25 01:03:24.866411] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:37:02.338 [2024-07-25 01:03:24.866562] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:02.338 BaseBdev2 00:37:02.338 01:03:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:37:02.338 01:03:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:37:02.338 01:03:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:37:02.338 01:03:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:37:02.338 01:03:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:37:02.338 01:03:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:37:02.338 01:03:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:02.597 01:03:25 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:37:02.855 [ 00:37:02.855 { 00:37:02.855 "name": "BaseBdev2", 00:37:02.855 "aliases": [ 00:37:02.855 "1a63c686-18f7-418e-9504-747ea17eed6c" 00:37:02.855 ], 00:37:02.855 "product_name": "Malloc disk", 00:37:02.855 "block_size": 4096, 00:37:02.855 "num_blocks": 8192, 00:37:02.855 "uuid": "1a63c686-18f7-418e-9504-747ea17eed6c", 00:37:02.855 "assigned_rate_limits": { 00:37:02.855 "rw_ios_per_sec": 0, 00:37:02.855 "rw_mbytes_per_sec": 0, 00:37:02.855 "r_mbytes_per_sec": 0, 00:37:02.855 "w_mbytes_per_sec": 0 00:37:02.855 }, 00:37:02.855 "claimed": true, 00:37:02.855 "claim_type": "exclusive_write", 00:37:02.855 "zoned": false, 00:37:02.855 "supported_io_types": { 00:37:02.855 "read": true, 00:37:02.855 "write": true, 00:37:02.855 "unmap": true, 00:37:02.855 "flush": true, 00:37:02.855 "reset": true, 00:37:02.855 "nvme_admin": false, 00:37:02.855 "nvme_io": false, 00:37:02.855 "nvme_io_md": false, 00:37:02.855 "write_zeroes": true, 00:37:02.855 "zcopy": true, 00:37:02.855 "get_zone_info": false, 00:37:02.855 "zone_management": false, 00:37:02.855 "zone_append": false, 00:37:02.855 "compare": false, 00:37:02.855 "compare_and_write": false, 00:37:02.855 "abort": true, 00:37:02.855 "seek_hole": false, 00:37:02.855 "seek_data": false, 00:37:02.855 "copy": true, 00:37:02.855 "nvme_iov_md": false 00:37:02.855 }, 00:37:02.855 "memory_domains": [ 00:37:02.855 { 00:37:02.855 "dma_device_id": "system", 00:37:02.855 "dma_device_type": 1 00:37:02.855 }, 00:37:02.855 { 00:37:02.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:02.855 "dma_device_type": 2 00:37:02.855 } 00:37:02.855 ], 00:37:02.855 "driver_specific": {} 00:37:02.855 } 00:37:02.855 ] 00:37:02.855 01:03:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:37:02.855 01:03:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:37:02.855 01:03:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:37:02.855 01:03:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:37:02.855 01:03:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:02.855 01:03:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:02.855 01:03:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:02.855 01:03:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:02.856 01:03:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:02.856 01:03:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:02.856 01:03:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:02.856 01:03:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:02.856 01:03:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:02.856 01:03:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:02.856 01:03:25 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:03.114 01:03:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:03.114 "name": "Existed_Raid", 00:37:03.114 "uuid": "5598807d-308a-4ec8-b8a0-da973941a66c", 00:37:03.114 "strip_size_kb": 0, 00:37:03.114 "state": "online", 00:37:03.114 "raid_level": "raid1", 00:37:03.114 "superblock": true, 00:37:03.114 "num_base_bdevs": 2, 00:37:03.114 "num_base_bdevs_discovered": 2, 00:37:03.114 "num_base_bdevs_operational": 2, 00:37:03.114 "base_bdevs_list": [ 00:37:03.114 { 00:37:03.114 "name": "BaseBdev1", 00:37:03.114 "uuid": "9ff64058-8d1a-49e4-8db9-ca2ade54104c", 00:37:03.114 "is_configured": true, 00:37:03.114 "data_offset": 256, 00:37:03.114 "data_size": 7936 00:37:03.114 }, 00:37:03.114 { 00:37:03.114 "name": "BaseBdev2", 00:37:03.114 "uuid": "1a63c686-18f7-418e-9504-747ea17eed6c", 00:37:03.114 "is_configured": true, 00:37:03.114 "data_offset": 256, 00:37:03.114 "data_size": 7936 00:37:03.114 } 00:37:03.114 ] 00:37:03.114 }' 00:37:03.114 01:03:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:03.114 01:03:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:03.681 01:03:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:37:03.681 01:03:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:37:03.681 01:03:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:37:03.681 01:03:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:37:03.681 01:03:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:37:03.681 01:03:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:37:03.681 01:03:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:37:03.681 01:03:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:37:03.939 [2024-07-25 01:03:26.386207] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:03.939 01:03:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:37:03.939 "name": "Existed_Raid", 00:37:03.939 "aliases": [ 00:37:03.939 "5598807d-308a-4ec8-b8a0-da973941a66c" 00:37:03.939 ], 00:37:03.939 "product_name": "Raid Volume", 00:37:03.939 "block_size": 4096, 00:37:03.939 "num_blocks": 7936, 00:37:03.939 "uuid": "5598807d-308a-4ec8-b8a0-da973941a66c", 00:37:03.939 "assigned_rate_limits": { 00:37:03.939 "rw_ios_per_sec": 0, 00:37:03.939 "rw_mbytes_per_sec": 0, 00:37:03.939 "r_mbytes_per_sec": 0, 00:37:03.939 "w_mbytes_per_sec": 0 00:37:03.939 }, 00:37:03.939 "claimed": false, 00:37:03.939 "zoned": false, 00:37:03.939 "supported_io_types": { 00:37:03.939 "read": true, 00:37:03.939 "write": true, 00:37:03.939 "unmap": false, 00:37:03.939 "flush": false, 00:37:03.939 "reset": true, 00:37:03.939 "nvme_admin": false, 00:37:03.939 "nvme_io": false, 00:37:03.939 "nvme_io_md": false, 00:37:03.939 "write_zeroes": true, 00:37:03.939 "zcopy": false, 00:37:03.939 "get_zone_info": false, 00:37:03.939 "zone_management": false, 00:37:03.939 
"zone_append": false, 00:37:03.939 "compare": false, 00:37:03.939 "compare_and_write": false, 00:37:03.939 "abort": false, 00:37:03.939 "seek_hole": false, 00:37:03.939 "seek_data": false, 00:37:03.939 "copy": false, 00:37:03.939 "nvme_iov_md": false 00:37:03.939 }, 00:37:03.939 "memory_domains": [ 00:37:03.939 { 00:37:03.939 "dma_device_id": "system", 00:37:03.939 "dma_device_type": 1 00:37:03.939 }, 00:37:03.939 { 00:37:03.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:03.939 "dma_device_type": 2 00:37:03.939 }, 00:37:03.939 { 00:37:03.939 "dma_device_id": "system", 00:37:03.939 "dma_device_type": 1 00:37:03.939 }, 00:37:03.939 { 00:37:03.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:03.939 "dma_device_type": 2 00:37:03.939 } 00:37:03.939 ], 00:37:03.939 "driver_specific": { 00:37:03.939 "raid": { 00:37:03.939 "uuid": "5598807d-308a-4ec8-b8a0-da973941a66c", 00:37:03.939 "strip_size_kb": 0, 00:37:03.939 "state": "online", 00:37:03.939 "raid_level": "raid1", 00:37:03.939 "superblock": true, 00:37:03.939 "num_base_bdevs": 2, 00:37:03.940 "num_base_bdevs_discovered": 2, 00:37:03.940 "num_base_bdevs_operational": 2, 00:37:03.940 "base_bdevs_list": [ 00:37:03.940 { 00:37:03.940 "name": "BaseBdev1", 00:37:03.940 "uuid": "9ff64058-8d1a-49e4-8db9-ca2ade54104c", 00:37:03.940 "is_configured": true, 00:37:03.940 "data_offset": 256, 00:37:03.940 "data_size": 7936 00:37:03.940 }, 00:37:03.940 { 00:37:03.940 "name": "BaseBdev2", 00:37:03.940 "uuid": "1a63c686-18f7-418e-9504-747ea17eed6c", 00:37:03.940 "is_configured": true, 00:37:03.940 "data_offset": 256, 00:37:03.940 "data_size": 7936 00:37:03.940 } 00:37:03.940 ] 00:37:03.940 } 00:37:03.940 } 00:37:03.940 }' 00:37:03.940 01:03:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:03.940 01:03:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:37:03.940 BaseBdev2' 00:37:03.940 01:03:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:03.940 01:03:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:03.940 01:03:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:37:04.198 01:03:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:04.198 "name": "BaseBdev1", 00:37:04.198 "aliases": [ 00:37:04.198 "9ff64058-8d1a-49e4-8db9-ca2ade54104c" 00:37:04.198 ], 00:37:04.198 "product_name": "Malloc disk", 00:37:04.198 "block_size": 4096, 00:37:04.198 "num_blocks": 8192, 00:37:04.198 "uuid": "9ff64058-8d1a-49e4-8db9-ca2ade54104c", 00:37:04.198 "assigned_rate_limits": { 00:37:04.198 "rw_ios_per_sec": 0, 00:37:04.198 "rw_mbytes_per_sec": 0, 00:37:04.198 "r_mbytes_per_sec": 0, 00:37:04.198 "w_mbytes_per_sec": 0 00:37:04.198 }, 00:37:04.198 "claimed": true, 00:37:04.198 "claim_type": "exclusive_write", 00:37:04.198 "zoned": false, 00:37:04.198 "supported_io_types": { 00:37:04.198 "read": true, 00:37:04.198 "write": true, 00:37:04.198 "unmap": true, 00:37:04.198 "flush": true, 00:37:04.198 "reset": true, 00:37:04.198 "nvme_admin": false, 00:37:04.198 "nvme_io": false, 00:37:04.198 "nvme_io_md": false, 00:37:04.198 "write_zeroes": true, 00:37:04.198 "zcopy": true, 00:37:04.198 "get_zone_info": false, 00:37:04.198 "zone_management": false, 
00:37:04.198 "zone_append": false, 00:37:04.198 "compare": false, 00:37:04.198 "compare_and_write": false, 00:37:04.198 "abort": true, 00:37:04.198 "seek_hole": false, 00:37:04.198 "seek_data": false, 00:37:04.198 "copy": true, 00:37:04.198 "nvme_iov_md": false 00:37:04.198 }, 00:37:04.198 "memory_domains": [ 00:37:04.198 { 00:37:04.198 "dma_device_id": "system", 00:37:04.198 "dma_device_type": 1 00:37:04.198 }, 00:37:04.198 { 00:37:04.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:04.198 "dma_device_type": 2 00:37:04.198 } 00:37:04.198 ], 00:37:04.198 "driver_specific": {} 00:37:04.198 }' 00:37:04.198 01:03:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:04.198 01:03:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:04.198 01:03:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:37:04.198 01:03:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:04.198 01:03:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:04.457 01:03:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:04.457 01:03:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:04.457 01:03:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:04.457 01:03:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:04.457 01:03:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:04.457 01:03:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:04.457 01:03:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:04.457 01:03:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:04.457 01:03:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:04.457 01:03:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:37:04.715 01:03:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:04.715 "name": "BaseBdev2", 00:37:04.715 "aliases": [ 00:37:04.715 "1a63c686-18f7-418e-9504-747ea17eed6c" 00:37:04.715 ], 00:37:04.715 "product_name": "Malloc disk", 00:37:04.715 "block_size": 4096, 00:37:04.715 "num_blocks": 8192, 00:37:04.715 "uuid": "1a63c686-18f7-418e-9504-747ea17eed6c", 00:37:04.715 "assigned_rate_limits": { 00:37:04.715 "rw_ios_per_sec": 0, 00:37:04.715 "rw_mbytes_per_sec": 0, 00:37:04.715 "r_mbytes_per_sec": 0, 00:37:04.715 "w_mbytes_per_sec": 0 00:37:04.715 }, 00:37:04.715 "claimed": true, 00:37:04.715 "claim_type": "exclusive_write", 00:37:04.715 "zoned": false, 00:37:04.715 "supported_io_types": { 00:37:04.715 "read": true, 00:37:04.715 "write": true, 00:37:04.715 "unmap": true, 00:37:04.715 "flush": true, 00:37:04.715 "reset": true, 00:37:04.715 "nvme_admin": false, 00:37:04.715 "nvme_io": false, 00:37:04.715 "nvme_io_md": false, 00:37:04.715 "write_zeroes": true, 00:37:04.715 "zcopy": true, 00:37:04.715 "get_zone_info": false, 00:37:04.715 "zone_management": false, 00:37:04.715 "zone_append": false, 00:37:04.715 "compare": false, 00:37:04.715 "compare_and_write": 
false, 00:37:04.715 "abort": true, 00:37:04.715 "seek_hole": false, 00:37:04.715 "seek_data": false, 00:37:04.715 "copy": true, 00:37:04.715 "nvme_iov_md": false 00:37:04.715 }, 00:37:04.715 "memory_domains": [ 00:37:04.715 { 00:37:04.715 "dma_device_id": "system", 00:37:04.715 "dma_device_type": 1 00:37:04.715 }, 00:37:04.715 { 00:37:04.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:04.715 "dma_device_type": 2 00:37:04.715 } 00:37:04.715 ], 00:37:04.715 "driver_specific": {} 00:37:04.715 }' 00:37:04.715 01:03:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:04.715 01:03:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:04.974 01:03:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:37:04.974 01:03:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:04.974 01:03:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:04.974 01:03:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:04.974 01:03:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:04.974 01:03:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:04.974 01:03:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:04.974 01:03:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:05.233 01:03:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:05.233 01:03:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:05.233 01:03:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:37:05.492 [2024-07-25 01:03:27.906381] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:05.492 01:03:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:37:05.492 01:03:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:37:05.492 01:03:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:37:05.492 01:03:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:37:05.492 01:03:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:37:05.492 01:03:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:37:05.492 01:03:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:05.492 01:03:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:05.492 01:03:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:05.492 01:03:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:05.492 01:03:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:05.492 01:03:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:05.492 01:03:28 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:05.492 01:03:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:05.492 01:03:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:05.492 01:03:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:05.492 01:03:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:05.750 01:03:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:05.750 "name": "Existed_Raid", 00:37:05.750 "uuid": "5598807d-308a-4ec8-b8a0-da973941a66c", 00:37:05.750 "strip_size_kb": 0, 00:37:05.750 "state": "online", 00:37:05.750 "raid_level": "raid1", 00:37:05.750 "superblock": true, 00:37:05.750 "num_base_bdevs": 2, 00:37:05.750 "num_base_bdevs_discovered": 1, 00:37:05.750 "num_base_bdevs_operational": 1, 00:37:05.750 "base_bdevs_list": [ 00:37:05.750 { 00:37:05.750 "name": null, 00:37:05.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:05.751 "is_configured": false, 00:37:05.751 "data_offset": 256, 00:37:05.751 "data_size": 7936 00:37:05.751 }, 00:37:05.751 { 00:37:05.751 "name": "BaseBdev2", 00:37:05.751 "uuid": "1a63c686-18f7-418e-9504-747ea17eed6c", 00:37:05.751 "is_configured": true, 00:37:05.751 "data_offset": 256, 00:37:05.751 "data_size": 7936 00:37:05.751 } 00:37:05.751 ] 00:37:05.751 }' 00:37:05.751 01:03:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:05.751 01:03:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:06.318 01:03:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:37:06.318 01:03:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:37:06.318 01:03:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:06.318 01:03:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:37:06.577 01:03:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:37:06.577 01:03:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:06.577 01:03:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:37:06.835 [2024-07-25 01:03:29.394046] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:37:06.835 [2024-07-25 01:03:29.394151] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:07.094 [2024-07-25 01:03:29.496541] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:07.094 [2024-07-25 01:03:29.496586] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:07.094 [2024-07-25 01:03:29.496595] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:37:07.094 01:03:29 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@285 -- # (( i++ )) 00:37:07.094 01:03:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:37:07.094 01:03:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:37:07.094 01:03:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:07.353 01:03:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:37:07.353 01:03:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:37:07.353 01:03:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:37:07.353 01:03:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 159118 00:37:07.353 01:03:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 159118 ']' 00:37:07.353 01:03:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 159118 00:37:07.353 01:03:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:37:07.353 01:03:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:07.353 01:03:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 159118 00:37:07.353 01:03:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:07.353 killing process with pid 159118 00:37:07.353 01:03:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:07.353 01:03:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 159118' 00:37:07.353 01:03:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@967 -- # kill 159118 00:37:07.353 [2024-07-25 01:03:29.793298] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:07.353 01:03:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # wait 159118 00:37:07.353 [2024-07-25 01:03:29.793416] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:08.728 ************************************ 00:37:08.728 END TEST raid_state_function_test_sb_4k 00:37:08.728 ************************************ 00:37:08.728 01:03:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:37:08.728 00:37:08.728 real 0m11.635s 00:37:08.728 user 0m19.813s 00:37:08.728 sys 0m1.691s 00:37:08.728 01:03:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:08.728 01:03:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:08.728 01:03:31 bdev_raid -- bdev/bdev_raid.sh@899 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:37:08.728 01:03:31 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:37:08.728 01:03:31 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:08.728 01:03:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:08.728 ************************************ 00:37:08.728 START TEST raid_superblock_test_4k 00:37:08.728 ************************************ 00:37:08.728 01:03:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1123 
-- # raid_superblock_test raid1 2 00:37:08.728 01:03:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:37:08.728 01:03:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:37:08.728 01:03:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:37:08.728 01:03:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:37:08.728 01:03:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:37:08.728 01:03:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:37:08.728 01:03:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:37:08.728 01:03:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:37:08.728 01:03:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:37:08.728 01:03:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local strip_size 00:37:08.728 01:03:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:37:08.728 01:03:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:37:08.728 01:03:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:37:08.728 01:03:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:37:08.728 01:03:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:37:08.728 01:03:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # raid_pid=159495 00:37:08.728 01:03:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # waitforlisten 159495 /var/tmp/spdk-raid.sock 00:37:08.728 01:03:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@829 -- # '[' -z 159495 ']' 00:37:08.728 01:03:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:37:08.728 01:03:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:37:08.728 01:03:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:08.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:37:08.728 01:03:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:37:08.728 01:03:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:08.728 01:03:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:37:08.728 [2024-07-25 01:03:31.272802] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
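The bdev_svc app starting up here is driven entirely over the RPC socket for the rest of raid_superblock_test. A condensed sketch of the setup sequence that follows in the trace, using only commands, socket path, bdev names and UUIDs that appear verbatim in the log below (this is a reader's summary, not the test script itself; the `rpc` function is just shorthand for the full scripts/rpc.py invocation):

    # shorthand for the rpc.py calls seen throughout this trace (assumed helper, not in the test)
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    # two 32 MiB / 4 KiB-block malloc bdevs, each wrapped in a passthru bdev with a fixed UUID
    rpc bdev_malloc_create 32 4096 -b malloc1
    rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    rpc bdev_malloc_create 32 4096 -b malloc2
    rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

    # assemble them into a raid1 bdev with an on-disk superblock (-s)
    rpc bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s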
00:37:08.729 [2024-07-25 01:03:31.273040] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159495 ] 00:37:08.986 [2024-07-25 01:03:31.451576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:08.986 [2024-07-25 01:03:31.636951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:09.245 [2024-07-25 01:03:31.844894] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:09.813 01:03:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:09.813 01:03:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # return 0 00:37:09.813 01:03:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:37:09.813 01:03:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:37:09.813 01:03:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:37:09.813 01:03:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:37:09.813 01:03:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:37:09.813 01:03:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:09.813 01:03:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:37:09.813 01:03:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:09.813 01:03:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:37:10.071 malloc1 00:37:10.071 01:03:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:10.071 [2024-07-25 01:03:32.660151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:10.071 [2024-07-25 01:03:32.660251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:10.071 [2024-07-25 01:03:32.660296] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:37:10.071 [2024-07-25 01:03:32.660322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:10.071 [2024-07-25 01:03:32.662626] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:10.071 [2024-07-25 01:03:32.662679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:10.071 pt1 00:37:10.071 01:03:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:37:10.071 01:03:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:37:10.071 01:03:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:37:10.071 01:03:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:37:10.071 01:03:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:37:10.071 01:03:32 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:10.071 01:03:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:37:10.071 01:03:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:10.071 01:03:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:37:10.330 malloc2 00:37:10.330 01:03:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:10.589 [2024-07-25 01:03:33.130948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:10.589 [2024-07-25 01:03:33.131062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:10.589 [2024-07-25 01:03:33.131098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:37:10.589 [2024-07-25 01:03:33.131121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:10.589 [2024-07-25 01:03:33.133371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:10.589 [2024-07-25 01:03:33.133423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:10.589 pt2 00:37:10.589 01:03:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:37:10.589 01:03:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:37:10.589 01:03:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:37:10.850 [2024-07-25 01:03:33.319033] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:10.850 [2024-07-25 01:03:33.320985] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:10.850 [2024-07-25 01:03:33.321186] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:37:10.850 [2024-07-25 01:03:33.321198] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:37:10.850 [2024-07-25 01:03:33.321331] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:37:10.850 [2024-07-25 01:03:33.321674] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:37:10.850 [2024-07-25 01:03:33.321694] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:37:10.850 [2024-07-25 01:03:33.321847] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:10.850 01:03:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:10.850 01:03:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:10.850 01:03:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:10.850 01:03:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:10.850 01:03:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 
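The verify_raid_bdev_state helper whose locals are being traced here asserts on the array's reported state. A rough sketch of what those checks amount to, inferred from the variables it sets and the jq filter visible in the trace (the authoritative logic lives in test/bdev/bdev_raid.sh; `rpc` is the shorthand from the sketch above):

    # fetch the named raid bdev's JSON and assert the fields the test cares about
    tmp=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r .state <<<"$tmp") == online ]]                    # expected_state
    [[ $(jq -r .raid_level <<<"$tmp") == raid1 ]]
    [[ $(jq -r .strip_size_kb <<<"$tmp") == 0 ]]                 # raid1 has no stripe
    [[ $(jq -r .num_base_bdevs_operational <<<"$tmp") == 2 ]]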
00:37:10.850 01:03:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:10.850 01:03:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:10.850 01:03:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:10.850 01:03:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:10.850 01:03:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:10.850 01:03:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:10.850 01:03:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:11.109 01:03:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:11.109 "name": "raid_bdev1", 00:37:11.109 "uuid": "b5ab5c0d-5809-4450-83fd-bf695476b59d", 00:37:11.109 "strip_size_kb": 0, 00:37:11.109 "state": "online", 00:37:11.109 "raid_level": "raid1", 00:37:11.109 "superblock": true, 00:37:11.109 "num_base_bdevs": 2, 00:37:11.109 "num_base_bdevs_discovered": 2, 00:37:11.109 "num_base_bdevs_operational": 2, 00:37:11.109 "base_bdevs_list": [ 00:37:11.109 { 00:37:11.109 "name": "pt1", 00:37:11.109 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:11.109 "is_configured": true, 00:37:11.109 "data_offset": 256, 00:37:11.109 "data_size": 7936 00:37:11.109 }, 00:37:11.109 { 00:37:11.109 "name": "pt2", 00:37:11.109 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:11.109 "is_configured": true, 00:37:11.109 "data_offset": 256, 00:37:11.109 "data_size": 7936 00:37:11.109 } 00:37:11.109 ] 00:37:11.109 }' 00:37:11.109 01:03:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:11.109 01:03:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:37:11.728 01:03:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:37:11.728 01:03:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:37:11.728 01:03:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:37:11.728 01:03:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:37:11.728 01:03:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:37:11.728 01:03:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:37:11.728 01:03:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:11.728 01:03:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:37:11.728 [2024-07-25 01:03:34.351388] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:11.988 01:03:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:37:11.988 "name": "raid_bdev1", 00:37:11.988 "aliases": [ 00:37:11.988 "b5ab5c0d-5809-4450-83fd-bf695476b59d" 00:37:11.988 ], 00:37:11.988 "product_name": "Raid Volume", 00:37:11.988 "block_size": 4096, 00:37:11.988 "num_blocks": 7936, 00:37:11.988 "uuid": "b5ab5c0d-5809-4450-83fd-bf695476b59d", 00:37:11.988 "assigned_rate_limits": { 00:37:11.988 
"rw_ios_per_sec": 0, 00:37:11.988 "rw_mbytes_per_sec": 0, 00:37:11.988 "r_mbytes_per_sec": 0, 00:37:11.988 "w_mbytes_per_sec": 0 00:37:11.988 }, 00:37:11.988 "claimed": false, 00:37:11.988 "zoned": false, 00:37:11.988 "supported_io_types": { 00:37:11.988 "read": true, 00:37:11.988 "write": true, 00:37:11.988 "unmap": false, 00:37:11.988 "flush": false, 00:37:11.988 "reset": true, 00:37:11.988 "nvme_admin": false, 00:37:11.988 "nvme_io": false, 00:37:11.988 "nvme_io_md": false, 00:37:11.988 "write_zeroes": true, 00:37:11.988 "zcopy": false, 00:37:11.988 "get_zone_info": false, 00:37:11.988 "zone_management": false, 00:37:11.988 "zone_append": false, 00:37:11.988 "compare": false, 00:37:11.988 "compare_and_write": false, 00:37:11.988 "abort": false, 00:37:11.988 "seek_hole": false, 00:37:11.988 "seek_data": false, 00:37:11.988 "copy": false, 00:37:11.988 "nvme_iov_md": false 00:37:11.988 }, 00:37:11.988 "memory_domains": [ 00:37:11.988 { 00:37:11.988 "dma_device_id": "system", 00:37:11.988 "dma_device_type": 1 00:37:11.988 }, 00:37:11.988 { 00:37:11.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:11.988 "dma_device_type": 2 00:37:11.988 }, 00:37:11.988 { 00:37:11.988 "dma_device_id": "system", 00:37:11.988 "dma_device_type": 1 00:37:11.988 }, 00:37:11.988 { 00:37:11.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:11.988 "dma_device_type": 2 00:37:11.988 } 00:37:11.988 ], 00:37:11.988 "driver_specific": { 00:37:11.988 "raid": { 00:37:11.988 "uuid": "b5ab5c0d-5809-4450-83fd-bf695476b59d", 00:37:11.988 "strip_size_kb": 0, 00:37:11.988 "state": "online", 00:37:11.988 "raid_level": "raid1", 00:37:11.988 "superblock": true, 00:37:11.988 "num_base_bdevs": 2, 00:37:11.988 "num_base_bdevs_discovered": 2, 00:37:11.988 "num_base_bdevs_operational": 2, 00:37:11.988 "base_bdevs_list": [ 00:37:11.988 { 00:37:11.988 "name": "pt1", 00:37:11.988 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:11.988 "is_configured": true, 00:37:11.988 "data_offset": 256, 00:37:11.988 "data_size": 7936 00:37:11.988 }, 00:37:11.988 { 00:37:11.988 "name": "pt2", 00:37:11.988 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:11.988 "is_configured": true, 00:37:11.988 "data_offset": 256, 00:37:11.988 "data_size": 7936 00:37:11.988 } 00:37:11.988 ] 00:37:11.988 } 00:37:11.988 } 00:37:11.988 }' 00:37:11.988 01:03:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:11.988 01:03:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:37:11.988 pt2' 00:37:11.988 01:03:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:11.988 01:03:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:11.988 01:03:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:37:12.247 01:03:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:12.247 "name": "pt1", 00:37:12.247 "aliases": [ 00:37:12.247 "00000000-0000-0000-0000-000000000001" 00:37:12.247 ], 00:37:12.247 "product_name": "passthru", 00:37:12.247 "block_size": 4096, 00:37:12.247 "num_blocks": 8192, 00:37:12.247 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:12.247 "assigned_rate_limits": { 00:37:12.247 "rw_ios_per_sec": 0, 00:37:12.247 "rw_mbytes_per_sec": 0, 00:37:12.247 "r_mbytes_per_sec": 0, 00:37:12.247 
"w_mbytes_per_sec": 0 00:37:12.247 }, 00:37:12.247 "claimed": true, 00:37:12.247 "claim_type": "exclusive_write", 00:37:12.247 "zoned": false, 00:37:12.247 "supported_io_types": { 00:37:12.247 "read": true, 00:37:12.247 "write": true, 00:37:12.247 "unmap": true, 00:37:12.247 "flush": true, 00:37:12.247 "reset": true, 00:37:12.247 "nvme_admin": false, 00:37:12.247 "nvme_io": false, 00:37:12.247 "nvme_io_md": false, 00:37:12.247 "write_zeroes": true, 00:37:12.247 "zcopy": true, 00:37:12.247 "get_zone_info": false, 00:37:12.247 "zone_management": false, 00:37:12.247 "zone_append": false, 00:37:12.247 "compare": false, 00:37:12.247 "compare_and_write": false, 00:37:12.247 "abort": true, 00:37:12.247 "seek_hole": false, 00:37:12.247 "seek_data": false, 00:37:12.247 "copy": true, 00:37:12.247 "nvme_iov_md": false 00:37:12.247 }, 00:37:12.247 "memory_domains": [ 00:37:12.247 { 00:37:12.247 "dma_device_id": "system", 00:37:12.247 "dma_device_type": 1 00:37:12.247 }, 00:37:12.247 { 00:37:12.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:12.247 "dma_device_type": 2 00:37:12.247 } 00:37:12.247 ], 00:37:12.247 "driver_specific": { 00:37:12.247 "passthru": { 00:37:12.247 "name": "pt1", 00:37:12.247 "base_bdev_name": "malloc1" 00:37:12.247 } 00:37:12.247 } 00:37:12.247 }' 00:37:12.247 01:03:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:12.247 01:03:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:12.247 01:03:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:37:12.247 01:03:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:12.247 01:03:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:12.247 01:03:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:12.247 01:03:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:12.247 01:03:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:12.506 01:03:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:12.506 01:03:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:12.506 01:03:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:12.506 01:03:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:12.506 01:03:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:12.506 01:03:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:37:12.506 01:03:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:12.765 01:03:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:12.766 "name": "pt2", 00:37:12.766 "aliases": [ 00:37:12.766 "00000000-0000-0000-0000-000000000002" 00:37:12.766 ], 00:37:12.766 "product_name": "passthru", 00:37:12.766 "block_size": 4096, 00:37:12.766 "num_blocks": 8192, 00:37:12.766 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:12.766 "assigned_rate_limits": { 00:37:12.766 "rw_ios_per_sec": 0, 00:37:12.766 "rw_mbytes_per_sec": 0, 00:37:12.766 "r_mbytes_per_sec": 0, 00:37:12.766 "w_mbytes_per_sec": 0 00:37:12.766 }, 00:37:12.766 "claimed": true, 00:37:12.766 "claim_type": 
"exclusive_write", 00:37:12.766 "zoned": false, 00:37:12.766 "supported_io_types": { 00:37:12.766 "read": true, 00:37:12.766 "write": true, 00:37:12.766 "unmap": true, 00:37:12.766 "flush": true, 00:37:12.766 "reset": true, 00:37:12.766 "nvme_admin": false, 00:37:12.766 "nvme_io": false, 00:37:12.766 "nvme_io_md": false, 00:37:12.766 "write_zeroes": true, 00:37:12.766 "zcopy": true, 00:37:12.766 "get_zone_info": false, 00:37:12.766 "zone_management": false, 00:37:12.766 "zone_append": false, 00:37:12.766 "compare": false, 00:37:12.766 "compare_and_write": false, 00:37:12.766 "abort": true, 00:37:12.766 "seek_hole": false, 00:37:12.766 "seek_data": false, 00:37:12.766 "copy": true, 00:37:12.766 "nvme_iov_md": false 00:37:12.766 }, 00:37:12.766 "memory_domains": [ 00:37:12.766 { 00:37:12.766 "dma_device_id": "system", 00:37:12.766 "dma_device_type": 1 00:37:12.766 }, 00:37:12.766 { 00:37:12.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:12.766 "dma_device_type": 2 00:37:12.766 } 00:37:12.766 ], 00:37:12.766 "driver_specific": { 00:37:12.766 "passthru": { 00:37:12.766 "name": "pt2", 00:37:12.766 "base_bdev_name": "malloc2" 00:37:12.766 } 00:37:12.766 } 00:37:12.766 }' 00:37:12.766 01:03:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:12.766 01:03:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:12.766 01:03:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:37:12.766 01:03:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:12.766 01:03:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:12.766 01:03:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:12.766 01:03:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:12.766 01:03:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:12.766 01:03:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:12.766 01:03:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:13.025 01:03:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:13.025 01:03:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:13.025 01:03:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:37:13.025 01:03:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:13.284 [2024-07-25 01:03:35.735635] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:13.284 01:03:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=b5ab5c0d-5809-4450-83fd-bf695476b59d 00:37:13.284 01:03:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # '[' -z b5ab5c0d-5809-4450-83fd-bf695476b59d ']' 00:37:13.284 01:03:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:13.284 [2024-07-25 01:03:35.931477] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:13.284 [2024-07-25 01:03:35.931503] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:13.284 
[2024-07-25 01:03:35.931577] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:13.284 [2024-07-25 01:03:35.931637] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:13.284 [2024-07-25 01:03:35.931647] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:37:13.543 01:03:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:37:13.543 01:03:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:13.802 01:03:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:37:13.802 01:03:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:37:13.802 01:03:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:37:13.802 01:03:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:37:14.061 01:03:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:37:14.061 01:03:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:37:14.320 01:03:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:37:14.320 01:03:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:37:14.320 01:03:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:37:14.320 01:03:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:37:14.320 01:03:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@648 -- # local es=0 00:37:14.320 01:03:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:37:14.320 01:03:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:14.320 01:03:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:14.320 01:03:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:14.320 01:03:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:14.320 01:03:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:14.320 01:03:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:14.320 01:03:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:14.320 01:03:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:37:14.320 01:03:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:37:14.580 [2024-07-25 01:03:37.105749] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:37:14.580 [2024-07-25 01:03:37.107690] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:37:14.580 [2024-07-25 01:03:37.107761] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:37:14.580 [2024-07-25 01:03:37.107846] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:37:14.580 [2024-07-25 01:03:37.107873] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:14.580 [2024-07-25 01:03:37.107882] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:37:14.580 request: 00:37:14.580 { 00:37:14.580 "name": "raid_bdev1", 00:37:14.580 "raid_level": "raid1", 00:37:14.580 "base_bdevs": [ 00:37:14.580 "malloc1", 00:37:14.580 "malloc2" 00:37:14.580 ], 00:37:14.580 "superblock": false, 00:37:14.580 "method": "bdev_raid_create", 00:37:14.580 "req_id": 1 00:37:14.580 } 00:37:14.580 Got JSON-RPC error response 00:37:14.580 response: 00:37:14.580 { 00:37:14.580 "code": -17, 00:37:14.580 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:37:14.580 } 00:37:14.580 01:03:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # es=1 00:37:14.580 01:03:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:14.580 01:03:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:14.580 01:03:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:14.580 01:03:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:37:14.580 01:03:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:14.839 01:03:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:37:14.839 01:03:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:37:14.839 01:03:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:15.097 [2024-07-25 01:03:37.617767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:15.097 [2024-07-25 01:03:37.617851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:15.097 [2024-07-25 01:03:37.617900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:37:15.097 [2024-07-25 01:03:37.617927] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:15.097 [2024-07-25 01:03:37.620273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:15.097 [2024-07-25 01:03:37.620341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:15.097 [2024-07-25 01:03:37.620454] 
bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:37:15.097 [2024-07-25 01:03:37.620507] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:15.097 pt1 00:37:15.097 01:03:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:37:15.097 01:03:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:15.098 01:03:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:15.098 01:03:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:15.098 01:03:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:15.098 01:03:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:15.098 01:03:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:15.098 01:03:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:15.098 01:03:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:15.098 01:03:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:15.098 01:03:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:15.098 01:03:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:15.356 01:03:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:15.356 "name": "raid_bdev1", 00:37:15.356 "uuid": "b5ab5c0d-5809-4450-83fd-bf695476b59d", 00:37:15.356 "strip_size_kb": 0, 00:37:15.356 "state": "configuring", 00:37:15.356 "raid_level": "raid1", 00:37:15.356 "superblock": true, 00:37:15.356 "num_base_bdevs": 2, 00:37:15.356 "num_base_bdevs_discovered": 1, 00:37:15.356 "num_base_bdevs_operational": 2, 00:37:15.356 "base_bdevs_list": [ 00:37:15.356 { 00:37:15.356 "name": "pt1", 00:37:15.356 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:15.356 "is_configured": true, 00:37:15.356 "data_offset": 256, 00:37:15.356 "data_size": 7936 00:37:15.356 }, 00:37:15.356 { 00:37:15.356 "name": null, 00:37:15.356 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:15.356 "is_configured": false, 00:37:15.356 "data_offset": 256, 00:37:15.356 "data_size": 7936 00:37:15.356 } 00:37:15.356 ] 00:37:15.356 }' 00:37:15.356 01:03:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:15.356 01:03:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:37:15.922 01:03:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:37:15.922 01:03:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:37:15.922 01:03:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:37:15.922 01:03:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:16.181 [2024-07-25 01:03:38.593988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:16.181 [2024-07-25 01:03:38.594082] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:16.181 [2024-07-25 01:03:38.594118] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:37:16.181 [2024-07-25 01:03:38.594144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:16.181 [2024-07-25 01:03:38.594680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:16.181 [2024-07-25 01:03:38.594741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:16.181 [2024-07-25 01:03:38.594863] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:16.181 [2024-07-25 01:03:38.594886] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:16.181 [2024-07-25 01:03:38.595015] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:37:16.181 [2024-07-25 01:03:38.595026] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:37:16.181 [2024-07-25 01:03:38.595136] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:37:16.181 [2024-07-25 01:03:38.595461] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:37:16.181 [2024-07-25 01:03:38.595482] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:37:16.181 [2024-07-25 01:03:38.595656] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:16.181 pt2 00:37:16.181 01:03:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:37:16.181 01:03:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:37:16.181 01:03:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:16.181 01:03:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:16.181 01:03:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:16.181 01:03:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:16.181 01:03:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:16.181 01:03:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:16.181 01:03:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:16.181 01:03:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:16.181 01:03:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:16.181 01:03:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:16.181 01:03:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:16.181 01:03:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:16.181 01:03:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:16.181 "name": "raid_bdev1", 00:37:16.181 "uuid": "b5ab5c0d-5809-4450-83fd-bf695476b59d", 00:37:16.181 "strip_size_kb": 0, 00:37:16.181 "state": "online", 00:37:16.181 "raid_level": "raid1", 
00:37:16.181 "superblock": true, 00:37:16.181 "num_base_bdevs": 2, 00:37:16.181 "num_base_bdevs_discovered": 2, 00:37:16.181 "num_base_bdevs_operational": 2, 00:37:16.181 "base_bdevs_list": [ 00:37:16.181 { 00:37:16.181 "name": "pt1", 00:37:16.181 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:16.181 "is_configured": true, 00:37:16.181 "data_offset": 256, 00:37:16.181 "data_size": 7936 00:37:16.181 }, 00:37:16.181 { 00:37:16.181 "name": "pt2", 00:37:16.181 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:16.181 "is_configured": true, 00:37:16.181 "data_offset": 256, 00:37:16.181 "data_size": 7936 00:37:16.181 } 00:37:16.181 ] 00:37:16.181 }' 00:37:16.181 01:03:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:16.181 01:03:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:37:16.747 01:03:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:37:16.747 01:03:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:37:16.747 01:03:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:37:16.747 01:03:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:37:16.747 01:03:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:37:16.747 01:03:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:37:16.747 01:03:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:16.747 01:03:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:37:17.006 [2024-07-25 01:03:39.650461] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:17.264 01:03:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:37:17.264 "name": "raid_bdev1", 00:37:17.264 "aliases": [ 00:37:17.264 "b5ab5c0d-5809-4450-83fd-bf695476b59d" 00:37:17.264 ], 00:37:17.264 "product_name": "Raid Volume", 00:37:17.264 "block_size": 4096, 00:37:17.264 "num_blocks": 7936, 00:37:17.264 "uuid": "b5ab5c0d-5809-4450-83fd-bf695476b59d", 00:37:17.264 "assigned_rate_limits": { 00:37:17.264 "rw_ios_per_sec": 0, 00:37:17.264 "rw_mbytes_per_sec": 0, 00:37:17.264 "r_mbytes_per_sec": 0, 00:37:17.264 "w_mbytes_per_sec": 0 00:37:17.264 }, 00:37:17.264 "claimed": false, 00:37:17.264 "zoned": false, 00:37:17.264 "supported_io_types": { 00:37:17.264 "read": true, 00:37:17.264 "write": true, 00:37:17.264 "unmap": false, 00:37:17.264 "flush": false, 00:37:17.264 "reset": true, 00:37:17.264 "nvme_admin": false, 00:37:17.264 "nvme_io": false, 00:37:17.264 "nvme_io_md": false, 00:37:17.264 "write_zeroes": true, 00:37:17.264 "zcopy": false, 00:37:17.264 "get_zone_info": false, 00:37:17.264 "zone_management": false, 00:37:17.264 "zone_append": false, 00:37:17.264 "compare": false, 00:37:17.264 "compare_and_write": false, 00:37:17.264 "abort": false, 00:37:17.264 "seek_hole": false, 00:37:17.264 "seek_data": false, 00:37:17.264 "copy": false, 00:37:17.264 "nvme_iov_md": false 00:37:17.264 }, 00:37:17.264 "memory_domains": [ 00:37:17.264 { 00:37:17.264 "dma_device_id": "system", 00:37:17.264 "dma_device_type": 1 00:37:17.264 }, 00:37:17.264 { 00:37:17.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:17.264 "dma_device_type": 2 00:37:17.264 }, 
00:37:17.264 { 00:37:17.264 "dma_device_id": "system", 00:37:17.264 "dma_device_type": 1 00:37:17.264 }, 00:37:17.264 { 00:37:17.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:17.264 "dma_device_type": 2 00:37:17.264 } 00:37:17.264 ], 00:37:17.264 "driver_specific": { 00:37:17.264 "raid": { 00:37:17.264 "uuid": "b5ab5c0d-5809-4450-83fd-bf695476b59d", 00:37:17.264 "strip_size_kb": 0, 00:37:17.264 "state": "online", 00:37:17.264 "raid_level": "raid1", 00:37:17.264 "superblock": true, 00:37:17.264 "num_base_bdevs": 2, 00:37:17.264 "num_base_bdevs_discovered": 2, 00:37:17.264 "num_base_bdevs_operational": 2, 00:37:17.264 "base_bdevs_list": [ 00:37:17.264 { 00:37:17.264 "name": "pt1", 00:37:17.264 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:17.264 "is_configured": true, 00:37:17.264 "data_offset": 256, 00:37:17.264 "data_size": 7936 00:37:17.264 }, 00:37:17.264 { 00:37:17.264 "name": "pt2", 00:37:17.264 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:17.264 "is_configured": true, 00:37:17.264 "data_offset": 256, 00:37:17.264 "data_size": 7936 00:37:17.264 } 00:37:17.264 ] 00:37:17.264 } 00:37:17.264 } 00:37:17.264 }' 00:37:17.264 01:03:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:17.264 01:03:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:37:17.265 pt2' 00:37:17.265 01:03:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:17.265 01:03:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:17.265 01:03:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:37:17.265 01:03:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:17.265 "name": "pt1", 00:37:17.265 "aliases": [ 00:37:17.265 "00000000-0000-0000-0000-000000000001" 00:37:17.265 ], 00:37:17.265 "product_name": "passthru", 00:37:17.265 "block_size": 4096, 00:37:17.265 "num_blocks": 8192, 00:37:17.265 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:17.265 "assigned_rate_limits": { 00:37:17.265 "rw_ios_per_sec": 0, 00:37:17.265 "rw_mbytes_per_sec": 0, 00:37:17.265 "r_mbytes_per_sec": 0, 00:37:17.265 "w_mbytes_per_sec": 0 00:37:17.265 }, 00:37:17.265 "claimed": true, 00:37:17.265 "claim_type": "exclusive_write", 00:37:17.265 "zoned": false, 00:37:17.265 "supported_io_types": { 00:37:17.265 "read": true, 00:37:17.265 "write": true, 00:37:17.265 "unmap": true, 00:37:17.265 "flush": true, 00:37:17.265 "reset": true, 00:37:17.265 "nvme_admin": false, 00:37:17.265 "nvme_io": false, 00:37:17.265 "nvme_io_md": false, 00:37:17.265 "write_zeroes": true, 00:37:17.265 "zcopy": true, 00:37:17.265 "get_zone_info": false, 00:37:17.265 "zone_management": false, 00:37:17.265 "zone_append": false, 00:37:17.265 "compare": false, 00:37:17.265 "compare_and_write": false, 00:37:17.265 "abort": true, 00:37:17.265 "seek_hole": false, 00:37:17.265 "seek_data": false, 00:37:17.265 "copy": true, 00:37:17.265 "nvme_iov_md": false 00:37:17.265 }, 00:37:17.265 "memory_domains": [ 00:37:17.265 { 00:37:17.265 "dma_device_id": "system", 00:37:17.265 "dma_device_type": 1 00:37:17.265 }, 00:37:17.265 { 00:37:17.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:17.265 "dma_device_type": 2 00:37:17.265 } 00:37:17.265 ], 00:37:17.265 "driver_specific": { 00:37:17.265 
"passthru": { 00:37:17.265 "name": "pt1", 00:37:17.265 "base_bdev_name": "malloc1" 00:37:17.265 } 00:37:17.265 } 00:37:17.265 }' 00:37:17.265 01:03:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:17.523 01:03:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:17.523 01:03:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:37:17.523 01:03:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:17.523 01:03:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:17.523 01:03:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:17.523 01:03:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:17.523 01:03:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:17.781 01:03:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:17.782 01:03:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:17.782 01:03:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:17.782 01:03:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:17.782 01:03:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:17.782 01:03:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:37:17.782 01:03:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:18.040 01:03:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:18.040 "name": "pt2", 00:37:18.040 "aliases": [ 00:37:18.040 "00000000-0000-0000-0000-000000000002" 00:37:18.040 ], 00:37:18.040 "product_name": "passthru", 00:37:18.040 "block_size": 4096, 00:37:18.040 "num_blocks": 8192, 00:37:18.040 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:18.040 "assigned_rate_limits": { 00:37:18.040 "rw_ios_per_sec": 0, 00:37:18.040 "rw_mbytes_per_sec": 0, 00:37:18.040 "r_mbytes_per_sec": 0, 00:37:18.040 "w_mbytes_per_sec": 0 00:37:18.040 }, 00:37:18.040 "claimed": true, 00:37:18.040 "claim_type": "exclusive_write", 00:37:18.040 "zoned": false, 00:37:18.040 "supported_io_types": { 00:37:18.040 "read": true, 00:37:18.040 "write": true, 00:37:18.040 "unmap": true, 00:37:18.040 "flush": true, 00:37:18.040 "reset": true, 00:37:18.040 "nvme_admin": false, 00:37:18.040 "nvme_io": false, 00:37:18.040 "nvme_io_md": false, 00:37:18.040 "write_zeroes": true, 00:37:18.040 "zcopy": true, 00:37:18.040 "get_zone_info": false, 00:37:18.040 "zone_management": false, 00:37:18.040 "zone_append": false, 00:37:18.040 "compare": false, 00:37:18.040 "compare_and_write": false, 00:37:18.040 "abort": true, 00:37:18.040 "seek_hole": false, 00:37:18.040 "seek_data": false, 00:37:18.040 "copy": true, 00:37:18.040 "nvme_iov_md": false 00:37:18.040 }, 00:37:18.040 "memory_domains": [ 00:37:18.040 { 00:37:18.040 "dma_device_id": "system", 00:37:18.040 "dma_device_type": 1 00:37:18.040 }, 00:37:18.040 { 00:37:18.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:18.040 "dma_device_type": 2 00:37:18.040 } 00:37:18.040 ], 00:37:18.040 "driver_specific": { 00:37:18.040 "passthru": { 00:37:18.040 "name": "pt2", 00:37:18.040 "base_bdev_name": "malloc2" 00:37:18.040 } 
00:37:18.040 } 00:37:18.040 }' 00:37:18.040 01:03:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:18.040 01:03:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:18.040 01:03:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:37:18.040 01:03:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:18.040 01:03:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:18.301 01:03:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:18.301 01:03:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:18.301 01:03:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:18.301 01:03:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:18.301 01:03:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:18.301 01:03:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:18.301 01:03:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:18.301 01:03:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:18.301 01:03:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:37:18.560 [2024-07-25 01:03:41.047390] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:18.560 01:03:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # '[' b5ab5c0d-5809-4450-83fd-bf695476b59d '!=' b5ab5c0d-5809-4450-83fd-bf695476b59d ']' 00:37:18.560 01:03:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:37:18.560 01:03:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:37:18.560 01:03:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:37:18.560 01:03:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:37:18.819 [2024-07-25 01:03:41.314361] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:37:18.819 01:03:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:18.819 01:03:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:18.819 01:03:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:18.819 01:03:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:18.819 01:03:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:18.819 01:03:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:18.819 01:03:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:18.819 01:03:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:18.819 01:03:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:18.819 01:03:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 
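The trace above removes base bdev pt1 from the running array (bdev_passthru_delete pt1) and then calls verify_raid_bdev_state expecting raid_bdev1 to stay online with a single discovered base bdev. A minimal sketch of that degraded-state check, reassembled only from the RPC calls and jq filters that appear in this trace (the socket path and bdev names are simply what this run uses, not fixed values):

    # Sketch only -- these commands mirror what bdev_raid.sh traces here;
    # they are not the test script itself.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Pull one base bdev out from under the online raid1 array
    $RPC bdev_passthru_delete pt1

    # The array is expected to stay "online" but report only 1 of 2 base bdevs discovered
    $RPC bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1")
                 | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'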
00:37:18.819 01:03:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:18.819 01:03:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:19.077 01:03:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:19.077 "name": "raid_bdev1", 00:37:19.077 "uuid": "b5ab5c0d-5809-4450-83fd-bf695476b59d", 00:37:19.077 "strip_size_kb": 0, 00:37:19.077 "state": "online", 00:37:19.077 "raid_level": "raid1", 00:37:19.077 "superblock": true, 00:37:19.077 "num_base_bdevs": 2, 00:37:19.077 "num_base_bdevs_discovered": 1, 00:37:19.077 "num_base_bdevs_operational": 1, 00:37:19.077 "base_bdevs_list": [ 00:37:19.077 { 00:37:19.077 "name": null, 00:37:19.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:19.077 "is_configured": false, 00:37:19.077 "data_offset": 256, 00:37:19.077 "data_size": 7936 00:37:19.077 }, 00:37:19.077 { 00:37:19.077 "name": "pt2", 00:37:19.077 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:19.077 "is_configured": true, 00:37:19.077 "data_offset": 256, 00:37:19.077 "data_size": 7936 00:37:19.077 } 00:37:19.077 ] 00:37:19.077 }' 00:37:19.077 01:03:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:19.077 01:03:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:37:19.644 01:03:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:19.902 [2024-07-25 01:03:42.314547] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:19.902 [2024-07-25 01:03:42.314580] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:19.902 [2024-07-25 01:03:42.314655] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:19.902 [2024-07-25 01:03:42.314698] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:19.902 [2024-07-25 01:03:42.314707] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:37:19.902 01:03:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:19.902 01:03:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:37:20.161 01:03:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:37:20.161 01:03:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:37:20.161 01:03:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:37:20.161 01:03:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:37:20.161 01:03:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:37:20.161 01:03:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:37:20.161 01:03:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:37:20.161 01:03:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:37:20.419 01:03:42 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:37:20.419 01:03:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@518 -- # i=1 00:37:20.419 01:03:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:20.419 [2024-07-25 01:03:42.974847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:20.419 [2024-07-25 01:03:42.974945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:20.419 [2024-07-25 01:03:42.974972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:37:20.419 [2024-07-25 01:03:42.974996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:20.419 [2024-07-25 01:03:42.977296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:20.419 [2024-07-25 01:03:42.977365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:20.419 [2024-07-25 01:03:42.977488] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:20.419 [2024-07-25 01:03:42.977545] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:20.419 [2024-07-25 01:03:42.977642] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:37:20.419 [2024-07-25 01:03:42.977651] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:37:20.419 [2024-07-25 01:03:42.977736] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:37:20.419 [2024-07-25 01:03:42.978008] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:37:20.419 [2024-07-25 01:03:42.978019] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:37:20.419 [2024-07-25 01:03:42.978159] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:20.419 pt2 00:37:20.419 01:03:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:20.419 01:03:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:20.419 01:03:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:20.419 01:03:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:20.419 01:03:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:20.420 01:03:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:20.420 01:03:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:20.420 01:03:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:20.420 01:03:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:20.420 01:03:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:20.420 01:03:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:20.420 01:03:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:20.678 01:03:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:20.678 "name": "raid_bdev1", 00:37:20.678 "uuid": "b5ab5c0d-5809-4450-83fd-bf695476b59d", 00:37:20.678 "strip_size_kb": 0, 00:37:20.678 "state": "online", 00:37:20.678 "raid_level": "raid1", 00:37:20.678 "superblock": true, 00:37:20.678 "num_base_bdevs": 2, 00:37:20.678 "num_base_bdevs_discovered": 1, 00:37:20.678 "num_base_bdevs_operational": 1, 00:37:20.678 "base_bdevs_list": [ 00:37:20.678 { 00:37:20.678 "name": null, 00:37:20.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:20.678 "is_configured": false, 00:37:20.678 "data_offset": 256, 00:37:20.678 "data_size": 7936 00:37:20.678 }, 00:37:20.678 { 00:37:20.678 "name": "pt2", 00:37:20.678 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:20.678 "is_configured": true, 00:37:20.678 "data_offset": 256, 00:37:20.678 "data_size": 7936 00:37:20.678 } 00:37:20.678 ] 00:37:20.678 }' 00:37:20.678 01:03:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:20.678 01:03:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:37:21.245 01:03:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:21.503 [2024-07-25 01:03:43.914980] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:21.503 [2024-07-25 01:03:43.915012] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:21.503 [2024-07-25 01:03:43.915088] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:21.503 [2024-07-25 01:03:43.915133] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:21.503 [2024-07-25 01:03:43.915142] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:37:21.503 01:03:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:21.503 01:03:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:37:21.761 01:03:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:37:21.761 01:03:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:37:21.761 01:03:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:37:21.761 01:03:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:21.761 [2024-07-25 01:03:44.375069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:21.761 [2024-07-25 01:03:44.375165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:21.761 [2024-07-25 01:03:44.375204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:37:21.761 [2024-07-25 01:03:44.375227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:21.761 [2024-07-25 01:03:44.377506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:21.761 
[2024-07-25 01:03:44.377570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:21.761 [2024-07-25 01:03:44.377691] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:37:21.761 [2024-07-25 01:03:44.377743] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:21.761 [2024-07-25 01:03:44.377879] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:37:21.761 [2024-07-25 01:03:44.377889] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:21.761 [2024-07-25 01:03:44.377904] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state configuring 00:37:21.761 [2024-07-25 01:03:44.377956] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:21.761 [2024-07-25 01:03:44.378025] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:37:21.761 [2024-07-25 01:03:44.378033] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:37:21.761 [2024-07-25 01:03:44.378117] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:37:21.761 [2024-07-25 01:03:44.378410] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:37:21.761 [2024-07-25 01:03:44.378422] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:37:21.761 [2024-07-25 01:03:44.378567] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:21.761 pt1 00:37:21.761 01:03:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:37:21.761 01:03:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:21.761 01:03:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:21.761 01:03:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:21.761 01:03:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:21.761 01:03:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:21.761 01:03:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:21.761 01:03:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:21.761 01:03:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:21.761 01:03:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:21.761 01:03:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:21.761 01:03:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:21.761 01:03:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:22.019 01:03:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:22.019 "name": "raid_bdev1", 00:37:22.019 "uuid": "b5ab5c0d-5809-4450-83fd-bf695476b59d", 00:37:22.019 "strip_size_kb": 0, 00:37:22.019 "state": "online", 00:37:22.019 
"raid_level": "raid1", 00:37:22.019 "superblock": true, 00:37:22.019 "num_base_bdevs": 2, 00:37:22.019 "num_base_bdevs_discovered": 1, 00:37:22.019 "num_base_bdevs_operational": 1, 00:37:22.019 "base_bdevs_list": [ 00:37:22.019 { 00:37:22.019 "name": null, 00:37:22.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:22.019 "is_configured": false, 00:37:22.019 "data_offset": 256, 00:37:22.019 "data_size": 7936 00:37:22.019 }, 00:37:22.019 { 00:37:22.019 "name": "pt2", 00:37:22.019 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:22.019 "is_configured": true, 00:37:22.019 "data_offset": 256, 00:37:22.019 "data_size": 7936 00:37:22.019 } 00:37:22.019 ] 00:37:22.019 }' 00:37:22.019 01:03:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:22.019 01:03:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:37:22.589 01:03:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:37:22.589 01:03:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:37:22.847 01:03:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:37:22.847 01:03:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:22.847 01:03:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:37:22.847 [2024-07-25 01:03:45.431455] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:22.847 01:03:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' b5ab5c0d-5809-4450-83fd-bf695476b59d '!=' b5ab5c0d-5809-4450-83fd-bf695476b59d ']' 00:37:22.847 01:03:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@562 -- # killprocess 159495 00:37:22.847 01:03:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@948 -- # '[' -z 159495 ']' 00:37:22.847 01:03:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # kill -0 159495 00:37:22.847 01:03:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # uname 00:37:22.847 01:03:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:22.847 01:03:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 159495 00:37:22.847 killing process with pid 159495 00:37:22.847 01:03:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:22.847 01:03:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:22.847 01:03:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 159495' 00:37:22.847 01:03:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@967 -- # kill 159495 00:37:22.847 [2024-07-25 01:03:45.479378] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:22.847 01:03:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # wait 159495 00:37:22.847 [2024-07-25 01:03:45.479439] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:22.847 [2024-07-25 01:03:45.479482] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:37:22.847 [2024-07-25 01:03:45.479491] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:37:23.106 [2024-07-25 01:03:45.683943] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:24.490 ************************************ 00:37:24.490 END TEST raid_superblock_test_4k 00:37:24.490 ************************************ 00:37:24.490 01:03:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@564 -- # return 0 00:37:24.490 00:37:24.490 real 0m15.821s 00:37:24.490 user 0m27.748s 00:37:24.490 sys 0m2.452s 00:37:24.490 01:03:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:24.490 01:03:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:37:24.490 01:03:47 bdev_raid -- bdev/bdev_raid.sh@900 -- # '[' true = true ']' 00:37:24.490 01:03:47 bdev_raid -- bdev/bdev_raid.sh@901 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:37:24.490 01:03:47 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:37:24.490 01:03:47 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:24.490 01:03:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:24.490 ************************************ 00:37:24.490 START TEST raid_rebuild_test_sb_4k 00:37:24.490 ************************************ 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local verify=true 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local strip_size 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local create_arg 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local data_offset 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # raid_pid=160010 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # waitforlisten 160010 /var/tmp/spdk-raid.sock 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 160010 ']' 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:37:24.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:24.490 01:03:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:24.749 [2024-07-25 01:03:47.153051] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:37:24.749 [2024-07-25 01:03:47.153196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid160010 ] 00:37:24.749 I/O size of 3145728 is greater than zero copy threshold (65536). 00:37:24.749 Zero copy mechanism will not be used. 
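At this point raid_rebuild_test_sb_4k has launched a standalone bdevperf app (-r /var/tmp/spdk-raid.sock -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid) and is waiting for its RPC socket. The trace that follows assembles the array over that socket before any rebuild is exercised; a rough sketch of that setup sequence, using only calls visible in this log (the 32 MB / 4096-byte-block sizes and the BaseBdev names are what this run passes, and the delayed "spare" bdev the rebuild later uses is omitted here):

    # Sketch of the setup the following trace performs, not the script itself.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Two malloc base bdevs (32 MB, 4096-byte blocks), each wrapped in a passthru bdev
    $RPC bdev_malloc_create 32 4096 -b BaseBdev1_malloc
    $RPC bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    $RPC bdev_malloc_create 32 4096 -b BaseBdev2_malloc
    $RPC bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2

    # raid1 array with an on-disk superblock (-s), as created further down in the trace
    $RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1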
00:37:24.749 [2024-07-25 01:03:47.310750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:25.008 [2024-07-25 01:03:47.496508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:25.266 [2024-07-25 01:03:47.698230] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:25.524 01:03:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:25.524 01:03:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:37:25.524 01:03:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:37:25.524 01:03:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:37:25.782 BaseBdev1_malloc 00:37:25.782 01:03:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:26.040 [2024-07-25 01:03:48.561499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:37:26.040 [2024-07-25 01:03:48.561598] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:26.040 [2024-07-25 01:03:48.561636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:37:26.040 [2024-07-25 01:03:48.561667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:26.040 [2024-07-25 01:03:48.563979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:26.040 [2024-07-25 01:03:48.564054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:26.040 BaseBdev1 00:37:26.040 01:03:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:37:26.040 01:03:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:37:26.298 BaseBdev2_malloc 00:37:26.298 01:03:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:37:26.556 [2024-07-25 01:03:48.966560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:37:26.556 [2024-07-25 01:03:48.966675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:26.556 [2024-07-25 01:03:48.966710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:37:26.556 [2024-07-25 01:03:48.966729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:26.556 [2024-07-25 01:03:48.968936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:26.556 [2024-07-25 01:03:48.968984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:37:26.556 BaseBdev2 00:37:26.556 01:03:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:37:26.814 spare_malloc 00:37:26.814 01:03:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:37:26.814 spare_delay 00:37:26.814 01:03:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:37:27.073 [2024-07-25 01:03:49.587251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:27.073 [2024-07-25 01:03:49.587340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:27.073 [2024-07-25 01:03:49.587374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:37:27.073 [2024-07-25 01:03:49.587399] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:27.073 [2024-07-25 01:03:49.589657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:27.073 [2024-07-25 01:03:49.589714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:27.073 spare 00:37:27.073 01:03:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:37:27.331 [2024-07-25 01:03:49.819357] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:27.331 [2024-07-25 01:03:49.821322] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:27.331 [2024-07-25 01:03:49.821538] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:37:27.331 [2024-07-25 01:03:49.821549] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:37:27.331 [2024-07-25 01:03:49.821669] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:37:27.331 [2024-07-25 01:03:49.821975] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:37:27.331 [2024-07-25 01:03:49.821986] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:37:27.331 [2024-07-25 01:03:49.822110] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:27.331 01:03:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:27.331 01:03:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:27.331 01:03:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:27.331 01:03:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:27.331 01:03:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:27.331 01:03:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:27.331 01:03:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:27.331 01:03:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:27.331 01:03:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:27.331 01:03:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:27.331 01:03:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:37:27.331 01:03:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:27.589 01:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:27.589 "name": "raid_bdev1", 00:37:27.589 "uuid": "601516b4-16ea-42c3-b0e2-ddca8989033a", 00:37:27.589 "strip_size_kb": 0, 00:37:27.589 "state": "online", 00:37:27.589 "raid_level": "raid1", 00:37:27.589 "superblock": true, 00:37:27.589 "num_base_bdevs": 2, 00:37:27.589 "num_base_bdevs_discovered": 2, 00:37:27.589 "num_base_bdevs_operational": 2, 00:37:27.589 "base_bdevs_list": [ 00:37:27.589 { 00:37:27.589 "name": "BaseBdev1", 00:37:27.589 "uuid": "dacd0403-f544-5790-85a8-044e407f6992", 00:37:27.589 "is_configured": true, 00:37:27.589 "data_offset": 256, 00:37:27.589 "data_size": 7936 00:37:27.589 }, 00:37:27.589 { 00:37:27.589 "name": "BaseBdev2", 00:37:27.589 "uuid": "bb03b4e6-9aab-53f0-9447-e035172e37ca", 00:37:27.589 "is_configured": true, 00:37:27.589 "data_offset": 256, 00:37:27.589 "data_size": 7936 00:37:27.590 } 00:37:27.590 ] 00:37:27.590 }' 00:37:27.590 01:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:27.590 01:03:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:27.847 01:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:27.847 01:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:37:28.105 [2024-07-25 01:03:50.607638] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:28.105 01:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:37:28.105 01:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:28.105 01:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:37:28.364 01:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:37:28.364 01:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:37:28.364 01:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:37:28.364 01:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:37:28.364 01:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:37:28.364 01:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:28.364 01:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:37:28.364 01:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:28.364 01:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:37:28.364 01:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:28.364 01:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:37:28.364 01:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:28.364 
01:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:28.364 01:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:37:28.623 [2024-07-25 01:03:51.043581] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:37:28.623 /dev/nbd0 00:37:28.623 01:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:28.623 01:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:28.623 01:03:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:37:28.623 01:03:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:37:28.623 01:03:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:37:28.623 01:03:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:37:28.623 01:03:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:37:28.623 01:03:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:37:28.623 01:03:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:37:28.623 01:03:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:37:28.623 01:03:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:28.623 1+0 records in 00:37:28.623 1+0 records out 00:37:28.623 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242758 s, 16.9 MB/s 00:37:28.623 01:03:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:28.623 01:03:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:37:28.623 01:03:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:28.623 01:03:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:37:28.623 01:03:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:37:28.623 01:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:28.623 01:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:28.623 01:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:37:28.623 01:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:37:28.623 01:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:37:29.189 7936+0 records in 00:37:29.189 7936+0 records out 00:37:29.189 32505856 bytes (33 MB, 31 MiB) copied, 0.667277 s, 48.7 MB/s 00:37:29.189 01:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:37:29.189 01:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:29.189 01:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:37:29.189 01:03:51 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:29.189 01:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:37:29.189 01:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:29.190 01:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:37:29.448 01:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:29.448 [2024-07-25 01:03:52.065153] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:29.448 01:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:29.448 01:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:29.448 01:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:29.448 01:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:29.448 01:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:29.448 01:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:37:29.448 01:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:37:29.448 01:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:37:29.706 [2024-07-25 01:03:52.284934] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:29.706 01:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:29.706 01:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:29.706 01:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:29.706 01:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:29.706 01:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:29.706 01:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:29.706 01:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:29.707 01:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:29.707 01:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:29.707 01:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:29.707 01:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:29.707 01:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:29.965 01:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:29.965 "name": "raid_bdev1", 00:37:29.965 "uuid": "601516b4-16ea-42c3-b0e2-ddca8989033a", 00:37:29.965 "strip_size_kb": 0, 00:37:29.965 "state": "online", 00:37:29.965 "raid_level": "raid1", 00:37:29.965 "superblock": true, 00:37:29.965 "num_base_bdevs": 2, 00:37:29.965 "num_base_bdevs_discovered": 
1, 00:37:29.965 "num_base_bdevs_operational": 1, 00:37:29.966 "base_bdevs_list": [ 00:37:29.966 { 00:37:29.966 "name": null, 00:37:29.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:29.966 "is_configured": false, 00:37:29.966 "data_offset": 256, 00:37:29.966 "data_size": 7936 00:37:29.966 }, 00:37:29.966 { 00:37:29.966 "name": "BaseBdev2", 00:37:29.966 "uuid": "bb03b4e6-9aab-53f0-9447-e035172e37ca", 00:37:29.966 "is_configured": true, 00:37:29.966 "data_offset": 256, 00:37:29.966 "data_size": 7936 00:37:29.966 } 00:37:29.966 ] 00:37:29.966 }' 00:37:29.966 01:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:29.966 01:03:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:30.534 01:03:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:37:30.793 [2024-07-25 01:03:53.193105] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:30.793 [2024-07-25 01:03:53.207938] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018cff0 00:37:30.793 [2024-07-25 01:03:53.209871] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:30.793 01:03:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # sleep 1 00:37:31.730 01:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:31.730 01:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:31.730 01:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:31.730 01:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:31.730 01:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:31.730 01:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:31.730 01:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:31.989 01:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:31.989 "name": "raid_bdev1", 00:37:31.989 "uuid": "601516b4-16ea-42c3-b0e2-ddca8989033a", 00:37:31.989 "strip_size_kb": 0, 00:37:31.989 "state": "online", 00:37:31.989 "raid_level": "raid1", 00:37:31.989 "superblock": true, 00:37:31.989 "num_base_bdevs": 2, 00:37:31.989 "num_base_bdevs_discovered": 2, 00:37:31.989 "num_base_bdevs_operational": 2, 00:37:31.989 "process": { 00:37:31.989 "type": "rebuild", 00:37:31.989 "target": "spare", 00:37:31.989 "progress": { 00:37:31.989 "blocks": 2816, 00:37:31.989 "percent": 35 00:37:31.989 } 00:37:31.989 }, 00:37:31.989 "base_bdevs_list": [ 00:37:31.989 { 00:37:31.989 "name": "spare", 00:37:31.989 "uuid": "c3dcd583-1759-5291-8eda-24e637033a34", 00:37:31.989 "is_configured": true, 00:37:31.989 "data_offset": 256, 00:37:31.989 "data_size": 7936 00:37:31.989 }, 00:37:31.989 { 00:37:31.989 "name": "BaseBdev2", 00:37:31.989 "uuid": "bb03b4e6-9aab-53f0-9447-e035172e37ca", 00:37:31.989 "is_configured": true, 00:37:31.989 "data_offset": 256, 00:37:31.989 "data_size": 7936 00:37:31.989 } 00:37:31.989 ] 00:37:31.989 }' 00:37:31.989 01:03:54 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:31.989 01:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:31.989 01:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:31.989 01:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:31.989 01:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:37:32.249 [2024-07-25 01:03:54.751561] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:32.249 [2024-07-25 01:03:54.819361] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:32.249 [2024-07-25 01:03:54.819559] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:32.249 [2024-07-25 01:03:54.819665] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:32.249 [2024-07-25 01:03:54.819701] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:32.249 01:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:32.249 01:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:32.249 01:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:32.249 01:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:32.249 01:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:32.249 01:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:32.249 01:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:32.249 01:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:32.249 01:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:32.249 01:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:32.249 01:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:32.249 01:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:32.508 01:03:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:32.508 "name": "raid_bdev1", 00:37:32.508 "uuid": "601516b4-16ea-42c3-b0e2-ddca8989033a", 00:37:32.508 "strip_size_kb": 0, 00:37:32.508 "state": "online", 00:37:32.508 "raid_level": "raid1", 00:37:32.508 "superblock": true, 00:37:32.508 "num_base_bdevs": 2, 00:37:32.508 "num_base_bdevs_discovered": 1, 00:37:32.508 "num_base_bdevs_operational": 1, 00:37:32.508 "base_bdevs_list": [ 00:37:32.508 { 00:37:32.508 "name": null, 00:37:32.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:32.508 "is_configured": false, 00:37:32.508 "data_offset": 256, 00:37:32.508 "data_size": 7936 00:37:32.508 }, 00:37:32.508 { 00:37:32.508 "name": "BaseBdev2", 00:37:32.508 "uuid": "bb03b4e6-9aab-53f0-9447-e035172e37ca", 00:37:32.508 
"is_configured": true, 00:37:32.508 "data_offset": 256, 00:37:32.508 "data_size": 7936 00:37:32.508 } 00:37:32.508 ] 00:37:32.508 }' 00:37:32.508 01:03:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:32.508 01:03:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:33.076 01:03:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:33.076 01:03:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:33.076 01:03:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:33.076 01:03:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:33.076 01:03:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:33.076 01:03:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:33.076 01:03:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:33.335 01:03:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:33.335 "name": "raid_bdev1", 00:37:33.335 "uuid": "601516b4-16ea-42c3-b0e2-ddca8989033a", 00:37:33.335 "strip_size_kb": 0, 00:37:33.335 "state": "online", 00:37:33.335 "raid_level": "raid1", 00:37:33.335 "superblock": true, 00:37:33.335 "num_base_bdevs": 2, 00:37:33.335 "num_base_bdevs_discovered": 1, 00:37:33.335 "num_base_bdevs_operational": 1, 00:37:33.335 "base_bdevs_list": [ 00:37:33.335 { 00:37:33.335 "name": null, 00:37:33.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:33.335 "is_configured": false, 00:37:33.335 "data_offset": 256, 00:37:33.335 "data_size": 7936 00:37:33.335 }, 00:37:33.335 { 00:37:33.335 "name": "BaseBdev2", 00:37:33.335 "uuid": "bb03b4e6-9aab-53f0-9447-e035172e37ca", 00:37:33.335 "is_configured": true, 00:37:33.335 "data_offset": 256, 00:37:33.335 "data_size": 7936 00:37:33.335 } 00:37:33.335 ] 00:37:33.335 }' 00:37:33.335 01:03:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:33.335 01:03:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:33.335 01:03:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:33.335 01:03:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:33.335 01:03:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:37:33.594 [2024-07-25 01:03:56.128321] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:33.594 [2024-07-25 01:03:56.144030] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:37:33.594 [2024-07-25 01:03:56.146064] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:33.594 01:03:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # sleep 1 00:37:34.532 01:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:34.532 01:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:37:34.532 01:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:34.532 01:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:34.532 01:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:34.532 01:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:34.532 01:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:34.791 01:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:34.791 "name": "raid_bdev1", 00:37:34.791 "uuid": "601516b4-16ea-42c3-b0e2-ddca8989033a", 00:37:34.791 "strip_size_kb": 0, 00:37:34.791 "state": "online", 00:37:34.791 "raid_level": "raid1", 00:37:34.791 "superblock": true, 00:37:34.791 "num_base_bdevs": 2, 00:37:34.791 "num_base_bdevs_discovered": 2, 00:37:34.791 "num_base_bdevs_operational": 2, 00:37:34.791 "process": { 00:37:34.791 "type": "rebuild", 00:37:34.791 "target": "spare", 00:37:34.791 "progress": { 00:37:34.791 "blocks": 3072, 00:37:34.791 "percent": 38 00:37:34.791 } 00:37:34.791 }, 00:37:34.791 "base_bdevs_list": [ 00:37:34.791 { 00:37:34.791 "name": "spare", 00:37:34.791 "uuid": "c3dcd583-1759-5291-8eda-24e637033a34", 00:37:34.791 "is_configured": true, 00:37:34.791 "data_offset": 256, 00:37:34.791 "data_size": 7936 00:37:34.791 }, 00:37:34.791 { 00:37:34.791 "name": "BaseBdev2", 00:37:34.791 "uuid": "bb03b4e6-9aab-53f0-9447-e035172e37ca", 00:37:34.791 "is_configured": true, 00:37:34.791 "data_offset": 256, 00:37:34.791 "data_size": 7936 00:37:34.791 } 00:37:34.791 ] 00:37:34.791 }' 00:37:34.791 01:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:35.051 01:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:35.051 01:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:35.051 01:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:35.051 01:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:37:35.051 01:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:37:35.051 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:37:35.051 01:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:37:35.051 01:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:37:35.051 01:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:37:35.051 01:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@705 -- # local timeout=1314 00:37:35.051 01:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:37:35.051 01:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:35.051 01:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:35.051 01:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:35.051 01:03:57 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:35.051 01:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:35.051 01:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:35.051 01:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:35.310 01:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:35.310 "name": "raid_bdev1", 00:37:35.310 "uuid": "601516b4-16ea-42c3-b0e2-ddca8989033a", 00:37:35.310 "strip_size_kb": 0, 00:37:35.310 "state": "online", 00:37:35.310 "raid_level": "raid1", 00:37:35.310 "superblock": true, 00:37:35.310 "num_base_bdevs": 2, 00:37:35.310 "num_base_bdevs_discovered": 2, 00:37:35.310 "num_base_bdevs_operational": 2, 00:37:35.310 "process": { 00:37:35.310 "type": "rebuild", 00:37:35.310 "target": "spare", 00:37:35.310 "progress": { 00:37:35.310 "blocks": 3840, 00:37:35.310 "percent": 48 00:37:35.310 } 00:37:35.310 }, 00:37:35.310 "base_bdevs_list": [ 00:37:35.310 { 00:37:35.310 "name": "spare", 00:37:35.310 "uuid": "c3dcd583-1759-5291-8eda-24e637033a34", 00:37:35.310 "is_configured": true, 00:37:35.310 "data_offset": 256, 00:37:35.310 "data_size": 7936 00:37:35.310 }, 00:37:35.310 { 00:37:35.310 "name": "BaseBdev2", 00:37:35.310 "uuid": "bb03b4e6-9aab-53f0-9447-e035172e37ca", 00:37:35.310 "is_configured": true, 00:37:35.310 "data_offset": 256, 00:37:35.310 "data_size": 7936 00:37:35.310 } 00:37:35.310 ] 00:37:35.310 }' 00:37:35.310 01:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:35.310 01:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:35.310 01:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:35.310 01:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:35.310 01:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:37:36.260 01:03:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:37:36.260 01:03:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:36.260 01:03:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:36.260 01:03:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:36.260 01:03:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:36.260 01:03:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:36.260 01:03:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:36.260 01:03:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:36.568 01:03:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:36.568 "name": "raid_bdev1", 00:37:36.568 "uuid": "601516b4-16ea-42c3-b0e2-ddca8989033a", 00:37:36.568 "strip_size_kb": 0, 00:37:36.568 "state": "online", 00:37:36.568 "raid_level": "raid1", 00:37:36.568 
"superblock": true, 00:37:36.568 "num_base_bdevs": 2, 00:37:36.568 "num_base_bdevs_discovered": 2, 00:37:36.568 "num_base_bdevs_operational": 2, 00:37:36.568 "process": { 00:37:36.568 "type": "rebuild", 00:37:36.568 "target": "spare", 00:37:36.568 "progress": { 00:37:36.568 "blocks": 7168, 00:37:36.568 "percent": 90 00:37:36.568 } 00:37:36.568 }, 00:37:36.568 "base_bdevs_list": [ 00:37:36.568 { 00:37:36.568 "name": "spare", 00:37:36.568 "uuid": "c3dcd583-1759-5291-8eda-24e637033a34", 00:37:36.568 "is_configured": true, 00:37:36.568 "data_offset": 256, 00:37:36.568 "data_size": 7936 00:37:36.568 }, 00:37:36.568 { 00:37:36.568 "name": "BaseBdev2", 00:37:36.568 "uuid": "bb03b4e6-9aab-53f0-9447-e035172e37ca", 00:37:36.568 "is_configured": true, 00:37:36.568 "data_offset": 256, 00:37:36.568 "data_size": 7936 00:37:36.568 } 00:37:36.568 ] 00:37:36.568 }' 00:37:36.568 01:03:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:36.568 01:03:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:36.568 01:03:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:36.568 01:03:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:36.568 01:03:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:37:36.827 [2024-07-25 01:03:59.263616] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:37:36.827 [2024-07-25 01:03:59.263822] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:37:36.827 [2024-07-25 01:03:59.264016] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:37.764 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:37:37.764 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:37.764 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:37.764 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:37.764 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:37.764 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:37.764 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:37.764 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:37.764 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:37.764 "name": "raid_bdev1", 00:37:37.764 "uuid": "601516b4-16ea-42c3-b0e2-ddca8989033a", 00:37:37.764 "strip_size_kb": 0, 00:37:37.764 "state": "online", 00:37:37.764 "raid_level": "raid1", 00:37:37.764 "superblock": true, 00:37:37.764 "num_base_bdevs": 2, 00:37:37.764 "num_base_bdevs_discovered": 2, 00:37:37.764 "num_base_bdevs_operational": 2, 00:37:37.764 "base_bdevs_list": [ 00:37:37.764 { 00:37:37.765 "name": "spare", 00:37:37.765 "uuid": "c3dcd583-1759-5291-8eda-24e637033a34", 00:37:37.765 "is_configured": true, 00:37:37.765 "data_offset": 256, 00:37:37.765 "data_size": 7936 00:37:37.765 }, 00:37:37.765 { 00:37:37.765 
"name": "BaseBdev2", 00:37:37.765 "uuid": "bb03b4e6-9aab-53f0-9447-e035172e37ca", 00:37:37.765 "is_configured": true, 00:37:37.765 "data_offset": 256, 00:37:37.765 "data_size": 7936 00:37:37.765 } 00:37:37.765 ] 00:37:37.765 }' 00:37:37.765 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:37.765 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:37:37.765 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:38.024 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:37:38.024 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # break 00:37:38.024 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:38.024 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:38.024 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:38.024 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:38.024 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:38.024 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:38.024 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:38.024 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:38.024 "name": "raid_bdev1", 00:37:38.024 "uuid": "601516b4-16ea-42c3-b0e2-ddca8989033a", 00:37:38.024 "strip_size_kb": 0, 00:37:38.024 "state": "online", 00:37:38.024 "raid_level": "raid1", 00:37:38.024 "superblock": true, 00:37:38.024 "num_base_bdevs": 2, 00:37:38.024 "num_base_bdevs_discovered": 2, 00:37:38.024 "num_base_bdevs_operational": 2, 00:37:38.024 "base_bdevs_list": [ 00:37:38.024 { 00:37:38.024 "name": "spare", 00:37:38.024 "uuid": "c3dcd583-1759-5291-8eda-24e637033a34", 00:37:38.024 "is_configured": true, 00:37:38.024 "data_offset": 256, 00:37:38.024 "data_size": 7936 00:37:38.024 }, 00:37:38.024 { 00:37:38.024 "name": "BaseBdev2", 00:37:38.024 "uuid": "bb03b4e6-9aab-53f0-9447-e035172e37ca", 00:37:38.024 "is_configured": true, 00:37:38.024 "data_offset": 256, 00:37:38.024 "data_size": 7936 00:37:38.024 } 00:37:38.024 ] 00:37:38.024 }' 00:37:38.024 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:38.283 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:38.283 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:38.283 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:38.283 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:38.283 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:38.283 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:38.283 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:37:38.283 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:38.283 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:38.283 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:38.283 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:38.283 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:38.283 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:38.283 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:38.283 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:38.283 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:38.283 "name": "raid_bdev1", 00:37:38.283 "uuid": "601516b4-16ea-42c3-b0e2-ddca8989033a", 00:37:38.283 "strip_size_kb": 0, 00:37:38.283 "state": "online", 00:37:38.283 "raid_level": "raid1", 00:37:38.283 "superblock": true, 00:37:38.283 "num_base_bdevs": 2, 00:37:38.283 "num_base_bdevs_discovered": 2, 00:37:38.283 "num_base_bdevs_operational": 2, 00:37:38.283 "base_bdevs_list": [ 00:37:38.283 { 00:37:38.283 "name": "spare", 00:37:38.283 "uuid": "c3dcd583-1759-5291-8eda-24e637033a34", 00:37:38.283 "is_configured": true, 00:37:38.283 "data_offset": 256, 00:37:38.283 "data_size": 7936 00:37:38.283 }, 00:37:38.283 { 00:37:38.283 "name": "BaseBdev2", 00:37:38.283 "uuid": "bb03b4e6-9aab-53f0-9447-e035172e37ca", 00:37:38.283 "is_configured": true, 00:37:38.283 "data_offset": 256, 00:37:38.283 "data_size": 7936 00:37:38.283 } 00:37:38.283 ] 00:37:38.283 }' 00:37:38.283 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:38.283 01:04:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:39.217 01:04:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:39.217 [2024-07-25 01:04:01.772678] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:39.217 [2024-07-25 01:04:01.772873] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:39.217 [2024-07-25 01:04:01.773098] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:39.217 [2024-07-25 01:04:01.773269] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:39.217 [2024-07-25 01:04:01.773366] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:37:39.217 01:04:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:39.217 01:04:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # jq length 00:37:39.475 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:37:39.475 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:37:39.475 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:37:39.475 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:37:39.475 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:39.475 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:37:39.475 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:39.475 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:39.475 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:39.475 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:37:39.475 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:39.475 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:39.475 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:37:39.734 /dev/nbd0 00:37:39.734 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:39.734 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:39.734 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:37:39.734 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:37:39.734 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:37:39.734 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:37:39.734 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:37:39.734 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:37:39.734 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:37:39.734 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:37:39.734 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:39.734 1+0 records in 00:37:39.734 1+0 records out 00:37:39.734 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338348 s, 12.1 MB/s 00:37:39.734 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:39.734 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:37:39.734 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:39.734 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:37:39.734 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:37:39.734 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:39.734 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:39.734 01:04:02 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:37:39.993 /dev/nbd1 00:37:39.993 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:37:39.993 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:37:39.993 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:37:39.993 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:37:39.993 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:37:39.993 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:37:39.993 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:37:39.993 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:37:39.993 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:37:39.993 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:37:39.993 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:39.993 1+0 records in 00:37:39.993 1+0 records out 00:37:39.993 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375932 s, 10.9 MB/s 00:37:39.993 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:39.993 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:37:39.993 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:39.993 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:37:39.993 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:37:39.993 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:39.993 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:39.993 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:37:40.251 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:37:40.251 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:40.251 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:40.251 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:40.251 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:37:40.251 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:40.251 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:37:40.510 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:40.510 01:04:02 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:40.510 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:40.510 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:40.510 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:40.510 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:40.510 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:37:40.510 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:37:40.510 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:40.510 01:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:37:40.510 01:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:37:40.510 01:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:37:40.510 01:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:37:40.510 01:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:40.510 01:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:40.510 01:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:37:40.510 01:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:37:40.510 01:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:37:40.510 01:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:37:40.510 01:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:37:40.769 01:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:37:41.027 [2024-07-25 01:04:03.673782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:41.027 [2024-07-25 01:04:03.674015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:41.027 [2024-07-25 01:04:03.674214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:37:41.027 [2024-07-25 01:04:03.674341] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:41.027 [2024-07-25 01:04:03.676729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:41.027 [2024-07-25 01:04:03.676895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:41.027 [2024-07-25 01:04:03.677119] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:37:41.027 [2024-07-25 01:04:03.677249] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:41.027 [2024-07-25 01:04:03.677494] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:41.027 spare 00:37:41.285 01:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 2 00:37:41.285 01:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:41.285 01:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:41.285 01:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:41.285 01:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:41.285 01:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:41.285 01:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:41.285 01:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:41.285 01:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:41.285 01:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:41.285 01:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:41.285 01:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:41.285 [2024-07-25 01:04:03.777684] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:37:41.285 [2024-07-25 01:04:03.777827] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:37:41.285 [2024-07-25 01:04:03.778042] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:37:41.285 [2024-07-25 01:04:03.778588] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:37:41.285 [2024-07-25 01:04:03.778701] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:37:41.285 [2024-07-25 01:04:03.778931] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:41.543 01:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:41.543 "name": "raid_bdev1", 00:37:41.543 "uuid": "601516b4-16ea-42c3-b0e2-ddca8989033a", 00:37:41.543 "strip_size_kb": 0, 00:37:41.543 "state": "online", 00:37:41.543 "raid_level": "raid1", 00:37:41.543 "superblock": true, 00:37:41.543 "num_base_bdevs": 2, 00:37:41.543 "num_base_bdevs_discovered": 2, 00:37:41.543 "num_base_bdevs_operational": 2, 00:37:41.543 "base_bdevs_list": [ 00:37:41.543 { 00:37:41.543 "name": "spare", 00:37:41.543 "uuid": "c3dcd583-1759-5291-8eda-24e637033a34", 00:37:41.543 "is_configured": true, 00:37:41.543 "data_offset": 256, 00:37:41.543 "data_size": 7936 00:37:41.543 }, 00:37:41.543 { 00:37:41.543 "name": "BaseBdev2", 00:37:41.543 "uuid": "bb03b4e6-9aab-53f0-9447-e035172e37ca", 00:37:41.543 "is_configured": true, 00:37:41.543 "data_offset": 256, 00:37:41.543 "data_size": 7936 00:37:41.543 } 00:37:41.543 ] 00:37:41.543 }' 00:37:41.543 01:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:41.543 01:04:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:42.110 01:04:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:42.110 01:04:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:42.110 01:04:04 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:42.110 01:04:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:42.110 01:04:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:42.110 01:04:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:42.110 01:04:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:42.110 01:04:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:42.110 "name": "raid_bdev1", 00:37:42.110 "uuid": "601516b4-16ea-42c3-b0e2-ddca8989033a", 00:37:42.110 "strip_size_kb": 0, 00:37:42.110 "state": "online", 00:37:42.110 "raid_level": "raid1", 00:37:42.110 "superblock": true, 00:37:42.110 "num_base_bdevs": 2, 00:37:42.110 "num_base_bdevs_discovered": 2, 00:37:42.110 "num_base_bdevs_operational": 2, 00:37:42.110 "base_bdevs_list": [ 00:37:42.110 { 00:37:42.110 "name": "spare", 00:37:42.110 "uuid": "c3dcd583-1759-5291-8eda-24e637033a34", 00:37:42.110 "is_configured": true, 00:37:42.110 "data_offset": 256, 00:37:42.110 "data_size": 7936 00:37:42.110 }, 00:37:42.110 { 00:37:42.110 "name": "BaseBdev2", 00:37:42.110 "uuid": "bb03b4e6-9aab-53f0-9447-e035172e37ca", 00:37:42.110 "is_configured": true, 00:37:42.110 "data_offset": 256, 00:37:42.110 "data_size": 7936 00:37:42.110 } 00:37:42.110 ] 00:37:42.110 }' 00:37:42.110 01:04:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:42.369 01:04:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:42.369 01:04:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:42.369 01:04:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:42.369 01:04:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:42.369 01:04:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:37:42.369 01:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:37:42.369 01:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:37:42.628 [2024-07-25 01:04:05.183188] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:42.628 01:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:42.628 01:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:42.628 01:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:42.628 01:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:42.628 01:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:42.628 01:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:42.628 01:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:37:42.628 01:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:42.628 01:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:42.628 01:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:42.628 01:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:42.628 01:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:42.886 01:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:42.886 "name": "raid_bdev1", 00:37:42.886 "uuid": "601516b4-16ea-42c3-b0e2-ddca8989033a", 00:37:42.886 "strip_size_kb": 0, 00:37:42.886 "state": "online", 00:37:42.886 "raid_level": "raid1", 00:37:42.886 "superblock": true, 00:37:42.886 "num_base_bdevs": 2, 00:37:42.886 "num_base_bdevs_discovered": 1, 00:37:42.886 "num_base_bdevs_operational": 1, 00:37:42.886 "base_bdevs_list": [ 00:37:42.886 { 00:37:42.886 "name": null, 00:37:42.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:42.886 "is_configured": false, 00:37:42.886 "data_offset": 256, 00:37:42.886 "data_size": 7936 00:37:42.886 }, 00:37:42.886 { 00:37:42.886 "name": "BaseBdev2", 00:37:42.886 "uuid": "bb03b4e6-9aab-53f0-9447-e035172e37ca", 00:37:42.886 "is_configured": true, 00:37:42.886 "data_offset": 256, 00:37:42.886 "data_size": 7936 00:37:42.886 } 00:37:42.886 ] 00:37:42.886 }' 00:37:42.886 01:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:42.886 01:04:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:43.453 01:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:37:43.453 [2024-07-25 01:04:06.067373] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:43.453 [2024-07-25 01:04:06.067720] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:37:43.453 [2024-07-25 01:04:06.067860] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:37:43.453 [2024-07-25 01:04:06.067948] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:43.453 [2024-07-25 01:04:06.082331] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1dc0 00:37:43.453 [2024-07-25 01:04:06.084312] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:43.453 01:04:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # sleep 1 00:37:44.826 01:04:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:44.826 01:04:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:44.826 01:04:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:44.826 01:04:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:44.826 01:04:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:44.826 01:04:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:44.826 01:04:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:44.826 01:04:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:44.826 "name": "raid_bdev1", 00:37:44.826 "uuid": "601516b4-16ea-42c3-b0e2-ddca8989033a", 00:37:44.826 "strip_size_kb": 0, 00:37:44.826 "state": "online", 00:37:44.826 "raid_level": "raid1", 00:37:44.826 "superblock": true, 00:37:44.826 "num_base_bdevs": 2, 00:37:44.826 "num_base_bdevs_discovered": 2, 00:37:44.826 "num_base_bdevs_operational": 2, 00:37:44.826 "process": { 00:37:44.826 "type": "rebuild", 00:37:44.826 "target": "spare", 00:37:44.826 "progress": { 00:37:44.826 "blocks": 3072, 00:37:44.826 "percent": 38 00:37:44.826 } 00:37:44.826 }, 00:37:44.826 "base_bdevs_list": [ 00:37:44.826 { 00:37:44.826 "name": "spare", 00:37:44.826 "uuid": "c3dcd583-1759-5291-8eda-24e637033a34", 00:37:44.826 "is_configured": true, 00:37:44.826 "data_offset": 256, 00:37:44.826 "data_size": 7936 00:37:44.826 }, 00:37:44.826 { 00:37:44.826 "name": "BaseBdev2", 00:37:44.826 "uuid": "bb03b4e6-9aab-53f0-9447-e035172e37ca", 00:37:44.826 "is_configured": true, 00:37:44.826 "data_offset": 256, 00:37:44.826 "data_size": 7936 00:37:44.826 } 00:37:44.826 ] 00:37:44.826 }' 00:37:44.826 01:04:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:44.826 01:04:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:44.826 01:04:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:44.826 01:04:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:44.826 01:04:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:37:45.085 [2024-07-25 01:04:07.641912] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:45.085 [2024-07-25 01:04:07.693436] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:45.085 [2024-07-25 01:04:07.693628] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:37:45.085 [2024-07-25 01:04:07.693675] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:45.085 [2024-07-25 01:04:07.693749] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:45.343 01:04:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:45.343 01:04:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:45.343 01:04:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:45.343 01:04:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:45.343 01:04:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:45.343 01:04:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:45.343 01:04:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:45.343 01:04:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:45.343 01:04:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:45.343 01:04:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:45.343 01:04:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:45.343 01:04:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:45.343 01:04:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:45.343 "name": "raid_bdev1", 00:37:45.343 "uuid": "601516b4-16ea-42c3-b0e2-ddca8989033a", 00:37:45.343 "strip_size_kb": 0, 00:37:45.343 "state": "online", 00:37:45.343 "raid_level": "raid1", 00:37:45.343 "superblock": true, 00:37:45.343 "num_base_bdevs": 2, 00:37:45.343 "num_base_bdevs_discovered": 1, 00:37:45.343 "num_base_bdevs_operational": 1, 00:37:45.343 "base_bdevs_list": [ 00:37:45.343 { 00:37:45.343 "name": null, 00:37:45.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:45.343 "is_configured": false, 00:37:45.343 "data_offset": 256, 00:37:45.343 "data_size": 7936 00:37:45.343 }, 00:37:45.343 { 00:37:45.343 "name": "BaseBdev2", 00:37:45.343 "uuid": "bb03b4e6-9aab-53f0-9447-e035172e37ca", 00:37:45.343 "is_configured": true, 00:37:45.343 "data_offset": 256, 00:37:45.343 "data_size": 7936 00:37:45.343 } 00:37:45.343 ] 00:37:45.343 }' 00:37:45.343 01:04:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:45.343 01:04:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:46.280 01:04:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:37:46.280 [2024-07-25 01:04:08.816917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:46.280 [2024-07-25 01:04:08.817027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:46.280 [2024-07-25 01:04:08.817061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:37:46.280 [2024-07-25 01:04:08.817094] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:46.280 [2024-07-25 01:04:08.817588] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:46.280 [2024-07-25 01:04:08.817631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:46.280 [2024-07-25 01:04:08.817743] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:37:46.280 [2024-07-25 01:04:08.817756] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:37:46.280 [2024-07-25 01:04:08.817765] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:37:46.280 [2024-07-25 01:04:08.817801] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:46.280 [2024-07-25 01:04:08.832109] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c2100 00:37:46.280 spare 00:37:46.280 [2024-07-25 01:04:08.833994] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:46.280 01:04:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # sleep 1 00:37:47.238 01:04:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:47.238 01:04:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:47.238 01:04:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:47.238 01:04:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:47.238 01:04:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:47.238 01:04:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:47.238 01:04:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:47.496 01:04:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:47.496 "name": "raid_bdev1", 00:37:47.496 "uuid": "601516b4-16ea-42c3-b0e2-ddca8989033a", 00:37:47.496 "strip_size_kb": 0, 00:37:47.496 "state": "online", 00:37:47.496 "raid_level": "raid1", 00:37:47.496 "superblock": true, 00:37:47.496 "num_base_bdevs": 2, 00:37:47.496 "num_base_bdevs_discovered": 2, 00:37:47.496 "num_base_bdevs_operational": 2, 00:37:47.496 "process": { 00:37:47.496 "type": "rebuild", 00:37:47.497 "target": "spare", 00:37:47.497 "progress": { 00:37:47.497 "blocks": 3072, 00:37:47.497 "percent": 38 00:37:47.497 } 00:37:47.497 }, 00:37:47.497 "base_bdevs_list": [ 00:37:47.497 { 00:37:47.497 "name": "spare", 00:37:47.497 "uuid": "c3dcd583-1759-5291-8eda-24e637033a34", 00:37:47.497 "is_configured": true, 00:37:47.497 "data_offset": 256, 00:37:47.497 "data_size": 7936 00:37:47.497 }, 00:37:47.497 { 00:37:47.497 "name": "BaseBdev2", 00:37:47.497 "uuid": "bb03b4e6-9aab-53f0-9447-e035172e37ca", 00:37:47.497 "is_configured": true, 00:37:47.497 "data_offset": 256, 00:37:47.497 "data_size": 7936 00:37:47.497 } 00:37:47.497 ] 00:37:47.497 }' 00:37:47.497 01:04:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:47.497 01:04:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:37:47.497 01:04:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:47.755 01:04:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:47.755 01:04:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:37:48.013 [2024-07-25 01:04:10.408525] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:48.013 [2024-07-25 01:04:10.443146] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:48.013 [2024-07-25 01:04:10.443220] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:48.013 [2024-07-25 01:04:10.443235] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:48.013 [2024-07-25 01:04:10.443242] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:48.013 01:04:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:48.013 01:04:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:48.013 01:04:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:48.013 01:04:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:48.013 01:04:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:48.013 01:04:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:48.013 01:04:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:48.013 01:04:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:48.013 01:04:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:48.013 01:04:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:48.013 01:04:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:48.014 01:04:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:48.273 01:04:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:48.273 "name": "raid_bdev1", 00:37:48.273 "uuid": "601516b4-16ea-42c3-b0e2-ddca8989033a", 00:37:48.273 "strip_size_kb": 0, 00:37:48.273 "state": "online", 00:37:48.273 "raid_level": "raid1", 00:37:48.273 "superblock": true, 00:37:48.273 "num_base_bdevs": 2, 00:37:48.273 "num_base_bdevs_discovered": 1, 00:37:48.273 "num_base_bdevs_operational": 1, 00:37:48.273 "base_bdevs_list": [ 00:37:48.273 { 00:37:48.273 "name": null, 00:37:48.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:48.273 "is_configured": false, 00:37:48.273 "data_offset": 256, 00:37:48.273 "data_size": 7936 00:37:48.273 }, 00:37:48.273 { 00:37:48.273 "name": "BaseBdev2", 00:37:48.273 "uuid": "bb03b4e6-9aab-53f0-9447-e035172e37ca", 00:37:48.273 "is_configured": true, 00:37:48.273 "data_offset": 256, 00:37:48.273 "data_size": 7936 00:37:48.273 } 00:37:48.273 ] 00:37:48.273 }' 00:37:48.273 01:04:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:37:48.273 01:04:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:48.532 01:04:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:48.532 01:04:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:48.532 01:04:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:48.532 01:04:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:48.532 01:04:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:48.532 01:04:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:48.532 01:04:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:48.791 01:04:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:48.791 "name": "raid_bdev1", 00:37:48.791 "uuid": "601516b4-16ea-42c3-b0e2-ddca8989033a", 00:37:48.791 "strip_size_kb": 0, 00:37:48.791 "state": "online", 00:37:48.791 "raid_level": "raid1", 00:37:48.791 "superblock": true, 00:37:48.791 "num_base_bdevs": 2, 00:37:48.791 "num_base_bdevs_discovered": 1, 00:37:48.791 "num_base_bdevs_operational": 1, 00:37:48.791 "base_bdevs_list": [ 00:37:48.791 { 00:37:48.791 "name": null, 00:37:48.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:48.791 "is_configured": false, 00:37:48.791 "data_offset": 256, 00:37:48.791 "data_size": 7936 00:37:48.791 }, 00:37:48.791 { 00:37:48.791 "name": "BaseBdev2", 00:37:48.791 "uuid": "bb03b4e6-9aab-53f0-9447-e035172e37ca", 00:37:48.791 "is_configured": true, 00:37:48.791 "data_offset": 256, 00:37:48.792 "data_size": 7936 00:37:48.792 } 00:37:48.792 ] 00:37:48.792 }' 00:37:48.792 01:04:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:49.050 01:04:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:49.050 01:04:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:49.050 01:04:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:49.050 01:04:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:37:49.050 01:04:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:49.309 [2024-07-25 01:04:11.930160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:37:49.309 [2024-07-25 01:04:11.930259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:49.309 [2024-07-25 01:04:11.930297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:37:49.309 [2024-07-25 01:04:11.930322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:49.309 [2024-07-25 01:04:11.930757] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:49.309 [2024-07-25 01:04:11.930794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev1 00:37:49.309 [2024-07-25 01:04:11.930905] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:37:49.309 [2024-07-25 01:04:11.930918] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:37:49.309 [2024-07-25 01:04:11.930926] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:37:49.309 BaseBdev1 00:37:49.309 01:04:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # sleep 1 00:37:50.695 01:04:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:50.696 01:04:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:50.696 01:04:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:50.696 01:04:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:50.696 01:04:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:50.696 01:04:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:50.696 01:04:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:50.696 01:04:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:50.696 01:04:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:50.696 01:04:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:50.696 01:04:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:50.696 01:04:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:50.696 01:04:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:50.696 "name": "raid_bdev1", 00:37:50.696 "uuid": "601516b4-16ea-42c3-b0e2-ddca8989033a", 00:37:50.696 "strip_size_kb": 0, 00:37:50.696 "state": "online", 00:37:50.696 "raid_level": "raid1", 00:37:50.696 "superblock": true, 00:37:50.696 "num_base_bdevs": 2, 00:37:50.696 "num_base_bdevs_discovered": 1, 00:37:50.696 "num_base_bdevs_operational": 1, 00:37:50.696 "base_bdevs_list": [ 00:37:50.696 { 00:37:50.696 "name": null, 00:37:50.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:50.696 "is_configured": false, 00:37:50.696 "data_offset": 256, 00:37:50.696 "data_size": 7936 00:37:50.696 }, 00:37:50.696 { 00:37:50.696 "name": "BaseBdev2", 00:37:50.696 "uuid": "bb03b4e6-9aab-53f0-9447-e035172e37ca", 00:37:50.696 "is_configured": true, 00:37:50.696 "data_offset": 256, 00:37:50.696 "data_size": 7936 00:37:50.696 } 00:37:50.696 ] 00:37:50.696 }' 00:37:50.696 01:04:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:50.696 01:04:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:51.264 01:04:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:51.264 01:04:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:51.264 01:04:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:37:51.264 01:04:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:51.264 01:04:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:51.264 01:04:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:51.264 01:04:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:51.523 01:04:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:51.523 "name": "raid_bdev1", 00:37:51.523 "uuid": "601516b4-16ea-42c3-b0e2-ddca8989033a", 00:37:51.523 "strip_size_kb": 0, 00:37:51.523 "state": "online", 00:37:51.523 "raid_level": "raid1", 00:37:51.523 "superblock": true, 00:37:51.523 "num_base_bdevs": 2, 00:37:51.523 "num_base_bdevs_discovered": 1, 00:37:51.523 "num_base_bdevs_operational": 1, 00:37:51.523 "base_bdevs_list": [ 00:37:51.523 { 00:37:51.523 "name": null, 00:37:51.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:51.523 "is_configured": false, 00:37:51.523 "data_offset": 256, 00:37:51.523 "data_size": 7936 00:37:51.523 }, 00:37:51.523 { 00:37:51.523 "name": "BaseBdev2", 00:37:51.524 "uuid": "bb03b4e6-9aab-53f0-9447-e035172e37ca", 00:37:51.524 "is_configured": true, 00:37:51.524 "data_offset": 256, 00:37:51.524 "data_size": 7936 00:37:51.524 } 00:37:51.524 ] 00:37:51.524 }' 00:37:51.524 01:04:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:51.524 01:04:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:51.524 01:04:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:51.524 01:04:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:51.524 01:04:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:51.524 01:04:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@648 -- # local es=0 00:37:51.524 01:04:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:51.524 01:04:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:51.524 01:04:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:51.524 01:04:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:51.524 01:04:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:51.524 01:04:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:51.524 01:04:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:51.524 01:04:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:51.524 01:04:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:37:51.524 01:04:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:51.782 [2024-07-25 01:04:14.411069] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:51.782 [2024-07-25 01:04:14.411393] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:37:51.782 [2024-07-25 01:04:14.411533] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:37:51.782 request: 00:37:51.782 { 00:37:51.782 "base_bdev": "BaseBdev1", 00:37:51.782 "raid_bdev": "raid_bdev1", 00:37:51.782 "method": "bdev_raid_add_base_bdev", 00:37:51.782 "req_id": 1 00:37:51.782 } 00:37:51.782 Got JSON-RPC error response 00:37:51.782 response: 00:37:51.782 { 00:37:51.782 "code": -22, 00:37:51.782 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:37:51.782 } 00:37:51.782 01:04:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # es=1 00:37:51.782 01:04:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:51.782 01:04:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:51.782 01:04:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:51.782 01:04:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # sleep 1 00:37:53.159 01:04:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:53.159 01:04:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:53.159 01:04:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:53.159 01:04:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:53.159 01:04:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:53.159 01:04:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:53.159 01:04:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:53.159 01:04:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:53.159 01:04:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:53.159 01:04:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:53.159 01:04:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:53.159 01:04:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:53.159 01:04:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:53.159 "name": "raid_bdev1", 00:37:53.159 "uuid": "601516b4-16ea-42c3-b0e2-ddca8989033a", 00:37:53.159 "strip_size_kb": 0, 00:37:53.159 "state": "online", 00:37:53.159 "raid_level": "raid1", 00:37:53.159 "superblock": true, 00:37:53.159 "num_base_bdevs": 2, 00:37:53.159 "num_base_bdevs_discovered": 1, 00:37:53.159 "num_base_bdevs_operational": 1, 00:37:53.159 
"base_bdevs_list": [ 00:37:53.159 { 00:37:53.159 "name": null, 00:37:53.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:53.159 "is_configured": false, 00:37:53.159 "data_offset": 256, 00:37:53.159 "data_size": 7936 00:37:53.159 }, 00:37:53.159 { 00:37:53.159 "name": "BaseBdev2", 00:37:53.159 "uuid": "bb03b4e6-9aab-53f0-9447-e035172e37ca", 00:37:53.159 "is_configured": true, 00:37:53.159 "data_offset": 256, 00:37:53.159 "data_size": 7936 00:37:53.159 } 00:37:53.159 ] 00:37:53.159 }' 00:37:53.159 01:04:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:53.159 01:04:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:53.726 01:04:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:53.726 01:04:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:53.726 01:04:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:53.726 01:04:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:53.726 01:04:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:53.726 01:04:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:53.726 01:04:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:53.985 01:04:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:53.985 "name": "raid_bdev1", 00:37:53.985 "uuid": "601516b4-16ea-42c3-b0e2-ddca8989033a", 00:37:53.985 "strip_size_kb": 0, 00:37:53.985 "state": "online", 00:37:53.985 "raid_level": "raid1", 00:37:53.985 "superblock": true, 00:37:53.985 "num_base_bdevs": 2, 00:37:53.985 "num_base_bdevs_discovered": 1, 00:37:53.985 "num_base_bdevs_operational": 1, 00:37:53.985 "base_bdevs_list": [ 00:37:53.985 { 00:37:53.985 "name": null, 00:37:53.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:53.985 "is_configured": false, 00:37:53.985 "data_offset": 256, 00:37:53.985 "data_size": 7936 00:37:53.985 }, 00:37:53.985 { 00:37:53.985 "name": "BaseBdev2", 00:37:53.985 "uuid": "bb03b4e6-9aab-53f0-9447-e035172e37ca", 00:37:53.985 "is_configured": true, 00:37:53.985 "data_offset": 256, 00:37:53.985 "data_size": 7936 00:37:53.985 } 00:37:53.985 ] 00:37:53.985 }' 00:37:53.985 01:04:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:53.985 01:04:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:53.985 01:04:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:53.985 01:04:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:53.985 01:04:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@782 -- # killprocess 160010 00:37:53.985 01:04:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 160010 ']' 00:37:53.985 01:04:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 160010 00:37:53.985 01:04:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:37:53.985 01:04:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:37:53.985 01:04:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 160010 00:37:53.985 01:04:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:53.985 01:04:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:53.985 01:04:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 160010' 00:37:53.985 killing process with pid 160010 00:37:53.985 01:04:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@967 -- # kill 160010 00:37:53.985 Received shutdown signal, test time was about 60.000000 seconds 00:37:53.985 00:37:53.985 Latency(us) 00:37:53.985 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:53.985 =================================================================================================================== 00:37:53.985 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:53.985 01:04:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # wait 160010 00:37:53.985 [2024-07-25 01:04:16.548766] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:53.985 [2024-07-25 01:04:16.548890] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:53.985 [2024-07-25 01:04:16.548935] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:53.985 [2024-07-25 01:04:16.548944] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:37:54.244 [2024-07-25 01:04:16.823668] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:55.622 01:04:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # return 0 00:37:55.622 00:37:55.622 real 0m30.923s 00:37:55.622 user 0m47.510s 00:37:55.622 sys 0m4.249s 00:37:55.622 01:04:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:55.622 01:04:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:55.622 ************************************ 00:37:55.622 END TEST raid_rebuild_test_sb_4k 00:37:55.622 ************************************ 00:37:55.622 01:04:18 bdev_raid -- bdev/bdev_raid.sh@904 -- # base_malloc_params='-m 32' 00:37:55.622 01:04:18 bdev_raid -- bdev/bdev_raid.sh@905 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:37:55.622 01:04:18 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:37:55.622 01:04:18 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:55.622 01:04:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:55.622 ************************************ 00:37:55.622 START TEST raid_state_function_test_sb_md_separate 00:37:55.622 ************************************ 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:37:55.622 01:04:18 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=160883 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 160883' 00:37:55.622 Process raid pid: 160883 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 160883 /var/tmp/spdk-raid.sock 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 160883 ']' 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk-raid.sock...' 00:37:55.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:55.622 01:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:55.622 [2024-07-25 01:04:18.148200] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:37:55.623 [2024-07-25 01:04:18.148362] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:55.882 [2024-07-25 01:04:18.307116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:55.882 [2024-07-25 01:04:18.492937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:56.141 [2024-07-25 01:04:18.695850] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:56.400 01:04:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:56.400 01:04:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:37:56.400 01:04:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:37:56.659 [2024-07-25 01:04:19.225536] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:56.659 [2024-07-25 01:04:19.225616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:56.659 [2024-07-25 01:04:19.225627] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:56.659 [2024-07-25 01:04:19.225670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:56.659 01:04:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:37:56.659 01:04:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:56.659 01:04:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:56.659 01:04:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:56.659 01:04:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:56.659 01:04:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:56.659 01:04:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:56.659 01:04:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:56.659 01:04:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:56.659 01:04:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:56.659 01:04:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:56.659 01:04:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:56.918 01:04:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:56.918 "name": "Existed_Raid", 00:37:56.918 "uuid": "e3d3d74b-787e-4bac-ad0e-db4df3ce22ba", 00:37:56.918 "strip_size_kb": 0, 00:37:56.918 "state": "configuring", 00:37:56.918 "raid_level": "raid1", 00:37:56.918 "superblock": true, 00:37:56.918 "num_base_bdevs": 2, 00:37:56.918 "num_base_bdevs_discovered": 0, 00:37:56.918 "num_base_bdevs_operational": 2, 00:37:56.918 "base_bdevs_list": [ 00:37:56.918 { 00:37:56.918 "name": "BaseBdev1", 00:37:56.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:56.918 "is_configured": false, 00:37:56.918 "data_offset": 0, 00:37:56.918 "data_size": 0 00:37:56.918 }, 00:37:56.918 { 00:37:56.918 "name": "BaseBdev2", 00:37:56.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:56.918 "is_configured": false, 00:37:56.918 "data_offset": 0, 00:37:56.918 "data_size": 0 00:37:56.918 } 00:37:56.918 ] 00:37:56.918 }' 00:37:56.918 01:04:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:56.918 01:04:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:57.487 01:04:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:37:57.746 [2024-07-25 01:04:20.277607] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:57.746 [2024-07-25 01:04:20.277638] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:37:57.746 01:04:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:37:58.005 [2024-07-25 01:04:20.513676] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:58.005 [2024-07-25 01:04:20.513744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:58.005 [2024-07-25 01:04:20.513753] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:58.005 [2024-07-25 01:04:20.513776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:58.005 01:04:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:37:58.264 [2024-07-25 01:04:20.731030] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:58.264 BaseBdev1 00:37:58.264 01:04:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:37:58.264 01:04:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:37:58.264 01:04:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:37:58.264 01:04:20 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@899 -- # local i 00:37:58.264 01:04:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:37:58.264 01:04:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:37:58.264 01:04:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:58.523 01:04:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:37:58.523 [ 00:37:58.523 { 00:37:58.523 "name": "BaseBdev1", 00:37:58.523 "aliases": [ 00:37:58.523 "bca7a9f8-b058-4bec-863f-603dfe729ed9" 00:37:58.523 ], 00:37:58.523 "product_name": "Malloc disk", 00:37:58.523 "block_size": 4096, 00:37:58.523 "num_blocks": 8192, 00:37:58.523 "uuid": "bca7a9f8-b058-4bec-863f-603dfe729ed9", 00:37:58.523 "md_size": 32, 00:37:58.523 "md_interleave": false, 00:37:58.523 "dif_type": 0, 00:37:58.523 "assigned_rate_limits": { 00:37:58.523 "rw_ios_per_sec": 0, 00:37:58.523 "rw_mbytes_per_sec": 0, 00:37:58.523 "r_mbytes_per_sec": 0, 00:37:58.523 "w_mbytes_per_sec": 0 00:37:58.523 }, 00:37:58.523 "claimed": true, 00:37:58.523 "claim_type": "exclusive_write", 00:37:58.523 "zoned": false, 00:37:58.523 "supported_io_types": { 00:37:58.523 "read": true, 00:37:58.523 "write": true, 00:37:58.523 "unmap": true, 00:37:58.523 "flush": true, 00:37:58.523 "reset": true, 00:37:58.523 "nvme_admin": false, 00:37:58.523 "nvme_io": false, 00:37:58.523 "nvme_io_md": false, 00:37:58.523 "write_zeroes": true, 00:37:58.523 "zcopy": true, 00:37:58.523 "get_zone_info": false, 00:37:58.523 "zone_management": false, 00:37:58.523 "zone_append": false, 00:37:58.523 "compare": false, 00:37:58.523 "compare_and_write": false, 00:37:58.523 "abort": true, 00:37:58.523 "seek_hole": false, 00:37:58.523 "seek_data": false, 00:37:58.523 "copy": true, 00:37:58.523 "nvme_iov_md": false 00:37:58.523 }, 00:37:58.523 "memory_domains": [ 00:37:58.523 { 00:37:58.523 "dma_device_id": "system", 00:37:58.523 "dma_device_type": 1 00:37:58.523 }, 00:37:58.523 { 00:37:58.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:58.523 "dma_device_type": 2 00:37:58.523 } 00:37:58.523 ], 00:37:58.523 "driver_specific": {} 00:37:58.523 } 00:37:58.523 ] 00:37:58.523 01:04:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:37:58.523 01:04:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:37:58.523 01:04:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:58.523 01:04:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:58.523 01:04:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:58.523 01:04:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:58.523 01:04:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:58.523 01:04:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:58.828 01:04:21 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:58.828 01:04:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:58.828 01:04:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:58.828 01:04:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:58.828 01:04:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:58.828 01:04:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:58.828 "name": "Existed_Raid", 00:37:58.828 "uuid": "e44cd40b-cf53-491e-83ba-d64ebfe8ab77", 00:37:58.828 "strip_size_kb": 0, 00:37:58.828 "state": "configuring", 00:37:58.828 "raid_level": "raid1", 00:37:58.828 "superblock": true, 00:37:58.828 "num_base_bdevs": 2, 00:37:58.828 "num_base_bdevs_discovered": 1, 00:37:58.828 "num_base_bdevs_operational": 2, 00:37:58.828 "base_bdevs_list": [ 00:37:58.828 { 00:37:58.828 "name": "BaseBdev1", 00:37:58.828 "uuid": "bca7a9f8-b058-4bec-863f-603dfe729ed9", 00:37:58.828 "is_configured": true, 00:37:58.828 "data_offset": 256, 00:37:58.828 "data_size": 7936 00:37:58.828 }, 00:37:58.828 { 00:37:58.828 "name": "BaseBdev2", 00:37:58.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:58.828 "is_configured": false, 00:37:58.828 "data_offset": 0, 00:37:58.828 "data_size": 0 00:37:58.828 } 00:37:58.828 ] 00:37:58.828 }' 00:37:58.828 01:04:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:58.828 01:04:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:59.418 01:04:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:37:59.677 [2024-07-25 01:04:22.135275] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:59.677 [2024-07-25 01:04:22.135341] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:37:59.677 01:04:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:37:59.936 [2024-07-25 01:04:22.395402] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:59.936 [2024-07-25 01:04:22.397376] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:59.936 [2024-07-25 01:04:22.397450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:59.936 01:04:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:37:59.936 01:04:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:37:59.936 01:04:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:37:59.936 01:04:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:37:59.936 01:04:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:59.936 01:04:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:59.936 01:04:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:59.936 01:04:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:59.936 01:04:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:59.936 01:04:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:59.936 01:04:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:59.936 01:04:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:59.936 01:04:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:59.936 01:04:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:00.195 01:04:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:00.195 "name": "Existed_Raid", 00:38:00.195 "uuid": "6d9ce271-fac0-4a48-8e89-f87bd1db6b40", 00:38:00.195 "strip_size_kb": 0, 00:38:00.195 "state": "configuring", 00:38:00.195 "raid_level": "raid1", 00:38:00.195 "superblock": true, 00:38:00.195 "num_base_bdevs": 2, 00:38:00.195 "num_base_bdevs_discovered": 1, 00:38:00.195 "num_base_bdevs_operational": 2, 00:38:00.195 "base_bdevs_list": [ 00:38:00.195 { 00:38:00.195 "name": "BaseBdev1", 00:38:00.195 "uuid": "bca7a9f8-b058-4bec-863f-603dfe729ed9", 00:38:00.195 "is_configured": true, 00:38:00.195 "data_offset": 256, 00:38:00.195 "data_size": 7936 00:38:00.195 }, 00:38:00.195 { 00:38:00.195 "name": "BaseBdev2", 00:38:00.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:00.195 "is_configured": false, 00:38:00.195 "data_offset": 0, 00:38:00.195 "data_size": 0 00:38:00.195 } 00:38:00.195 ] 00:38:00.195 }' 00:38:00.195 01:04:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:00.195 01:04:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:00.455 01:04:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:38:01.022 [2024-07-25 01:04:23.366993] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:01.022 [2024-07-25 01:04:23.367180] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:38:01.022 [2024-07-25 01:04:23.367191] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:38:01.022 [2024-07-25 01:04:23.367307] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:38:01.022 [2024-07-25 01:04:23.367401] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:38:01.022 [2024-07-25 01:04:23.367410] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:38:01.022 [2024-07-25 01:04:23.367500] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:01.022 BaseBdev2 00:38:01.022 01:04:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:38:01.022 01:04:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:38:01.022 01:04:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:38:01.022 01:04:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:38:01.022 01:04:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:38:01.022 01:04:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:38:01.022 01:04:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:38:01.022 01:04:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:38:01.281 [ 00:38:01.281 { 00:38:01.281 "name": "BaseBdev2", 00:38:01.281 "aliases": [ 00:38:01.281 "b5aefd53-7612-422c-836b-2f8e72056435" 00:38:01.281 ], 00:38:01.281 "product_name": "Malloc disk", 00:38:01.281 "block_size": 4096, 00:38:01.281 "num_blocks": 8192, 00:38:01.281 "uuid": "b5aefd53-7612-422c-836b-2f8e72056435", 00:38:01.281 "md_size": 32, 00:38:01.281 "md_interleave": false, 00:38:01.281 "dif_type": 0, 00:38:01.281 "assigned_rate_limits": { 00:38:01.281 "rw_ios_per_sec": 0, 00:38:01.281 "rw_mbytes_per_sec": 0, 00:38:01.281 "r_mbytes_per_sec": 0, 00:38:01.281 "w_mbytes_per_sec": 0 00:38:01.281 }, 00:38:01.281 "claimed": true, 00:38:01.281 "claim_type": "exclusive_write", 00:38:01.281 "zoned": false, 00:38:01.281 "supported_io_types": { 00:38:01.281 "read": true, 00:38:01.281 "write": true, 00:38:01.281 "unmap": true, 00:38:01.281 "flush": true, 00:38:01.281 "reset": true, 00:38:01.281 "nvme_admin": false, 00:38:01.281 "nvme_io": false, 00:38:01.281 "nvme_io_md": false, 00:38:01.281 "write_zeroes": true, 00:38:01.281 "zcopy": true, 00:38:01.281 "get_zone_info": false, 00:38:01.281 "zone_management": false, 00:38:01.281 "zone_append": false, 00:38:01.281 "compare": false, 00:38:01.281 "compare_and_write": false, 00:38:01.281 "abort": true, 00:38:01.281 "seek_hole": false, 00:38:01.281 "seek_data": false, 00:38:01.281 "copy": true, 00:38:01.281 "nvme_iov_md": false 00:38:01.281 }, 00:38:01.281 "memory_domains": [ 00:38:01.281 { 00:38:01.281 "dma_device_id": "system", 00:38:01.281 "dma_device_type": 1 00:38:01.281 }, 00:38:01.281 { 00:38:01.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:01.281 "dma_device_type": 2 00:38:01.281 } 00:38:01.281 ], 00:38:01.281 "driver_specific": {} 00:38:01.281 } 00:38:01.281 ] 00:38:01.281 01:04:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:38:01.281 01:04:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:38:01.281 01:04:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:38:01.281 01:04:23 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:38:01.281 01:04:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:38:01.281 01:04:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:01.281 01:04:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:01.281 01:04:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:01.281 01:04:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:01.281 01:04:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:01.281 01:04:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:01.281 01:04:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:01.281 01:04:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:01.281 01:04:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:01.281 01:04:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:01.541 01:04:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:01.541 "name": "Existed_Raid", 00:38:01.541 "uuid": "6d9ce271-fac0-4a48-8e89-f87bd1db6b40", 00:38:01.541 "strip_size_kb": 0, 00:38:01.541 "state": "online", 00:38:01.541 "raid_level": "raid1", 00:38:01.541 "superblock": true, 00:38:01.541 "num_base_bdevs": 2, 00:38:01.541 "num_base_bdevs_discovered": 2, 00:38:01.541 "num_base_bdevs_operational": 2, 00:38:01.541 "base_bdevs_list": [ 00:38:01.541 { 00:38:01.541 "name": "BaseBdev1", 00:38:01.541 "uuid": "bca7a9f8-b058-4bec-863f-603dfe729ed9", 00:38:01.541 "is_configured": true, 00:38:01.541 "data_offset": 256, 00:38:01.541 "data_size": 7936 00:38:01.541 }, 00:38:01.541 { 00:38:01.541 "name": "BaseBdev2", 00:38:01.541 "uuid": "b5aefd53-7612-422c-836b-2f8e72056435", 00:38:01.541 "is_configured": true, 00:38:01.541 "data_offset": 256, 00:38:01.541 "data_size": 7936 00:38:01.541 } 00:38:01.541 ] 00:38:01.541 }' 00:38:01.541 01:04:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:01.541 01:04:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:02.108 01:04:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:38:02.108 01:04:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:38:02.108 01:04:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:38:02.108 01:04:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:38:02.108 01:04:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:38:02.108 01:04:24 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:38:02.108 01:04:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:38:02.108 01:04:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:38:02.368 [2024-07-25 01:04:24.779499] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:02.368 01:04:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:38:02.368 "name": "Existed_Raid", 00:38:02.368 "aliases": [ 00:38:02.368 "6d9ce271-fac0-4a48-8e89-f87bd1db6b40" 00:38:02.368 ], 00:38:02.368 "product_name": "Raid Volume", 00:38:02.368 "block_size": 4096, 00:38:02.368 "num_blocks": 7936, 00:38:02.368 "uuid": "6d9ce271-fac0-4a48-8e89-f87bd1db6b40", 00:38:02.368 "md_size": 32, 00:38:02.368 "md_interleave": false, 00:38:02.368 "dif_type": 0, 00:38:02.368 "assigned_rate_limits": { 00:38:02.368 "rw_ios_per_sec": 0, 00:38:02.368 "rw_mbytes_per_sec": 0, 00:38:02.368 "r_mbytes_per_sec": 0, 00:38:02.368 "w_mbytes_per_sec": 0 00:38:02.368 }, 00:38:02.368 "claimed": false, 00:38:02.368 "zoned": false, 00:38:02.368 "supported_io_types": { 00:38:02.368 "read": true, 00:38:02.368 "write": true, 00:38:02.368 "unmap": false, 00:38:02.368 "flush": false, 00:38:02.368 "reset": true, 00:38:02.368 "nvme_admin": false, 00:38:02.368 "nvme_io": false, 00:38:02.368 "nvme_io_md": false, 00:38:02.368 "write_zeroes": true, 00:38:02.368 "zcopy": false, 00:38:02.368 "get_zone_info": false, 00:38:02.368 "zone_management": false, 00:38:02.368 "zone_append": false, 00:38:02.368 "compare": false, 00:38:02.368 "compare_and_write": false, 00:38:02.368 "abort": false, 00:38:02.368 "seek_hole": false, 00:38:02.368 "seek_data": false, 00:38:02.368 "copy": false, 00:38:02.368 "nvme_iov_md": false 00:38:02.368 }, 00:38:02.368 "memory_domains": [ 00:38:02.368 { 00:38:02.368 "dma_device_id": "system", 00:38:02.368 "dma_device_type": 1 00:38:02.368 }, 00:38:02.368 { 00:38:02.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:02.368 "dma_device_type": 2 00:38:02.368 }, 00:38:02.368 { 00:38:02.368 "dma_device_id": "system", 00:38:02.368 "dma_device_type": 1 00:38:02.368 }, 00:38:02.368 { 00:38:02.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:02.368 "dma_device_type": 2 00:38:02.368 } 00:38:02.368 ], 00:38:02.368 "driver_specific": { 00:38:02.368 "raid": { 00:38:02.368 "uuid": "6d9ce271-fac0-4a48-8e89-f87bd1db6b40", 00:38:02.368 "strip_size_kb": 0, 00:38:02.368 "state": "online", 00:38:02.368 "raid_level": "raid1", 00:38:02.368 "superblock": true, 00:38:02.368 "num_base_bdevs": 2, 00:38:02.368 "num_base_bdevs_discovered": 2, 00:38:02.368 "num_base_bdevs_operational": 2, 00:38:02.368 "base_bdevs_list": [ 00:38:02.368 { 00:38:02.368 "name": "BaseBdev1", 00:38:02.368 "uuid": "bca7a9f8-b058-4bec-863f-603dfe729ed9", 00:38:02.368 "is_configured": true, 00:38:02.368 "data_offset": 256, 00:38:02.368 "data_size": 7936 00:38:02.368 }, 00:38:02.368 { 00:38:02.368 "name": "BaseBdev2", 00:38:02.368 "uuid": "b5aefd53-7612-422c-836b-2f8e72056435", 00:38:02.368 "is_configured": true, 00:38:02.368 "data_offset": 256, 00:38:02.368 "data_size": 7936 00:38:02.368 } 00:38:02.368 ] 00:38:02.368 } 00:38:02.368 } 00:38:02.368 }' 00:38:02.368 01:04:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:38:02.368 01:04:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:38:02.368 BaseBdev2' 00:38:02.368 01:04:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:38:02.368 01:04:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:38:02.368 01:04:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:38:02.627 01:04:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:38:02.627 "name": "BaseBdev1", 00:38:02.627 "aliases": [ 00:38:02.627 "bca7a9f8-b058-4bec-863f-603dfe729ed9" 00:38:02.627 ], 00:38:02.627 "product_name": "Malloc disk", 00:38:02.627 "block_size": 4096, 00:38:02.627 "num_blocks": 8192, 00:38:02.627 "uuid": "bca7a9f8-b058-4bec-863f-603dfe729ed9", 00:38:02.627 "md_size": 32, 00:38:02.627 "md_interleave": false, 00:38:02.627 "dif_type": 0, 00:38:02.627 "assigned_rate_limits": { 00:38:02.627 "rw_ios_per_sec": 0, 00:38:02.627 "rw_mbytes_per_sec": 0, 00:38:02.627 "r_mbytes_per_sec": 0, 00:38:02.627 "w_mbytes_per_sec": 0 00:38:02.627 }, 00:38:02.627 "claimed": true, 00:38:02.627 "claim_type": "exclusive_write", 00:38:02.627 "zoned": false, 00:38:02.627 "supported_io_types": { 00:38:02.627 "read": true, 00:38:02.627 "write": true, 00:38:02.627 "unmap": true, 00:38:02.627 "flush": true, 00:38:02.627 "reset": true, 00:38:02.627 "nvme_admin": false, 00:38:02.628 "nvme_io": false, 00:38:02.628 "nvme_io_md": false, 00:38:02.628 "write_zeroes": true, 00:38:02.628 "zcopy": true, 00:38:02.628 "get_zone_info": false, 00:38:02.628 "zone_management": false, 00:38:02.628 "zone_append": false, 00:38:02.628 "compare": false, 00:38:02.628 "compare_and_write": false, 00:38:02.628 "abort": true, 00:38:02.628 "seek_hole": false, 00:38:02.628 "seek_data": false, 00:38:02.628 "copy": true, 00:38:02.628 "nvme_iov_md": false 00:38:02.628 }, 00:38:02.628 "memory_domains": [ 00:38:02.628 { 00:38:02.628 "dma_device_id": "system", 00:38:02.628 "dma_device_type": 1 00:38:02.628 }, 00:38:02.628 { 00:38:02.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:02.628 "dma_device_type": 2 00:38:02.628 } 00:38:02.628 ], 00:38:02.628 "driver_specific": {} 00:38:02.628 }' 00:38:02.628 01:04:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:02.628 01:04:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:02.628 01:04:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:38:02.628 01:04:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:02.628 01:04:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:02.628 01:04:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:38:02.628 01:04:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:02.628 01:04:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:02.887 01:04:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 
false == false ]] 00:38:02.887 01:04:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:02.887 01:04:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:02.887 01:04:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:38:02.887 01:04:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:38:02.887 01:04:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:38:02.887 01:04:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:38:03.146 01:04:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:38:03.146 "name": "BaseBdev2", 00:38:03.146 "aliases": [ 00:38:03.146 "b5aefd53-7612-422c-836b-2f8e72056435" 00:38:03.146 ], 00:38:03.146 "product_name": "Malloc disk", 00:38:03.146 "block_size": 4096, 00:38:03.146 "num_blocks": 8192, 00:38:03.146 "uuid": "b5aefd53-7612-422c-836b-2f8e72056435", 00:38:03.146 "md_size": 32, 00:38:03.146 "md_interleave": false, 00:38:03.146 "dif_type": 0, 00:38:03.146 "assigned_rate_limits": { 00:38:03.146 "rw_ios_per_sec": 0, 00:38:03.146 "rw_mbytes_per_sec": 0, 00:38:03.146 "r_mbytes_per_sec": 0, 00:38:03.146 "w_mbytes_per_sec": 0 00:38:03.146 }, 00:38:03.146 "claimed": true, 00:38:03.146 "claim_type": "exclusive_write", 00:38:03.146 "zoned": false, 00:38:03.146 "supported_io_types": { 00:38:03.146 "read": true, 00:38:03.146 "write": true, 00:38:03.146 "unmap": true, 00:38:03.146 "flush": true, 00:38:03.146 "reset": true, 00:38:03.146 "nvme_admin": false, 00:38:03.146 "nvme_io": false, 00:38:03.146 "nvme_io_md": false, 00:38:03.146 "write_zeroes": true, 00:38:03.146 "zcopy": true, 00:38:03.146 "get_zone_info": false, 00:38:03.146 "zone_management": false, 00:38:03.146 "zone_append": false, 00:38:03.146 "compare": false, 00:38:03.146 "compare_and_write": false, 00:38:03.146 "abort": true, 00:38:03.146 "seek_hole": false, 00:38:03.146 "seek_data": false, 00:38:03.146 "copy": true, 00:38:03.146 "nvme_iov_md": false 00:38:03.146 }, 00:38:03.146 "memory_domains": [ 00:38:03.146 { 00:38:03.146 "dma_device_id": "system", 00:38:03.146 "dma_device_type": 1 00:38:03.146 }, 00:38:03.146 { 00:38:03.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:03.146 "dma_device_type": 2 00:38:03.146 } 00:38:03.146 ], 00:38:03.146 "driver_specific": {} 00:38:03.146 }' 00:38:03.146 01:04:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:03.146 01:04:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:03.146 01:04:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:38:03.146 01:04:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:03.146 01:04:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:03.146 01:04:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:38:03.146 01:04:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:03.405 01:04:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 
-- # jq .md_interleave 00:38:03.405 01:04:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:38:03.405 01:04:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:03.405 01:04:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:03.405 01:04:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:38:03.405 01:04:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:38:03.664 [2024-07-25 01:04:26.219631] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:03.924 01:04:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:38:03.924 01:04:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:38:03.924 01:04:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:38:03.924 01:04:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:38:03.924 01:04:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:38:03.924 01:04:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:38:03.924 01:04:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:38:03.924 01:04:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:03.924 01:04:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:03.924 01:04:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:03.924 01:04:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:03.924 01:04:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:03.924 01:04:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:03.924 01:04:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:03.924 01:04:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:03.924 01:04:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:03.924 01:04:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:03.924 01:04:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:03.924 "name": "Existed_Raid", 00:38:03.924 "uuid": "6d9ce271-fac0-4a48-8e89-f87bd1db6b40", 00:38:03.924 "strip_size_kb": 0, 00:38:03.924 "state": "online", 00:38:03.924 "raid_level": "raid1", 00:38:03.924 "superblock": true, 00:38:03.924 "num_base_bdevs": 2, 00:38:03.924 "num_base_bdevs_discovered": 1, 00:38:03.924 "num_base_bdevs_operational": 1, 00:38:03.924 "base_bdevs_list": [ 
00:38:03.924 { 00:38:03.924 "name": null, 00:38:03.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:03.924 "is_configured": false, 00:38:03.924 "data_offset": 256, 00:38:03.924 "data_size": 7936 00:38:03.924 }, 00:38:03.924 { 00:38:03.924 "name": "BaseBdev2", 00:38:03.924 "uuid": "b5aefd53-7612-422c-836b-2f8e72056435", 00:38:03.924 "is_configured": true, 00:38:03.924 "data_offset": 256, 00:38:03.924 "data_size": 7936 00:38:03.924 } 00:38:03.924 ] 00:38:03.924 }' 00:38:03.924 01:04:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:03.924 01:04:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:04.860 01:04:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:38:04.860 01:04:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:38:04.860 01:04:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:38:04.860 01:04:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:04.860 01:04:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:38:04.860 01:04:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:38:04.860 01:04:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:38:05.119 [2024-07-25 01:04:27.623549] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:38:05.119 [2024-07-25 01:04:27.623662] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:05.119 [2024-07-25 01:04:27.732452] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:05.119 [2024-07-25 01:04:27.732500] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:05.119 [2024-07-25 01:04:27.732509] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:38:05.119 01:04:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:38:05.119 01:04:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:38:05.119 01:04:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:05.119 01:04:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:38:05.378 01:04:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:38:05.378 01:04:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:38:05.378 01:04:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:38:05.378 01:04:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 160883 00:38:05.379 01:04:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@948 -- # 
'[' -z 160883 ']' 00:38:05.379 01:04:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 160883 00:38:05.379 01:04:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:38:05.379 01:04:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:05.379 01:04:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 160883 00:38:05.379 01:04:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:05.379 01:04:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:05.379 01:04:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 160883' 00:38:05.379 killing process with pid 160883 00:38:05.379 01:04:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 160883 00:38:05.379 [2024-07-25 01:04:27.958056] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:05.379 [2024-07-25 01:04:27.958164] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:05.379 01:04:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 160883 00:38:06.757 ************************************ 00:38:06.757 END TEST raid_state_function_test_sb_md_separate 00:38:06.757 ************************************ 00:38:06.757 01:04:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:38:06.757 00:38:06.757 real 0m11.207s 00:38:06.757 user 0m18.857s 00:38:06.757 sys 0m1.829s 00:38:06.757 01:04:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:06.757 01:04:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:06.757 01:04:29 bdev_raid -- bdev/bdev_raid.sh@906 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:38:06.757 01:04:29 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:38:06.757 01:04:29 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:06.757 01:04:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:06.757 ************************************ 00:38:06.757 START TEST raid_superblock_test_md_separate 00:38:06.757 ************************************ 00:38:06.757 01:04:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:38:06.757 01:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:38:06.757 01:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:38:06.757 01:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:38:06.757 01:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:38:06.757 01:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:38:06.757 01:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:38:06.758 01:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 
00:38:06.758 01:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:38:06.758 01:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:38:06.758 01:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local strip_size 00:38:06.758 01:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:38:06.758 01:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:38:06.758 01:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:38:06.758 01:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:38:06.758 01:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:38:06.758 01:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # raid_pid=161248 00:38:06.758 01:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # waitforlisten 161248 /var/tmp/spdk-raid.sock 00:38:06.758 01:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:38:06.758 01:04:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@829 -- # '[' -z 161248 ']' 00:38:06.758 01:04:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:38:06.758 01:04:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:06.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:38:06.758 01:04:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:38:06.758 01:04:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:06.758 01:04:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:07.017 [2024-07-25 01:04:29.439123] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:38:07.017 [2024-07-25 01:04:29.439334] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161248 ] 00:38:07.017 [2024-07-25 01:04:29.615781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:07.276 [2024-07-25 01:04:29.805449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:07.534 [2024-07-25 01:04:30.011022] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:07.793 01:04:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:07.793 01:04:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # return 0 00:38:07.793 01:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:38:07.793 01:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:38:07.793 01:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:38:07.793 01:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:38:07.793 01:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:38:07.793 01:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:38:07.793 01:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:38:07.793 01:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:38:07.793 01:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:38:08.052 malloc1 00:38:08.052 01:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:38:08.311 [2024-07-25 01:04:30.726497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:38:08.311 [2024-07-25 01:04:30.726593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:08.311 [2024-07-25 01:04:30.726651] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:38:08.311 [2024-07-25 01:04:30.726674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:08.311 [2024-07-25 01:04:30.728763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:08.311 [2024-07-25 01:04:30.728813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:38:08.311 pt1 00:38:08.311 01:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:38:08.311 01:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:38:08.311 01:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:38:08.311 01:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:38:08.311 
01:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:38:08.311 01:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:38:08.311 01:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:38:08.311 01:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:38:08.311 01:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:38:08.571 malloc2 00:38:08.571 01:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:38:08.571 [2024-07-25 01:04:31.202830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:38:08.571 [2024-07-25 01:04:31.202939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:08.571 [2024-07-25 01:04:31.202974] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:38:08.571 [2024-07-25 01:04:31.202996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:08.571 [2024-07-25 01:04:31.204928] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:08.571 [2024-07-25 01:04:31.204977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:38:08.571 pt2 00:38:08.571 01:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:38:08.571 01:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:38:08.571 01:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:38:08.830 [2024-07-25 01:04:31.378905] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:38:08.830 [2024-07-25 01:04:31.380863] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:08.830 [2024-07-25 01:04:31.381056] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:38:08.830 [2024-07-25 01:04:31.381068] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:38:08.830 [2024-07-25 01:04:31.381180] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:38:08.830 [2024-07-25 01:04:31.381290] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:38:08.830 [2024-07-25 01:04:31.381298] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:38:08.830 [2024-07-25 01:04:31.381402] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:08.830 01:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:08.830 01:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:08.830 01:04:31 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:08.831 01:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:08.831 01:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:08.831 01:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:08.831 01:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:08.831 01:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:08.831 01:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:08.831 01:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:08.831 01:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:08.831 01:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:09.109 01:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:09.109 "name": "raid_bdev1", 00:38:09.109 "uuid": "3c9c3328-a620-4d02-ab79-f01f984e5f3c", 00:38:09.109 "strip_size_kb": 0, 00:38:09.109 "state": "online", 00:38:09.109 "raid_level": "raid1", 00:38:09.109 "superblock": true, 00:38:09.109 "num_base_bdevs": 2, 00:38:09.109 "num_base_bdevs_discovered": 2, 00:38:09.109 "num_base_bdevs_operational": 2, 00:38:09.109 "base_bdevs_list": [ 00:38:09.109 { 00:38:09.109 "name": "pt1", 00:38:09.109 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:09.109 "is_configured": true, 00:38:09.109 "data_offset": 256, 00:38:09.109 "data_size": 7936 00:38:09.109 }, 00:38:09.109 { 00:38:09.109 "name": "pt2", 00:38:09.109 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:09.109 "is_configured": true, 00:38:09.109 "data_offset": 256, 00:38:09.109 "data_size": 7936 00:38:09.109 } 00:38:09.109 ] 00:38:09.109 }' 00:38:09.109 01:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:09.109 01:04:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:09.690 01:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:38:09.690 01:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:38:09.690 01:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:38:09.690 01:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:38:09.690 01:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:38:09.690 01:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:38:09.690 01:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:09.690 01:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:38:09.690 [2024-07-25 01:04:32.315270] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:09.690 
01:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:38:09.690 "name": "raid_bdev1", 00:38:09.690 "aliases": [ 00:38:09.690 "3c9c3328-a620-4d02-ab79-f01f984e5f3c" 00:38:09.690 ], 00:38:09.690 "product_name": "Raid Volume", 00:38:09.690 "block_size": 4096, 00:38:09.690 "num_blocks": 7936, 00:38:09.690 "uuid": "3c9c3328-a620-4d02-ab79-f01f984e5f3c", 00:38:09.690 "md_size": 32, 00:38:09.690 "md_interleave": false, 00:38:09.690 "dif_type": 0, 00:38:09.690 "assigned_rate_limits": { 00:38:09.690 "rw_ios_per_sec": 0, 00:38:09.690 "rw_mbytes_per_sec": 0, 00:38:09.690 "r_mbytes_per_sec": 0, 00:38:09.690 "w_mbytes_per_sec": 0 00:38:09.690 }, 00:38:09.690 "claimed": false, 00:38:09.690 "zoned": false, 00:38:09.690 "supported_io_types": { 00:38:09.690 "read": true, 00:38:09.690 "write": true, 00:38:09.690 "unmap": false, 00:38:09.690 "flush": false, 00:38:09.690 "reset": true, 00:38:09.690 "nvme_admin": false, 00:38:09.690 "nvme_io": false, 00:38:09.690 "nvme_io_md": false, 00:38:09.690 "write_zeroes": true, 00:38:09.690 "zcopy": false, 00:38:09.690 "get_zone_info": false, 00:38:09.690 "zone_management": false, 00:38:09.690 "zone_append": false, 00:38:09.690 "compare": false, 00:38:09.690 "compare_and_write": false, 00:38:09.690 "abort": false, 00:38:09.690 "seek_hole": false, 00:38:09.690 "seek_data": false, 00:38:09.690 "copy": false, 00:38:09.690 "nvme_iov_md": false 00:38:09.690 }, 00:38:09.690 "memory_domains": [ 00:38:09.690 { 00:38:09.690 "dma_device_id": "system", 00:38:09.690 "dma_device_type": 1 00:38:09.690 }, 00:38:09.690 { 00:38:09.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:09.690 "dma_device_type": 2 00:38:09.690 }, 00:38:09.690 { 00:38:09.690 "dma_device_id": "system", 00:38:09.690 "dma_device_type": 1 00:38:09.690 }, 00:38:09.690 { 00:38:09.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:09.690 "dma_device_type": 2 00:38:09.690 } 00:38:09.690 ], 00:38:09.690 "driver_specific": { 00:38:09.690 "raid": { 00:38:09.690 "uuid": "3c9c3328-a620-4d02-ab79-f01f984e5f3c", 00:38:09.690 "strip_size_kb": 0, 00:38:09.690 "state": "online", 00:38:09.690 "raid_level": "raid1", 00:38:09.690 "superblock": true, 00:38:09.690 "num_base_bdevs": 2, 00:38:09.690 "num_base_bdevs_discovered": 2, 00:38:09.690 "num_base_bdevs_operational": 2, 00:38:09.690 "base_bdevs_list": [ 00:38:09.690 { 00:38:09.690 "name": "pt1", 00:38:09.690 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:09.690 "is_configured": true, 00:38:09.690 "data_offset": 256, 00:38:09.690 "data_size": 7936 00:38:09.690 }, 00:38:09.690 { 00:38:09.690 "name": "pt2", 00:38:09.690 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:09.690 "is_configured": true, 00:38:09.690 "data_offset": 256, 00:38:09.690 "data_size": 7936 00:38:09.690 } 00:38:09.690 ] 00:38:09.690 } 00:38:09.690 } 00:38:09.691 }' 00:38:09.691 01:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:38:09.950 01:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:38:09.950 pt2' 00:38:09.950 01:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:38:09.950 01:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:38:09.950 01:04:32 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:38:09.950 01:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:38:09.950 "name": "pt1", 00:38:09.950 "aliases": [ 00:38:09.950 "00000000-0000-0000-0000-000000000001" 00:38:09.950 ], 00:38:09.950 "product_name": "passthru", 00:38:09.950 "block_size": 4096, 00:38:09.950 "num_blocks": 8192, 00:38:09.950 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:09.950 "md_size": 32, 00:38:09.950 "md_interleave": false, 00:38:09.950 "dif_type": 0, 00:38:09.950 "assigned_rate_limits": { 00:38:09.950 "rw_ios_per_sec": 0, 00:38:09.950 "rw_mbytes_per_sec": 0, 00:38:09.950 "r_mbytes_per_sec": 0, 00:38:09.950 "w_mbytes_per_sec": 0 00:38:09.950 }, 00:38:09.950 "claimed": true, 00:38:09.950 "claim_type": "exclusive_write", 00:38:09.950 "zoned": false, 00:38:09.950 "supported_io_types": { 00:38:09.950 "read": true, 00:38:09.950 "write": true, 00:38:09.950 "unmap": true, 00:38:09.950 "flush": true, 00:38:09.950 "reset": true, 00:38:09.950 "nvme_admin": false, 00:38:09.950 "nvme_io": false, 00:38:09.950 "nvme_io_md": false, 00:38:09.950 "write_zeroes": true, 00:38:09.950 "zcopy": true, 00:38:09.950 "get_zone_info": false, 00:38:09.950 "zone_management": false, 00:38:09.950 "zone_append": false, 00:38:09.950 "compare": false, 00:38:09.950 "compare_and_write": false, 00:38:09.950 "abort": true, 00:38:09.950 "seek_hole": false, 00:38:09.950 "seek_data": false, 00:38:09.950 "copy": true, 00:38:09.950 "nvme_iov_md": false 00:38:09.950 }, 00:38:09.950 "memory_domains": [ 00:38:09.950 { 00:38:09.950 "dma_device_id": "system", 00:38:09.950 "dma_device_type": 1 00:38:09.950 }, 00:38:09.950 { 00:38:09.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:09.950 "dma_device_type": 2 00:38:09.950 } 00:38:09.950 ], 00:38:09.950 "driver_specific": { 00:38:09.950 "passthru": { 00:38:09.950 "name": "pt1", 00:38:09.950 "base_bdev_name": "malloc1" 00:38:09.950 } 00:38:09.950 } 00:38:09.950 }' 00:38:09.950 01:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:09.950 01:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:10.209 01:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:38:10.209 01:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:10.209 01:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:10.209 01:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:38:10.209 01:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:10.209 01:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:10.209 01:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:38:10.209 01:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:10.209 01:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:10.209 01:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:38:10.209 01:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:38:10.210 01:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:38:10.210 01:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:38:10.469 01:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:38:10.469 "name": "pt2", 00:38:10.469 "aliases": [ 00:38:10.469 "00000000-0000-0000-0000-000000000002" 00:38:10.469 ], 00:38:10.469 "product_name": "passthru", 00:38:10.469 "block_size": 4096, 00:38:10.469 "num_blocks": 8192, 00:38:10.469 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:10.469 "md_size": 32, 00:38:10.469 "md_interleave": false, 00:38:10.469 "dif_type": 0, 00:38:10.469 "assigned_rate_limits": { 00:38:10.469 "rw_ios_per_sec": 0, 00:38:10.469 "rw_mbytes_per_sec": 0, 00:38:10.469 "r_mbytes_per_sec": 0, 00:38:10.469 "w_mbytes_per_sec": 0 00:38:10.469 }, 00:38:10.469 "claimed": true, 00:38:10.469 "claim_type": "exclusive_write", 00:38:10.469 "zoned": false, 00:38:10.469 "supported_io_types": { 00:38:10.469 "read": true, 00:38:10.469 "write": true, 00:38:10.469 "unmap": true, 00:38:10.469 "flush": true, 00:38:10.469 "reset": true, 00:38:10.469 "nvme_admin": false, 00:38:10.469 "nvme_io": false, 00:38:10.469 "nvme_io_md": false, 00:38:10.469 "write_zeroes": true, 00:38:10.469 "zcopy": true, 00:38:10.469 "get_zone_info": false, 00:38:10.469 "zone_management": false, 00:38:10.469 "zone_append": false, 00:38:10.469 "compare": false, 00:38:10.469 "compare_and_write": false, 00:38:10.469 "abort": true, 00:38:10.469 "seek_hole": false, 00:38:10.469 "seek_data": false, 00:38:10.469 "copy": true, 00:38:10.469 "nvme_iov_md": false 00:38:10.469 }, 00:38:10.469 "memory_domains": [ 00:38:10.469 { 00:38:10.469 "dma_device_id": "system", 00:38:10.469 "dma_device_type": 1 00:38:10.469 }, 00:38:10.469 { 00:38:10.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:10.469 "dma_device_type": 2 00:38:10.469 } 00:38:10.469 ], 00:38:10.469 "driver_specific": { 00:38:10.469 "passthru": { 00:38:10.469 "name": "pt2", 00:38:10.469 "base_bdev_name": "malloc2" 00:38:10.469 } 00:38:10.469 } 00:38:10.469 }' 00:38:10.469 01:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:10.469 01:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:10.469 01:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:38:10.469 01:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:10.728 01:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:10.728 01:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:38:10.728 01:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:10.728 01:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:10.728 01:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:38:10.728 01:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:10.728 01:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:10.728 01:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:38:10.728 01:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:10.728 01:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:38:10.985 [2024-07-25 01:04:33.611491] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:10.986 01:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=3c9c3328-a620-4d02-ab79-f01f984e5f3c 00:38:10.986 01:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # '[' -z 3c9c3328-a620-4d02-ab79-f01f984e5f3c ']' 00:38:10.986 01:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:38:11.243 [2024-07-25 01:04:33.879306] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:11.243 [2024-07-25 01:04:33.879337] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:11.243 [2024-07-25 01:04:33.879407] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:11.243 [2024-07-25 01:04:33.879463] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:11.243 [2024-07-25 01:04:33.879472] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:38:11.502 01:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:38:11.502 01:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:11.761 01:04:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:38:11.761 01:04:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:38:11.761 01:04:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:38:11.761 01:04:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:38:11.761 01:04:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:38:11.761 01:04:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:38:12.019 01:04:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:38:12.019 01:04:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:38:12.277 01:04:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:38:12.277 01:04:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:38:12.277 01:04:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:38:12.278 01:04:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:38:12.278 01:04:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:12.278 01:04:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:12.278 01:04:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:12.278 01:04:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:12.278 01:04:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:12.278 01:04:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:12.278 01:04:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:12.278 01:04:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:38:12.278 01:04:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:38:12.536 [2024-07-25 01:04:34.995486] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:38:12.536 [2024-07-25 01:04:34.997442] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:38:12.536 [2024-07-25 01:04:34.997511] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:38:12.536 [2024-07-25 01:04:34.997613] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:38:12.536 [2024-07-25 01:04:34.997640] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:12.536 [2024-07-25 01:04:34.997649] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:38:12.536 request: 00:38:12.536 { 00:38:12.536 "name": "raid_bdev1", 00:38:12.536 "raid_level": "raid1", 00:38:12.536 "base_bdevs": [ 00:38:12.536 "malloc1", 00:38:12.536 "malloc2" 00:38:12.536 ], 00:38:12.536 "superblock": false, 00:38:12.536 "method": "bdev_raid_create", 00:38:12.536 "req_id": 1 00:38:12.536 } 00:38:12.536 Got JSON-RPC error response 00:38:12.536 response: 00:38:12.536 { 00:38:12.536 "code": -17, 00:38:12.536 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:38:12.536 } 00:38:12.536 01:04:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # es=1 00:38:12.536 01:04:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:12.536 01:04:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:12.536 01:04:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:12.536 01:04:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:38:12.536 01:04:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:38:12.795 01:04:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:38:12.795 01:04:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:38:12.795 01:04:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:38:13.054 [2024-07-25 01:04:35.491554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:38:13.054 [2024-07-25 01:04:35.491650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:13.054 [2024-07-25 01:04:35.491698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:38:13.054 [2024-07-25 01:04:35.491724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:13.054 [2024-07-25 01:04:35.493767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:13.054 [2024-07-25 01:04:35.493838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:38:13.054 [2024-07-25 01:04:35.493946] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:38:13.054 [2024-07-25 01:04:35.494023] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:38:13.054 pt1 00:38:13.054 01:04:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:38:13.054 01:04:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:13.054 01:04:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:38:13.054 01:04:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:13.054 01:04:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:13.054 01:04:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:13.054 01:04:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:13.054 01:04:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:13.054 01:04:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:13.054 01:04:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:13.054 01:04:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:13.054 01:04:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:13.054 01:04:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:13.054 "name": "raid_bdev1", 00:38:13.054 "uuid": "3c9c3328-a620-4d02-ab79-f01f984e5f3c", 00:38:13.054 "strip_size_kb": 0, 00:38:13.054 "state": "configuring", 00:38:13.054 "raid_level": "raid1", 00:38:13.054 "superblock": true, 00:38:13.054 "num_base_bdevs": 2, 00:38:13.054 "num_base_bdevs_discovered": 1, 00:38:13.054 
"num_base_bdevs_operational": 2, 00:38:13.054 "base_bdevs_list": [ 00:38:13.054 { 00:38:13.054 "name": "pt1", 00:38:13.054 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:13.054 "is_configured": true, 00:38:13.054 "data_offset": 256, 00:38:13.054 "data_size": 7936 00:38:13.054 }, 00:38:13.054 { 00:38:13.054 "name": null, 00:38:13.054 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:13.054 "is_configured": false, 00:38:13.054 "data_offset": 256, 00:38:13.054 "data_size": 7936 00:38:13.054 } 00:38:13.054 ] 00:38:13.054 }' 00:38:13.054 01:04:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:13.054 01:04:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:13.621 01:04:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:38:13.621 01:04:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:38:13.621 01:04:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:38:13.621 01:04:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:38:13.880 [2024-07-25 01:04:36.402726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:38:13.880 [2024-07-25 01:04:36.402808] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:13.880 [2024-07-25 01:04:36.402856] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:38:13.880 [2024-07-25 01:04:36.402881] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:13.880 [2024-07-25 01:04:36.403113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:13.880 [2024-07-25 01:04:36.403157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:38:13.880 [2024-07-25 01:04:36.403270] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:38:13.880 [2024-07-25 01:04:36.403288] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:13.880 [2024-07-25 01:04:36.403362] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:38:13.880 [2024-07-25 01:04:36.403369] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:38:13.880 [2024-07-25 01:04:36.403448] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:38:13.880 [2024-07-25 01:04:36.403530] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:38:13.880 [2024-07-25 01:04:36.403540] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:38:13.880 [2024-07-25 01:04:36.403626] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:13.880 pt2 00:38:13.880 01:04:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:38:13.880 01:04:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:38:13.880 01:04:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:13.880 01:04:36 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:13.880 01:04:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:13.880 01:04:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:13.880 01:04:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:13.880 01:04:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:13.880 01:04:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:13.880 01:04:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:13.880 01:04:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:13.880 01:04:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:13.880 01:04:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:13.880 01:04:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:14.139 01:04:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:14.139 "name": "raid_bdev1", 00:38:14.139 "uuid": "3c9c3328-a620-4d02-ab79-f01f984e5f3c", 00:38:14.139 "strip_size_kb": 0, 00:38:14.139 "state": "online", 00:38:14.139 "raid_level": "raid1", 00:38:14.139 "superblock": true, 00:38:14.139 "num_base_bdevs": 2, 00:38:14.139 "num_base_bdevs_discovered": 2, 00:38:14.139 "num_base_bdevs_operational": 2, 00:38:14.139 "base_bdevs_list": [ 00:38:14.139 { 00:38:14.139 "name": "pt1", 00:38:14.139 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:14.139 "is_configured": true, 00:38:14.139 "data_offset": 256, 00:38:14.139 "data_size": 7936 00:38:14.139 }, 00:38:14.139 { 00:38:14.139 "name": "pt2", 00:38:14.139 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:14.139 "is_configured": true, 00:38:14.139 "data_offset": 256, 00:38:14.139 "data_size": 7936 00:38:14.139 } 00:38:14.139 ] 00:38:14.139 }' 00:38:14.139 01:04:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:14.139 01:04:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:14.706 01:04:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:38:14.706 01:04:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:38:14.706 01:04:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:38:14.706 01:04:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:38:14.706 01:04:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:38:14.706 01:04:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:38:14.706 01:04:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:38:14.706 01:04:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 
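The trace above is the verify_raid_bdev_properties step: the assembled raid bdev is read back over the RPC socket and its metadata layout is compared field by field (the same jq checks are then repeated for each base bdev, pt1 and pt2). A minimal standalone sketch of that check, using only the rpc.py path, socket, and expected values visible in the trace; the RPC and info variable names are illustrative and not part of bdev_raid.sh:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # read the bdev back and keep the single JSON object
    info=$($RPC bdev_get_bdevs -b raid_bdev1 | jq '.[]')
    # md_separate layout: 4 KiB blocks, 32-byte separate metadata, no interleave, no DIF
    [[ $(jq .block_size    <<<"$info") == 4096  ]]
    [[ $(jq .md_size       <<<"$info") == 32    ]]
    [[ $(jq .md_interleave <<<"$info") == false ]]
    [[ $(jq .dif_type      <<<"$info") == 0     ]]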
00:38:14.965 [2024-07-25 01:04:37.440460] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:14.965 01:04:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:38:14.965 "name": "raid_bdev1", 00:38:14.965 "aliases": [ 00:38:14.965 "3c9c3328-a620-4d02-ab79-f01f984e5f3c" 00:38:14.965 ], 00:38:14.965 "product_name": "Raid Volume", 00:38:14.965 "block_size": 4096, 00:38:14.965 "num_blocks": 7936, 00:38:14.965 "uuid": "3c9c3328-a620-4d02-ab79-f01f984e5f3c", 00:38:14.965 "md_size": 32, 00:38:14.965 "md_interleave": false, 00:38:14.965 "dif_type": 0, 00:38:14.965 "assigned_rate_limits": { 00:38:14.965 "rw_ios_per_sec": 0, 00:38:14.965 "rw_mbytes_per_sec": 0, 00:38:14.965 "r_mbytes_per_sec": 0, 00:38:14.965 "w_mbytes_per_sec": 0 00:38:14.965 }, 00:38:14.965 "claimed": false, 00:38:14.965 "zoned": false, 00:38:14.965 "supported_io_types": { 00:38:14.965 "read": true, 00:38:14.965 "write": true, 00:38:14.965 "unmap": false, 00:38:14.965 "flush": false, 00:38:14.965 "reset": true, 00:38:14.965 "nvme_admin": false, 00:38:14.965 "nvme_io": false, 00:38:14.965 "nvme_io_md": false, 00:38:14.965 "write_zeroes": true, 00:38:14.965 "zcopy": false, 00:38:14.965 "get_zone_info": false, 00:38:14.965 "zone_management": false, 00:38:14.965 "zone_append": false, 00:38:14.965 "compare": false, 00:38:14.965 "compare_and_write": false, 00:38:14.965 "abort": false, 00:38:14.965 "seek_hole": false, 00:38:14.965 "seek_data": false, 00:38:14.965 "copy": false, 00:38:14.965 "nvme_iov_md": false 00:38:14.965 }, 00:38:14.965 "memory_domains": [ 00:38:14.965 { 00:38:14.965 "dma_device_id": "system", 00:38:14.965 "dma_device_type": 1 00:38:14.965 }, 00:38:14.965 { 00:38:14.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:14.965 "dma_device_type": 2 00:38:14.965 }, 00:38:14.965 { 00:38:14.965 "dma_device_id": "system", 00:38:14.965 "dma_device_type": 1 00:38:14.965 }, 00:38:14.965 { 00:38:14.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:14.965 "dma_device_type": 2 00:38:14.965 } 00:38:14.965 ], 00:38:14.965 "driver_specific": { 00:38:14.965 "raid": { 00:38:14.965 "uuid": "3c9c3328-a620-4d02-ab79-f01f984e5f3c", 00:38:14.965 "strip_size_kb": 0, 00:38:14.965 "state": "online", 00:38:14.965 "raid_level": "raid1", 00:38:14.965 "superblock": true, 00:38:14.965 "num_base_bdevs": 2, 00:38:14.965 "num_base_bdevs_discovered": 2, 00:38:14.965 "num_base_bdevs_operational": 2, 00:38:14.965 "base_bdevs_list": [ 00:38:14.965 { 00:38:14.965 "name": "pt1", 00:38:14.965 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:14.965 "is_configured": true, 00:38:14.965 "data_offset": 256, 00:38:14.965 "data_size": 7936 00:38:14.965 }, 00:38:14.965 { 00:38:14.965 "name": "pt2", 00:38:14.965 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:14.965 "is_configured": true, 00:38:14.965 "data_offset": 256, 00:38:14.965 "data_size": 7936 00:38:14.965 } 00:38:14.965 ] 00:38:14.965 } 00:38:14.965 } 00:38:14.965 }' 00:38:14.965 01:04:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:38:14.965 01:04:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:38:14.965 pt2' 00:38:14.965 01:04:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:38:14.965 01:04:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:38:14.965 01:04:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:38:15.224 01:04:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:38:15.224 "name": "pt1", 00:38:15.224 "aliases": [ 00:38:15.224 "00000000-0000-0000-0000-000000000001" 00:38:15.224 ], 00:38:15.224 "product_name": "passthru", 00:38:15.224 "block_size": 4096, 00:38:15.224 "num_blocks": 8192, 00:38:15.224 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:15.224 "md_size": 32, 00:38:15.224 "md_interleave": false, 00:38:15.224 "dif_type": 0, 00:38:15.224 "assigned_rate_limits": { 00:38:15.224 "rw_ios_per_sec": 0, 00:38:15.224 "rw_mbytes_per_sec": 0, 00:38:15.224 "r_mbytes_per_sec": 0, 00:38:15.224 "w_mbytes_per_sec": 0 00:38:15.224 }, 00:38:15.224 "claimed": true, 00:38:15.224 "claim_type": "exclusive_write", 00:38:15.224 "zoned": false, 00:38:15.224 "supported_io_types": { 00:38:15.224 "read": true, 00:38:15.224 "write": true, 00:38:15.224 "unmap": true, 00:38:15.224 "flush": true, 00:38:15.224 "reset": true, 00:38:15.224 "nvme_admin": false, 00:38:15.224 "nvme_io": false, 00:38:15.224 "nvme_io_md": false, 00:38:15.224 "write_zeroes": true, 00:38:15.224 "zcopy": true, 00:38:15.224 "get_zone_info": false, 00:38:15.224 "zone_management": false, 00:38:15.224 "zone_append": false, 00:38:15.224 "compare": false, 00:38:15.224 "compare_and_write": false, 00:38:15.224 "abort": true, 00:38:15.224 "seek_hole": false, 00:38:15.224 "seek_data": false, 00:38:15.224 "copy": true, 00:38:15.224 "nvme_iov_md": false 00:38:15.224 }, 00:38:15.224 "memory_domains": [ 00:38:15.224 { 00:38:15.224 "dma_device_id": "system", 00:38:15.224 "dma_device_type": 1 00:38:15.224 }, 00:38:15.224 { 00:38:15.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:15.224 "dma_device_type": 2 00:38:15.224 } 00:38:15.224 ], 00:38:15.224 "driver_specific": { 00:38:15.225 "passthru": { 00:38:15.225 "name": "pt1", 00:38:15.225 "base_bdev_name": "malloc1" 00:38:15.225 } 00:38:15.225 } 00:38:15.225 }' 00:38:15.225 01:04:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:15.225 01:04:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:15.225 01:04:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:38:15.225 01:04:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:15.225 01:04:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:15.225 01:04:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:38:15.225 01:04:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:15.225 01:04:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:15.483 01:04:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:38:15.483 01:04:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:15.484 01:04:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:15.484 01:04:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:38:15.484 01:04:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # 
for name in $base_bdev_names 00:38:15.484 01:04:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:38:15.484 01:04:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:38:15.742 01:04:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:38:15.742 "name": "pt2", 00:38:15.742 "aliases": [ 00:38:15.742 "00000000-0000-0000-0000-000000000002" 00:38:15.742 ], 00:38:15.742 "product_name": "passthru", 00:38:15.742 "block_size": 4096, 00:38:15.743 "num_blocks": 8192, 00:38:15.743 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:15.743 "md_size": 32, 00:38:15.743 "md_interleave": false, 00:38:15.743 "dif_type": 0, 00:38:15.743 "assigned_rate_limits": { 00:38:15.743 "rw_ios_per_sec": 0, 00:38:15.743 "rw_mbytes_per_sec": 0, 00:38:15.743 "r_mbytes_per_sec": 0, 00:38:15.743 "w_mbytes_per_sec": 0 00:38:15.743 }, 00:38:15.743 "claimed": true, 00:38:15.743 "claim_type": "exclusive_write", 00:38:15.743 "zoned": false, 00:38:15.743 "supported_io_types": { 00:38:15.743 "read": true, 00:38:15.743 "write": true, 00:38:15.743 "unmap": true, 00:38:15.743 "flush": true, 00:38:15.743 "reset": true, 00:38:15.743 "nvme_admin": false, 00:38:15.743 "nvme_io": false, 00:38:15.743 "nvme_io_md": false, 00:38:15.743 "write_zeroes": true, 00:38:15.743 "zcopy": true, 00:38:15.743 "get_zone_info": false, 00:38:15.743 "zone_management": false, 00:38:15.743 "zone_append": false, 00:38:15.743 "compare": false, 00:38:15.743 "compare_and_write": false, 00:38:15.743 "abort": true, 00:38:15.743 "seek_hole": false, 00:38:15.743 "seek_data": false, 00:38:15.743 "copy": true, 00:38:15.743 "nvme_iov_md": false 00:38:15.743 }, 00:38:15.743 "memory_domains": [ 00:38:15.743 { 00:38:15.743 "dma_device_id": "system", 00:38:15.743 "dma_device_type": 1 00:38:15.743 }, 00:38:15.743 { 00:38:15.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:15.743 "dma_device_type": 2 00:38:15.743 } 00:38:15.743 ], 00:38:15.743 "driver_specific": { 00:38:15.743 "passthru": { 00:38:15.743 "name": "pt2", 00:38:15.743 "base_bdev_name": "malloc2" 00:38:15.743 } 00:38:15.743 } 00:38:15.743 }' 00:38:15.743 01:04:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:15.743 01:04:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:15.743 01:04:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:38:15.743 01:04:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:15.743 01:04:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:15.743 01:04:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:38:15.743 01:04:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:15.743 01:04:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:15.743 01:04:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:38:16.002 01:04:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:16.002 01:04:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:16.002 01:04:38 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:38:16.002 01:04:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:38:16.002 01:04:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:16.002 [2024-07-25 01:04:38.650986] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:16.260 01:04:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # '[' 3c9c3328-a620-4d02-ab79-f01f984e5f3c '!=' 3c9c3328-a620-4d02-ab79-f01f984e5f3c ']' 00:38:16.260 01:04:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:38:16.260 01:04:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:38:16.260 01:04:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:38:16.260 01:04:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:38:16.519 [2024-07-25 01:04:38.922869] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:38:16.519 01:04:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:16.519 01:04:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:16.519 01:04:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:16.519 01:04:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:16.519 01:04:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:16.519 01:04:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:16.519 01:04:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:16.519 01:04:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:16.519 01:04:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:16.519 01:04:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:16.519 01:04:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:16.519 01:04:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:16.777 01:04:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:16.777 "name": "raid_bdev1", 00:38:16.777 "uuid": "3c9c3328-a620-4d02-ab79-f01f984e5f3c", 00:38:16.777 "strip_size_kb": 0, 00:38:16.777 "state": "online", 00:38:16.777 "raid_level": "raid1", 00:38:16.777 "superblock": true, 00:38:16.777 "num_base_bdevs": 2, 00:38:16.777 "num_base_bdevs_discovered": 1, 00:38:16.777 "num_base_bdevs_operational": 1, 00:38:16.777 "base_bdevs_list": [ 00:38:16.777 { 00:38:16.777 "name": null, 00:38:16.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:16.777 "is_configured": false, 00:38:16.777 "data_offset": 256, 00:38:16.777 "data_size": 7936 00:38:16.777 }, 
00:38:16.777 { 00:38:16.777 "name": "pt2", 00:38:16.777 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:16.777 "is_configured": true, 00:38:16.777 "data_offset": 256, 00:38:16.777 "data_size": 7936 00:38:16.777 } 00:38:16.777 ] 00:38:16.777 }' 00:38:16.777 01:04:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:16.777 01:04:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:17.036 01:04:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:38:17.293 [2024-07-25 01:04:39.895023] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:17.293 [2024-07-25 01:04:39.895054] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:17.293 [2024-07-25 01:04:39.895120] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:17.293 [2024-07-25 01:04:39.895168] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:17.293 [2024-07-25 01:04:39.895177] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:38:17.293 01:04:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:17.293 01:04:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:38:17.552 01:04:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:38:17.552 01:04:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:38:17.552 01:04:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:38:17.552 01:04:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:38:17.552 01:04:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:38:17.811 01:04:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:38:17.811 01:04:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:38:17.811 01:04:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:38:17.811 01:04:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:38:17.811 01:04:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@518 -- # i=1 00:38:17.811 01:04:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:38:18.070 [2024-07-25 01:04:40.495108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:38:18.070 [2024-07-25 01:04:40.495196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:18.070 [2024-07-25 01:04:40.495225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:38:18.070 [2024-07-25 01:04:40.495250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:38:18.070 [2024-07-25 01:04:40.497411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:18.070 [2024-07-25 01:04:40.497483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:38:18.070 [2024-07-25 01:04:40.497616] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:38:18.070 [2024-07-25 01:04:40.497680] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:18.070 [2024-07-25 01:04:40.497762] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:38:18.070 [2024-07-25 01:04:40.497784] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:38:18.070 [2024-07-25 01:04:40.497913] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:38:18.070 [2024-07-25 01:04:40.498029] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:38:18.070 [2024-07-25 01:04:40.498054] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:38:18.070 [2024-07-25 01:04:40.498171] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:18.070 pt2 00:38:18.070 01:04:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:18.070 01:04:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:18.070 01:04:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:18.070 01:04:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:18.070 01:04:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:18.070 01:04:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:18.070 01:04:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:18.070 01:04:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:18.070 01:04:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:18.070 01:04:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:18.070 01:04:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:18.070 01:04:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:18.070 01:04:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:18.070 "name": "raid_bdev1", 00:38:18.070 "uuid": "3c9c3328-a620-4d02-ab79-f01f984e5f3c", 00:38:18.070 "strip_size_kb": 0, 00:38:18.070 "state": "online", 00:38:18.070 "raid_level": "raid1", 00:38:18.070 "superblock": true, 00:38:18.070 "num_base_bdevs": 2, 00:38:18.070 "num_base_bdevs_discovered": 1, 00:38:18.070 "num_base_bdevs_operational": 1, 00:38:18.070 "base_bdevs_list": [ 00:38:18.070 { 00:38:18.070 "name": null, 00:38:18.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:18.070 "is_configured": false, 00:38:18.070 "data_offset": 256, 00:38:18.070 "data_size": 7936 00:38:18.070 }, 
00:38:18.070 { 00:38:18.070 "name": "pt2", 00:38:18.070 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:18.070 "is_configured": true, 00:38:18.070 "data_offset": 256, 00:38:18.070 "data_size": 7936 00:38:18.070 } 00:38:18.070 ] 00:38:18.070 }' 00:38:18.070 01:04:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:18.070 01:04:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:18.637 01:04:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:38:18.896 [2024-07-25 01:04:41.467244] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:18.896 [2024-07-25 01:04:41.467284] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:18.896 [2024-07-25 01:04:41.467365] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:18.896 [2024-07-25 01:04:41.467413] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:18.896 [2024-07-25 01:04:41.467421] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:38:18.896 01:04:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:18.896 01:04:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:38:19.155 01:04:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:38:19.155 01:04:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:38:19.155 01:04:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:38:19.155 01:04:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:38:19.415 [2024-07-25 01:04:41.883341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:38:19.415 [2024-07-25 01:04:41.883421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:19.415 [2024-07-25 01:04:41.883472] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:38:19.415 [2024-07-25 01:04:41.883494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:19.415 [2024-07-25 01:04:41.885699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:19.415 [2024-07-25 01:04:41.885777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:38:19.415 [2024-07-25 01:04:41.885887] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:38:19.415 [2024-07-25 01:04:41.885942] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:38:19.415 [2024-07-25 01:04:41.886056] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:38:19.415 [2024-07-25 01:04:41.886071] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:19.415 [2024-07-25 01:04:41.886102] bdev_raid.c: 378:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state configuring 00:38:19.415 [2024-07-25 01:04:41.886180] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:19.415 [2024-07-25 01:04:41.886306] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:38:19.415 [2024-07-25 01:04:41.886322] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:38:19.415 [2024-07-25 01:04:41.886446] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:38:19.415 [2024-07-25 01:04:41.886557] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:38:19.415 [2024-07-25 01:04:41.886572] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:38:19.415 [2024-07-25 01:04:41.886687] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:19.415 pt1 00:38:19.415 01:04:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:38:19.415 01:04:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:19.415 01:04:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:19.415 01:04:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:19.415 01:04:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:19.415 01:04:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:19.415 01:04:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:19.415 01:04:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:19.415 01:04:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:19.415 01:04:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:19.415 01:04:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:19.415 01:04:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:19.415 01:04:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:19.674 01:04:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:19.674 "name": "raid_bdev1", 00:38:19.674 "uuid": "3c9c3328-a620-4d02-ab79-f01f984e5f3c", 00:38:19.674 "strip_size_kb": 0, 00:38:19.674 "state": "online", 00:38:19.674 "raid_level": "raid1", 00:38:19.674 "superblock": true, 00:38:19.674 "num_base_bdevs": 2, 00:38:19.674 "num_base_bdevs_discovered": 1, 00:38:19.674 "num_base_bdevs_operational": 1, 00:38:19.674 "base_bdevs_list": [ 00:38:19.674 { 00:38:19.674 "name": null, 00:38:19.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:19.674 "is_configured": false, 00:38:19.674 "data_offset": 256, 00:38:19.674 "data_size": 7936 00:38:19.674 }, 00:38:19.674 { 00:38:19.674 "name": "pt2", 00:38:19.674 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:19.674 "is_configured": true, 00:38:19.674 "data_offset": 256, 
00:38:19.674 "data_size": 7936 00:38:19.674 } 00:38:19.674 ] 00:38:19.674 }' 00:38:19.674 01:04:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:19.674 01:04:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:20.242 01:04:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:38:20.242 01:04:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:38:20.510 01:04:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:38:20.510 01:04:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:20.510 01:04:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:38:20.510 [2024-07-25 01:04:43.151866] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:20.782 01:04:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 3c9c3328-a620-4d02-ab79-f01f984e5f3c '!=' 3c9c3328-a620-4d02-ab79-f01f984e5f3c ']' 00:38:20.782 01:04:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@562 -- # killprocess 161248 00:38:20.782 01:04:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@948 -- # '[' -z 161248 ']' 00:38:20.782 01:04:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # kill -0 161248 00:38:20.782 01:04:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # uname 00:38:20.782 01:04:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:20.782 01:04:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 161248 00:38:20.782 01:04:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:20.782 01:04:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:20.782 killing process with pid 161248 00:38:20.782 01:04:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 161248' 00:38:20.782 01:04:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@967 -- # kill 161248 00:38:20.782 [2024-07-25 01:04:43.204459] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:20.782 01:04:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # wait 161248 00:38:20.782 [2024-07-25 01:04:43.204549] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:20.782 [2024-07-25 01:04:43.204615] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:20.782 [2024-07-25 01:04:43.204632] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:38:20.782 [2024-07-25 01:04:43.423677] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:22.156 01:04:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@564 -- # return 0 00:38:22.156 00:38:22.156 real 0m15.379s 00:38:22.156 user 
0m27.067s 00:38:22.156 sys 0m2.252s 00:38:22.156 01:04:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:22.156 01:04:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:22.156 ************************************ 00:38:22.156 END TEST raid_superblock_test_md_separate 00:38:22.156 ************************************ 00:38:22.156 01:04:44 bdev_raid -- bdev/bdev_raid.sh@907 -- # '[' true = true ']' 00:38:22.156 01:04:44 bdev_raid -- bdev/bdev_raid.sh@908 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:38:22.156 01:04:44 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:38:22.156 01:04:44 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:22.156 01:04:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:22.415 ************************************ 00:38:22.415 START TEST raid_rebuild_test_sb_md_separate 00:38:22.415 ************************************ 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local verify=true 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local strip_size 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local create_arg 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local data_offset 00:38:22.415 01:04:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # raid_pid=161761 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # waitforlisten 161761 /var/tmp/spdk-raid.sock 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 161761 ']' 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:22.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:22.415 01:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:22.415 [2024-07-25 01:04:44.876160] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:38:22.415 I/O size of 3145728 is greater than zero copy threshold (65536). 00:38:22.415 Zero copy mechanism will not be used. 
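Unlike the superblock test, the rebuild test is driven by the bdevperf example application, started here against the same RPC socket the test script uses. A condensed restatement of that launch with the flags spelled out as comments; the glosses follow the usual bdevperf conventions rather than anything stated in this log, so verify them against bdevperf's own usage output:

    # -r: RPC socket shared with the test script; -T: bdev under test
    # -t 60 -w randrw -M 50: 60 s of mixed random read/write at a 50% read share
    # -o 3M -q 2: 3 MiB I/Os at queue depth 2
    # -z: start idle and wait to be configured over RPC; -L bdev_raid: enable the bdev_raid debug log flag
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid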
00:38:22.415 [2024-07-25 01:04:44.876331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161761 ] 00:38:22.415 [2024-07-25 01:04:45.037708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:22.674 [2024-07-25 01:04:45.272086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:22.933 [2024-07-25 01:04:45.457651] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:23.192 01:04:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:23.192 01:04:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:38:23.192 01:04:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:38:23.192 01:04:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:38:23.759 BaseBdev1_malloc 00:38:23.759 01:04:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:23.759 [2024-07-25 01:04:46.373080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:38:23.759 [2024-07-25 01:04:46.373200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:23.759 [2024-07-25 01:04:46.373241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:38:23.759 [2024-07-25 01:04:46.373261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:23.759 [2024-07-25 01:04:46.375387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:23.759 [2024-07-25 01:04:46.375436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:23.759 BaseBdev1 00:38:23.759 01:04:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:38:23.759 01:04:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:38:24.018 BaseBdev2_malloc 00:38:24.018 01:04:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:38:24.277 [2024-07-25 01:04:46.779674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:38:24.277 [2024-07-25 01:04:46.779804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:24.277 [2024-07-25 01:04:46.779841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:38:24.277 [2024-07-25 01:04:46.779860] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:24.277 [2024-07-25 01:04:46.781860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:24.277 [2024-07-25 01:04:46.781909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev2 00:38:24.277 BaseBdev2 00:38:24.277 01:04:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:38:24.536 spare_malloc 00:38:24.536 01:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:38:24.536 spare_delay 00:38:24.795 01:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:38:24.795 [2024-07-25 01:04:47.356690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:24.795 [2024-07-25 01:04:47.356782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:24.795 [2024-07-25 01:04:47.356838] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:38:24.795 [2024-07-25 01:04:47.356864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:24.795 [2024-07-25 01:04:47.358927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:24.795 [2024-07-25 01:04:47.358980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:24.795 spare 00:38:24.795 01:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:38:25.054 [2024-07-25 01:04:47.528769] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:25.054 [2024-07-25 01:04:47.530730] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:25.054 [2024-07-25 01:04:47.530979] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:38:25.054 [2024-07-25 01:04:47.530992] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:38:25.054 [2024-07-25 01:04:47.531149] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:38:25.054 [2024-07-25 01:04:47.531248] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:38:25.054 [2024-07-25 01:04:47.531257] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:38:25.054 [2024-07-25 01:04:47.531353] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:25.054 01:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:25.054 01:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:25.054 01:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:25.054 01:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:25.054 01:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:25.054 01:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:25.054 01:04:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:25.054 01:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:25.054 01:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:25.054 01:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:25.054 01:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:25.055 01:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:25.314 01:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:25.314 "name": "raid_bdev1", 00:38:25.314 "uuid": "3aaf89df-75d5-449b-9811-8cf7455cc9c4", 00:38:25.314 "strip_size_kb": 0, 00:38:25.314 "state": "online", 00:38:25.314 "raid_level": "raid1", 00:38:25.314 "superblock": true, 00:38:25.314 "num_base_bdevs": 2, 00:38:25.314 "num_base_bdevs_discovered": 2, 00:38:25.314 "num_base_bdevs_operational": 2, 00:38:25.314 "base_bdevs_list": [ 00:38:25.314 { 00:38:25.314 "name": "BaseBdev1", 00:38:25.314 "uuid": "cdbc25d1-3b6d-5184-9a07-fa7fc1fc468a", 00:38:25.314 "is_configured": true, 00:38:25.314 "data_offset": 256, 00:38:25.314 "data_size": 7936 00:38:25.314 }, 00:38:25.314 { 00:38:25.314 "name": "BaseBdev2", 00:38:25.314 "uuid": "004e86e9-8af4-5a06-b478-d91f15852545", 00:38:25.314 "is_configured": true, 00:38:25.314 "data_offset": 256, 00:38:25.314 "data_size": 7936 00:38:25.314 } 00:38:25.314 ] 00:38:25.314 }' 00:38:25.314 01:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:25.314 01:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:25.883 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:25.883 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:38:25.883 [2024-07-25 01:04:48.497126] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:25.883 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:38:25.883 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:25.883 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:38:26.142 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:38:26.142 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:38:26.142 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:38:26.142 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:38:26.142 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:38:26.142 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 
-- # local rpc_server=/var/tmp/spdk-raid.sock 00:38:26.142 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:38:26.142 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:26.142 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:38:26.142 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:26.142 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:38:26.142 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:26.142 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:26.142 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:38:26.401 [2024-07-25 01:04:48.861015] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:38:26.401 /dev/nbd0 00:38:26.401 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:26.401 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:26.401 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:38:26.401 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:38:26.401 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:38:26.401 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:38:26.401 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:38:26.401 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:38:26.401 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:38:26.401 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:38:26.401 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:26.401 1+0 records in 00:38:26.401 1+0 records out 00:38:26.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289652 s, 14.1 MB/s 00:38:26.401 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:26.401 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:38:26.401 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:26.401 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:38:26.401 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:38:26.401 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:26.401 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
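Data is pushed through the new raid bdev by exporting it as a kernel NBD device; the waitfornbd helper traced above polls /proc/partitions and then issues a single direct 4 KiB read to confirm the export answers I/O. A compressed sketch of that attach-and-probe sequence, assuming the rpc.py path and socket from the trace; the retry interval and scratch file path are illustrative placeholders:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC nbd_start_disk raid_bdev1 /dev/nbd0
    # wait (up to 20 tries, as in the trace) for the kernel to publish the partition entry
    for ((i = 1; i <= 20; i++)); do
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1          # interval not shown in the trace; illustrative
    done
    # one O_DIRECT read proves the device is usable before the test writes to it
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    rm -f /tmp/nbdtest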
00:38:26.401 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:38:26.401 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:38:26.401 01:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:38:26.969 7936+0 records in 00:38:26.969 7936+0 records out 00:38:26.969 32505856 bytes (33 MB, 31 MiB) copied, 0.659692 s, 49.3 MB/s 00:38:26.969 01:04:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:38:26.969 01:04:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:38:26.969 01:04:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:38:26.969 01:04:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:26.969 01:04:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:38:26.969 01:04:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:26.969 01:04:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:38:27.228 01:04:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:27.228 [2024-07-25 01:04:49.848869] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:27.228 01:04:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:27.228 01:04:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:27.228 01:04:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:27.228 01:04:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:27.228 01:04:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:27.228 01:04:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:38:27.228 01:04:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:38:27.228 01:04:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:38:27.488 [2024-07-25 01:04:50.112633] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:27.488 01:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:27.488 01:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:27.488 01:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:27.488 01:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:27.488 01:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:27.488 01:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:27.488 01:04:50 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:27.488 01:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:27.488 01:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:27.488 01:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:27.488 01:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:27.488 01:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:27.746 01:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:27.746 "name": "raid_bdev1", 00:38:27.746 "uuid": "3aaf89df-75d5-449b-9811-8cf7455cc9c4", 00:38:27.746 "strip_size_kb": 0, 00:38:27.746 "state": "online", 00:38:27.746 "raid_level": "raid1", 00:38:27.746 "superblock": true, 00:38:27.746 "num_base_bdevs": 2, 00:38:27.746 "num_base_bdevs_discovered": 1, 00:38:27.746 "num_base_bdevs_operational": 1, 00:38:27.746 "base_bdevs_list": [ 00:38:27.746 { 00:38:27.746 "name": null, 00:38:27.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:27.746 "is_configured": false, 00:38:27.746 "data_offset": 256, 00:38:27.746 "data_size": 7936 00:38:27.746 }, 00:38:27.746 { 00:38:27.746 "name": "BaseBdev2", 00:38:27.746 "uuid": "004e86e9-8af4-5a06-b478-d91f15852545", 00:38:27.746 "is_configured": true, 00:38:27.746 "data_offset": 256, 00:38:27.746 "data_size": 7936 00:38:27.746 } 00:38:27.746 ] 00:38:27.746 }' 00:38:27.746 01:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:27.746 01:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:28.314 01:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:38:28.571 [2024-07-25 01:04:51.040787] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:28.571 [2024-07-25 01:04:51.054878] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018cff0 00:38:28.571 [2024-07-25 01:04:51.056812] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:28.571 01:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # sleep 1 00:38:29.505 01:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:29.505 01:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:29.505 01:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:29.505 01:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:29.505 01:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:29.505 01:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:29.505 01:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:29.764 01:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:29.764 "name": "raid_bdev1", 00:38:29.764 "uuid": "3aaf89df-75d5-449b-9811-8cf7455cc9c4", 00:38:29.764 "strip_size_kb": 0, 00:38:29.764 "state": "online", 00:38:29.764 "raid_level": "raid1", 00:38:29.764 "superblock": true, 00:38:29.764 "num_base_bdevs": 2, 00:38:29.764 "num_base_bdevs_discovered": 2, 00:38:29.764 "num_base_bdevs_operational": 2, 00:38:29.764 "process": { 00:38:29.764 "type": "rebuild", 00:38:29.764 "target": "spare", 00:38:29.764 "progress": { 00:38:29.764 "blocks": 3072, 00:38:29.764 "percent": 38 00:38:29.764 } 00:38:29.764 }, 00:38:29.764 "base_bdevs_list": [ 00:38:29.764 { 00:38:29.764 "name": "spare", 00:38:29.764 "uuid": "618ca72e-9e79-5e4f-8eb2-7494ffd06457", 00:38:29.764 "is_configured": true, 00:38:29.764 "data_offset": 256, 00:38:29.764 "data_size": 7936 00:38:29.764 }, 00:38:29.764 { 00:38:29.764 "name": "BaseBdev2", 00:38:29.764 "uuid": "004e86e9-8af4-5a06-b478-d91f15852545", 00:38:29.764 "is_configured": true, 00:38:29.764 "data_offset": 256, 00:38:29.764 "data_size": 7936 00:38:29.764 } 00:38:29.764 ] 00:38:29.764 }' 00:38:29.764 01:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:29.764 01:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:29.764 01:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:29.764 01:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:29.764 01:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:38:30.022 [2024-07-25 01:04:52.614920] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:30.022 [2024-07-25 01:04:52.666084] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:30.022 [2024-07-25 01:04:52.666164] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:30.022 [2024-07-25 01:04:52.666178] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:30.022 [2024-07-25 01:04:52.666185] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:30.281 01:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:30.281 01:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:30.281 01:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:30.281 01:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:30.281 01:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:30.282 01:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:30.282 01:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:30.282 01:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- 
# local num_base_bdevs 00:38:30.282 01:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:30.282 01:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:30.282 01:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:30.282 01:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:30.282 01:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:30.282 "name": "raid_bdev1", 00:38:30.282 "uuid": "3aaf89df-75d5-449b-9811-8cf7455cc9c4", 00:38:30.282 "strip_size_kb": 0, 00:38:30.282 "state": "online", 00:38:30.282 "raid_level": "raid1", 00:38:30.282 "superblock": true, 00:38:30.282 "num_base_bdevs": 2, 00:38:30.282 "num_base_bdevs_discovered": 1, 00:38:30.282 "num_base_bdevs_operational": 1, 00:38:30.282 "base_bdevs_list": [ 00:38:30.282 { 00:38:30.282 "name": null, 00:38:30.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:30.282 "is_configured": false, 00:38:30.282 "data_offset": 256, 00:38:30.282 "data_size": 7936 00:38:30.282 }, 00:38:30.282 { 00:38:30.282 "name": "BaseBdev2", 00:38:30.282 "uuid": "004e86e9-8af4-5a06-b478-d91f15852545", 00:38:30.282 "is_configured": true, 00:38:30.282 "data_offset": 256, 00:38:30.282 "data_size": 7936 00:38:30.282 } 00:38:30.282 ] 00:38:30.282 }' 00:38:30.282 01:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:30.282 01:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:30.850 01:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:30.850 01:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:30.850 01:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:30.850 01:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:30.850 01:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:30.850 01:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:30.850 01:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:31.108 01:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:31.108 "name": "raid_bdev1", 00:38:31.108 "uuid": "3aaf89df-75d5-449b-9811-8cf7455cc9c4", 00:38:31.108 "strip_size_kb": 0, 00:38:31.108 "state": "online", 00:38:31.108 "raid_level": "raid1", 00:38:31.108 "superblock": true, 00:38:31.108 "num_base_bdevs": 2, 00:38:31.108 "num_base_bdevs_discovered": 1, 00:38:31.108 "num_base_bdevs_operational": 1, 00:38:31.108 "base_bdevs_list": [ 00:38:31.108 { 00:38:31.108 "name": null, 00:38:31.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:31.108 "is_configured": false, 00:38:31.108 "data_offset": 256, 00:38:31.108 "data_size": 7936 00:38:31.108 }, 00:38:31.108 { 00:38:31.108 "name": "BaseBdev2", 00:38:31.108 "uuid": 
"004e86e9-8af4-5a06-b478-d91f15852545", 00:38:31.108 "is_configured": true, 00:38:31.108 "data_offset": 256, 00:38:31.108 "data_size": 7936 00:38:31.108 } 00:38:31.108 ] 00:38:31.108 }' 00:38:31.108 01:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:31.108 01:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:31.108 01:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:31.108 01:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:31.108 01:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:38:31.366 [2024-07-25 01:04:53.813915] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:31.366 [2024-07-25 01:04:53.828130] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:38:31.366 [2024-07-25 01:04:53.830057] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:31.366 01:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # sleep 1 00:38:32.301 01:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:32.301 01:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:32.301 01:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:32.301 01:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:32.301 01:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:32.301 01:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:32.301 01:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:32.559 01:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:32.559 "name": "raid_bdev1", 00:38:32.559 "uuid": "3aaf89df-75d5-449b-9811-8cf7455cc9c4", 00:38:32.559 "strip_size_kb": 0, 00:38:32.559 "state": "online", 00:38:32.559 "raid_level": "raid1", 00:38:32.559 "superblock": true, 00:38:32.559 "num_base_bdevs": 2, 00:38:32.559 "num_base_bdevs_discovered": 2, 00:38:32.559 "num_base_bdevs_operational": 2, 00:38:32.559 "process": { 00:38:32.559 "type": "rebuild", 00:38:32.559 "target": "spare", 00:38:32.559 "progress": { 00:38:32.559 "blocks": 2816, 00:38:32.559 "percent": 35 00:38:32.559 } 00:38:32.559 }, 00:38:32.559 "base_bdevs_list": [ 00:38:32.559 { 00:38:32.559 "name": "spare", 00:38:32.559 "uuid": "618ca72e-9e79-5e4f-8eb2-7494ffd06457", 00:38:32.559 "is_configured": true, 00:38:32.559 "data_offset": 256, 00:38:32.559 "data_size": 7936 00:38:32.559 }, 00:38:32.559 { 00:38:32.559 "name": "BaseBdev2", 00:38:32.559 "uuid": "004e86e9-8af4-5a06-b478-d91f15852545", 00:38:32.559 "is_configured": true, 00:38:32.559 "data_offset": 256, 00:38:32.559 "data_size": 7936 00:38:32.559 } 00:38:32.559 ] 00:38:32.559 }' 00:38:32.559 01:04:55 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:32.559 01:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:32.559 01:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:32.559 01:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:32.559 01:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:38:32.559 01:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:38:32.559 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:38:32.559 01:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:38:32.559 01:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:38:32.559 01:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:38:32.559 01:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@705 -- # local timeout=1372 00:38:32.559 01:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:38:32.559 01:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:32.559 01:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:32.559 01:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:32.559 01:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:32.559 01:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:32.560 01:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:32.560 01:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:32.818 01:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:32.818 "name": "raid_bdev1", 00:38:32.818 "uuid": "3aaf89df-75d5-449b-9811-8cf7455cc9c4", 00:38:32.818 "strip_size_kb": 0, 00:38:32.818 "state": "online", 00:38:32.818 "raid_level": "raid1", 00:38:32.818 "superblock": true, 00:38:32.818 "num_base_bdevs": 2, 00:38:32.818 "num_base_bdevs_discovered": 2, 00:38:32.818 "num_base_bdevs_operational": 2, 00:38:32.818 "process": { 00:38:32.818 "type": "rebuild", 00:38:32.818 "target": "spare", 00:38:32.818 "progress": { 00:38:32.818 "blocks": 3840, 00:38:32.818 "percent": 48 00:38:32.818 } 00:38:32.818 }, 00:38:32.818 "base_bdevs_list": [ 00:38:32.818 { 00:38:32.818 "name": "spare", 00:38:32.818 "uuid": "618ca72e-9e79-5e4f-8eb2-7494ffd06457", 00:38:32.818 "is_configured": true, 00:38:32.818 "data_offset": 256, 00:38:32.818 "data_size": 7936 00:38:32.818 }, 00:38:32.818 { 00:38:32.818 "name": "BaseBdev2", 00:38:32.818 "uuid": "004e86e9-8af4-5a06-b478-d91f15852545", 00:38:32.818 "is_configured": true, 00:38:32.818 "data_offset": 256, 00:38:32.818 "data_size": 7936 00:38:32.818 } 00:38:32.818 ] 00:38:32.818 }' 00:38:32.818 
01:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:32.818 01:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:32.818 01:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:32.818 01:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:32.818 01:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:38:34.194 01:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:38:34.195 01:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:34.195 01:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:34.195 01:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:34.195 01:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:34.195 01:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:34.195 01:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:34.195 01:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:34.195 01:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:34.195 "name": "raid_bdev1", 00:38:34.195 "uuid": "3aaf89df-75d5-449b-9811-8cf7455cc9c4", 00:38:34.195 "strip_size_kb": 0, 00:38:34.195 "state": "online", 00:38:34.195 "raid_level": "raid1", 00:38:34.195 "superblock": true, 00:38:34.195 "num_base_bdevs": 2, 00:38:34.195 "num_base_bdevs_discovered": 2, 00:38:34.195 "num_base_bdevs_operational": 2, 00:38:34.195 "process": { 00:38:34.195 "type": "rebuild", 00:38:34.195 "target": "spare", 00:38:34.195 "progress": { 00:38:34.195 "blocks": 7168, 00:38:34.195 "percent": 90 00:38:34.195 } 00:38:34.195 }, 00:38:34.195 "base_bdevs_list": [ 00:38:34.195 { 00:38:34.195 "name": "spare", 00:38:34.195 "uuid": "618ca72e-9e79-5e4f-8eb2-7494ffd06457", 00:38:34.195 "is_configured": true, 00:38:34.195 "data_offset": 256, 00:38:34.195 "data_size": 7936 00:38:34.195 }, 00:38:34.195 { 00:38:34.195 "name": "BaseBdev2", 00:38:34.195 "uuid": "004e86e9-8af4-5a06-b478-d91f15852545", 00:38:34.195 "is_configured": true, 00:38:34.195 "data_offset": 256, 00:38:34.195 "data_size": 7936 00:38:34.195 } 00:38:34.195 ] 00:38:34.195 }' 00:38:34.195 01:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:34.195 01:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:34.195 01:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:34.195 01:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:34.195 01:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:38:34.453 [2024-07-25 01:04:56.947797] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: 
process completed on raid_bdev1 00:38:34.453 [2024-07-25 01:04:56.947860] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:38:34.453 [2024-07-25 01:04:56.947986] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:35.389 01:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:38:35.389 01:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:35.389 01:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:35.389 01:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:35.389 01:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:35.390 01:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:35.390 01:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:35.390 01:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:35.649 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:35.649 "name": "raid_bdev1", 00:38:35.649 "uuid": "3aaf89df-75d5-449b-9811-8cf7455cc9c4", 00:38:35.649 "strip_size_kb": 0, 00:38:35.649 "state": "online", 00:38:35.649 "raid_level": "raid1", 00:38:35.649 "superblock": true, 00:38:35.649 "num_base_bdevs": 2, 00:38:35.649 "num_base_bdevs_discovered": 2, 00:38:35.649 "num_base_bdevs_operational": 2, 00:38:35.649 "base_bdevs_list": [ 00:38:35.649 { 00:38:35.649 "name": "spare", 00:38:35.649 "uuid": "618ca72e-9e79-5e4f-8eb2-7494ffd06457", 00:38:35.649 "is_configured": true, 00:38:35.649 "data_offset": 256, 00:38:35.649 "data_size": 7936 00:38:35.649 }, 00:38:35.649 { 00:38:35.649 "name": "BaseBdev2", 00:38:35.649 "uuid": "004e86e9-8af4-5a06-b478-d91f15852545", 00:38:35.649 "is_configured": true, 00:38:35.649 "data_offset": 256, 00:38:35.649 "data_size": 7936 00:38:35.649 } 00:38:35.649 ] 00:38:35.649 }' 00:38:35.649 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:35.649 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:38:35.649 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:35.649 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:38:35.649 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # break 00:38:35.649 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:35.649 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:35.649 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:35.649 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:35.649 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 
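The lines above are the tail of the rebuild-progress loop: roughly once a second the script re-reads the raid bdev over the RPC socket, extracts .process.type and .process.target with jq, and stops looping once the process object disappears after the "Finished rebuild" notice. A condensed sketch of that polling idiom is below; the rpc.py path, socket and jq filters are taken from the trace, while the helper name and the deadline handling are simplifications relative to bdev_raid.sh.

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # wait_for_rebuild_done NAME TIMEOUT_S: poll until the raid bdev reports no active process
    wait_for_rebuild_done() {
        local name=$1 deadline=$(( SECONDS + $2 )) info ptype
        while (( SECONDS < deadline )); do
            info=$($rpc_py bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
            ptype=$(jq -r '.process.type // "none"' <<< "$info")
            # the "process" object vanishes from the RPC output once the rebuild has finished
            [[ $ptype == none ]] && return 0
            sleep 1
        done
        return 1
    }

Something like wait_for_rebuild_done raid_bdev1 3600 corresponds to the (( SECONDS < timeout )) / sleep 1 loop traced above.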
00:38:35.649 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:35.649 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:35.907 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:35.907 "name": "raid_bdev1", 00:38:35.907 "uuid": "3aaf89df-75d5-449b-9811-8cf7455cc9c4", 00:38:35.907 "strip_size_kb": 0, 00:38:35.907 "state": "online", 00:38:35.907 "raid_level": "raid1", 00:38:35.907 "superblock": true, 00:38:35.907 "num_base_bdevs": 2, 00:38:35.907 "num_base_bdevs_discovered": 2, 00:38:35.907 "num_base_bdevs_operational": 2, 00:38:35.907 "base_bdevs_list": [ 00:38:35.907 { 00:38:35.907 "name": "spare", 00:38:35.907 "uuid": "618ca72e-9e79-5e4f-8eb2-7494ffd06457", 00:38:35.907 "is_configured": true, 00:38:35.907 "data_offset": 256, 00:38:35.907 "data_size": 7936 00:38:35.907 }, 00:38:35.907 { 00:38:35.907 "name": "BaseBdev2", 00:38:35.907 "uuid": "004e86e9-8af4-5a06-b478-d91f15852545", 00:38:35.907 "is_configured": true, 00:38:35.907 "data_offset": 256, 00:38:35.907 "data_size": 7936 00:38:35.907 } 00:38:35.907 ] 00:38:35.907 }' 00:38:35.907 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:35.907 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:35.907 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:35.907 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:35.907 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:35.907 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:35.907 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:35.907 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:35.907 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:35.907 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:35.907 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:35.907 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:35.907 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:35.907 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:35.907 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:35.907 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:36.166 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:36.166 "name": "raid_bdev1", 00:38:36.166 "uuid": 
"3aaf89df-75d5-449b-9811-8cf7455cc9c4", 00:38:36.166 "strip_size_kb": 0, 00:38:36.166 "state": "online", 00:38:36.166 "raid_level": "raid1", 00:38:36.166 "superblock": true, 00:38:36.166 "num_base_bdevs": 2, 00:38:36.166 "num_base_bdevs_discovered": 2, 00:38:36.166 "num_base_bdevs_operational": 2, 00:38:36.166 "base_bdevs_list": [ 00:38:36.166 { 00:38:36.166 "name": "spare", 00:38:36.166 "uuid": "618ca72e-9e79-5e4f-8eb2-7494ffd06457", 00:38:36.166 "is_configured": true, 00:38:36.166 "data_offset": 256, 00:38:36.166 "data_size": 7936 00:38:36.166 }, 00:38:36.166 { 00:38:36.166 "name": "BaseBdev2", 00:38:36.166 "uuid": "004e86e9-8af4-5a06-b478-d91f15852545", 00:38:36.166 "is_configured": true, 00:38:36.166 "data_offset": 256, 00:38:36.166 "data_size": 7936 00:38:36.166 } 00:38:36.166 ] 00:38:36.166 }' 00:38:36.166 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:36.166 01:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:36.732 01:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:38:36.991 [2024-07-25 01:04:59.556881] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:36.991 [2024-07-25 01:04:59.556912] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:36.991 [2024-07-25 01:04:59.557007] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:36.991 [2024-07-25 01:04:59.557070] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:36.991 [2024-07-25 01:04:59.557079] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:38:36.991 01:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # jq length 00:38:36.991 01:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:37.249 01:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:38:37.249 01:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:38:37.249 01:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:38:37.249 01:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:38:37.249 01:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:38:37.249 01:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:38:37.250 01:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:37.250 01:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:37.250 01:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:37.250 01:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:38:37.250 01:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:37.250 01:04:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:37.250 01:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:38:37.508 /dev/nbd0 00:38:37.508 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:37.508 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:37.508 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:38:37.508 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:38:37.508 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:38:37.508 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:38:37.508 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:38:37.508 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:38:37.508 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:38:37.508 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:38:37.508 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:37.508 1+0 records in 00:38:37.508 1+0 records out 00:38:37.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000612732 s, 6.7 MB/s 00:38:37.508 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:37.508 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:38:37.508 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:37.508 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:38:37.508 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:38:37.508 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:37.508 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:37.508 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:38:37.766 /dev/nbd1 00:38:37.766 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:38:37.766 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:38:37.766 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:38:37.766 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:38:37.766 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:38:37.766 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:38:37.766 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:38:37.766 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:38:37.766 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:38:37.766 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:38:37.766 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:37.766 1+0 records in 00:38:37.766 1+0 records out 00:38:37.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582278 s, 7.0 MB/s 00:38:37.766 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:37.766 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:38:37.766 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:37.766 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:38:37.766 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:38:37.766 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:37.766 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:37.766 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:38:38.024 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:38:38.024 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:38:38.024 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:38.024 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:38.024 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:38:38.024 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:38.024 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:38:38.282 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:38.282 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:38.282 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:38.282 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:38.282 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:38.282 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:38.282 01:05:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:38:38.282 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:38:38.282 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:38.282 01:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:38:38.540 01:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:38:38.540 01:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:38:38.540 01:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:38:38.540 01:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:38.540 01:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:38.540 01:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:38:38.540 01:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:38:38.540 01:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:38:38.540 01:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:38:38.540 01:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:38:38.799 01:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:38:39.057 [2024-07-25 01:05:01.502085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:39.057 [2024-07-25 01:05:01.502176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:39.057 [2024-07-25 01:05:01.502238] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:38:39.057 [2024-07-25 01:05:01.502258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:39.057 [2024-07-25 01:05:01.504337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:39.057 [2024-07-25 01:05:01.504387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:39.057 [2024-07-25 01:05:01.504489] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:38:39.057 [2024-07-25 01:05:01.504543] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:39.057 [2024-07-25 01:05:01.504677] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:39.057 spare 00:38:39.057 01:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:39.058 01:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:39.058 01:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:39.058 01:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:38:39.058 01:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:39.058 01:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:39.058 01:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:39.058 01:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:39.058 01:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:39.058 01:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:39.058 01:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:39.058 01:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:39.058 [2024-07-25 01:05:01.604757] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:38:39.058 [2024-07-25 01:05:01.604777] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:38:39.058 [2024-07-25 01:05:01.604914] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:38:39.058 [2024-07-25 01:05:01.605036] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:38:39.058 [2024-07-25 01:05:01.605049] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:38:39.058 [2024-07-25 01:05:01.605145] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:39.316 01:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:39.316 "name": "raid_bdev1", 00:38:39.316 "uuid": "3aaf89df-75d5-449b-9811-8cf7455cc9c4", 00:38:39.316 "strip_size_kb": 0, 00:38:39.316 "state": "online", 00:38:39.316 "raid_level": "raid1", 00:38:39.316 "superblock": true, 00:38:39.316 "num_base_bdevs": 2, 00:38:39.316 "num_base_bdevs_discovered": 2, 00:38:39.316 "num_base_bdevs_operational": 2, 00:38:39.316 "base_bdevs_list": [ 00:38:39.316 { 00:38:39.316 "name": "spare", 00:38:39.316 "uuid": "618ca72e-9e79-5e4f-8eb2-7494ffd06457", 00:38:39.316 "is_configured": true, 00:38:39.316 "data_offset": 256, 00:38:39.316 "data_size": 7936 00:38:39.316 }, 00:38:39.316 { 00:38:39.316 "name": "BaseBdev2", 00:38:39.316 "uuid": "004e86e9-8af4-5a06-b478-d91f15852545", 00:38:39.316 "is_configured": true, 00:38:39.316 "data_offset": 256, 00:38:39.316 "data_size": 7936 00:38:39.316 } 00:38:39.316 ] 00:38:39.316 }' 00:38:39.316 01:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:39.316 01:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:39.883 01:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:39.883 01:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:39.883 01:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:39.883 01:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:39.883 01:05:02 
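The verify_raid_bdev_state raid_bdev1 online raid1 0 2 call being traced here, like every other verify_raid_bdev_state call in this log, follows the same shape: fetch the bdev's JSON once via bdev_raid_get_bdevs, then compare a handful of fields against the expected values passed in. A simplified sketch under that reading (the helper name is invented, and judging by the locals it declares the real function presumably also checks the operational count and strip size):

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # check_raid_state NAME EXPECTED_STATE LEVEL DISCOVERED: one-shot assertion on bdev_raid_get_bdevs output
    check_raid_state() {
        local name=$1 expected_state=$2 level=$3 discovered=$4 info
        info=$($rpc_py bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
        [[ $(jq -r '.state' <<< "$info") == "$expected_state" ]] &&
            [[ $(jq -r '.raid_level' <<< "$info") == "$level" ]] &&
            [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") == "$discovered" ]]
    }

check_raid_state raid_bdev1 online raid1 2 would roughly correspond to the verify_raid_bdev_state raid_bdev1 online raid1 0 2 call above, the 0 there being the expected strip size, which raid1 leaves at zero.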
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:39.883 01:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:39.883 01:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:40.142 01:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:40.142 "name": "raid_bdev1", 00:38:40.142 "uuid": "3aaf89df-75d5-449b-9811-8cf7455cc9c4", 00:38:40.142 "strip_size_kb": 0, 00:38:40.142 "state": "online", 00:38:40.142 "raid_level": "raid1", 00:38:40.142 "superblock": true, 00:38:40.142 "num_base_bdevs": 2, 00:38:40.142 "num_base_bdevs_discovered": 2, 00:38:40.142 "num_base_bdevs_operational": 2, 00:38:40.142 "base_bdevs_list": [ 00:38:40.142 { 00:38:40.142 "name": "spare", 00:38:40.142 "uuid": "618ca72e-9e79-5e4f-8eb2-7494ffd06457", 00:38:40.142 "is_configured": true, 00:38:40.142 "data_offset": 256, 00:38:40.142 "data_size": 7936 00:38:40.142 }, 00:38:40.142 { 00:38:40.142 "name": "BaseBdev2", 00:38:40.142 "uuid": "004e86e9-8af4-5a06-b478-d91f15852545", 00:38:40.142 "is_configured": true, 00:38:40.142 "data_offset": 256, 00:38:40.142 "data_size": 7936 00:38:40.142 } 00:38:40.142 ] 00:38:40.142 }' 00:38:40.142 01:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:40.142 01:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:40.142 01:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:40.142 01:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:40.142 01:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:40.142 01:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:38:40.400 01:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:38:40.400 01:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:38:40.400 [2024-07-25 01:05:02.974458] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:40.400 01:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:40.400 01:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:40.400 01:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:40.400 01:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:40.400 01:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:40.400 01:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:40.400 01:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:40.400 01:05:02 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:40.400 01:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:40.400 01:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:40.400 01:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:40.401 01:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:40.659 01:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:40.659 "name": "raid_bdev1", 00:38:40.659 "uuid": "3aaf89df-75d5-449b-9811-8cf7455cc9c4", 00:38:40.659 "strip_size_kb": 0, 00:38:40.659 "state": "online", 00:38:40.659 "raid_level": "raid1", 00:38:40.659 "superblock": true, 00:38:40.659 "num_base_bdevs": 2, 00:38:40.659 "num_base_bdevs_discovered": 1, 00:38:40.659 "num_base_bdevs_operational": 1, 00:38:40.659 "base_bdevs_list": [ 00:38:40.659 { 00:38:40.659 "name": null, 00:38:40.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:40.659 "is_configured": false, 00:38:40.659 "data_offset": 256, 00:38:40.659 "data_size": 7936 00:38:40.659 }, 00:38:40.659 { 00:38:40.659 "name": "BaseBdev2", 00:38:40.659 "uuid": "004e86e9-8af4-5a06-b478-d91f15852545", 00:38:40.659 "is_configured": true, 00:38:40.659 "data_offset": 256, 00:38:40.659 "data_size": 7936 00:38:40.659 } 00:38:40.659 ] 00:38:40.659 }' 00:38:40.659 01:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:40.659 01:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:41.297 01:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:38:41.558 [2024-07-25 01:05:03.914909] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:41.558 [2024-07-25 01:05:03.915092] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:38:41.558 [2024-07-25 01:05:03.915107] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
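From this point the test repeats one scenario: re-add the stale spare (its superblock sequence number 4 is behind the array's 5, so examine re-adds it and a rebuild starts), wait a second, confirm the rebuild is targeting spare, then pull the spare out from under the array through the passthru layer to exercise the abort path. Condensed into one hedged helper; everything except the RPC method names, which all appear in this trace, is invented.

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # readd_and_interrupt: restart a rebuild onto the stale spare, then yank the spare mid-rebuild
    readd_and_interrupt() {
        $rpc_py bdev_raid_add_base_bdev raid_bdev1 spare   # stale superblock (seq 4 < 5) => rebuild starts
        sleep 1
        local target
        target=$($rpc_py bdev_raid_get_bdevs all |
            jq -r '.[] | select(.name == "raid_bdev1") | .process.target // "none"')
        [[ $target == spare ]] || return 1
        # deleting the passthru bdev underneath the raid drives the rebuild-abort / target-removed path
        $rpc_py bdev_passthru_delete spare
    }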
00:38:41.558 [2024-07-25 01:05:03.915155] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:41.558 [2024-07-25 01:05:03.928985] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1dc0 00:38:41.558 [2024-07-25 01:05:03.930895] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:41.558 01:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # sleep 1 00:38:42.495 01:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:42.495 01:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:42.495 01:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:42.495 01:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:42.495 01:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:42.495 01:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:42.495 01:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:42.754 01:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:42.754 "name": "raid_bdev1", 00:38:42.754 "uuid": "3aaf89df-75d5-449b-9811-8cf7455cc9c4", 00:38:42.754 "strip_size_kb": 0, 00:38:42.754 "state": "online", 00:38:42.754 "raid_level": "raid1", 00:38:42.754 "superblock": true, 00:38:42.754 "num_base_bdevs": 2, 00:38:42.754 "num_base_bdevs_discovered": 2, 00:38:42.754 "num_base_bdevs_operational": 2, 00:38:42.754 "process": { 00:38:42.754 "type": "rebuild", 00:38:42.754 "target": "spare", 00:38:42.754 "progress": { 00:38:42.754 "blocks": 3072, 00:38:42.754 "percent": 38 00:38:42.754 } 00:38:42.754 }, 00:38:42.754 "base_bdevs_list": [ 00:38:42.754 { 00:38:42.754 "name": "spare", 00:38:42.754 "uuid": "618ca72e-9e79-5e4f-8eb2-7494ffd06457", 00:38:42.754 "is_configured": true, 00:38:42.754 "data_offset": 256, 00:38:42.754 "data_size": 7936 00:38:42.754 }, 00:38:42.754 { 00:38:42.754 "name": "BaseBdev2", 00:38:42.754 "uuid": "004e86e9-8af4-5a06-b478-d91f15852545", 00:38:42.754 "is_configured": true, 00:38:42.754 "data_offset": 256, 00:38:42.754 "data_size": 7936 00:38:42.754 } 00:38:42.754 ] 00:38:42.754 }' 00:38:42.754 01:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:42.754 01:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:42.754 01:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:42.754 01:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:42.754 01:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:38:43.013 [2024-07-25 01:05:05.460957] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:43.013 [2024-07-25 01:05:05.540017] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid 
bdev raid_bdev1: No such device 00:38:43.013 [2024-07-25 01:05:05.540108] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:43.013 [2024-07-25 01:05:05.540124] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:43.013 [2024-07-25 01:05:05.540132] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:43.013 01:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:43.013 01:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:43.013 01:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:43.013 01:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:43.013 01:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:43.013 01:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:43.013 01:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:43.013 01:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:43.013 01:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:43.013 01:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:43.013 01:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:43.013 01:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:43.273 01:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:43.273 "name": "raid_bdev1", 00:38:43.273 "uuid": "3aaf89df-75d5-449b-9811-8cf7455cc9c4", 00:38:43.273 "strip_size_kb": 0, 00:38:43.273 "state": "online", 00:38:43.273 "raid_level": "raid1", 00:38:43.273 "superblock": true, 00:38:43.273 "num_base_bdevs": 2, 00:38:43.273 "num_base_bdevs_discovered": 1, 00:38:43.273 "num_base_bdevs_operational": 1, 00:38:43.273 "base_bdevs_list": [ 00:38:43.273 { 00:38:43.273 "name": null, 00:38:43.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:43.273 "is_configured": false, 00:38:43.273 "data_offset": 256, 00:38:43.273 "data_size": 7936 00:38:43.273 }, 00:38:43.273 { 00:38:43.273 "name": "BaseBdev2", 00:38:43.273 "uuid": "004e86e9-8af4-5a06-b478-d91f15852545", 00:38:43.273 "is_configured": true, 00:38:43.273 "data_offset": 256, 00:38:43.273 "data_size": 7936 00:38:43.273 } 00:38:43.273 ] 00:38:43.273 }' 00:38:43.273 01:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:43.273 01:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:43.841 01:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:38:44.100 [2024-07-25 01:05:06.604551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:44.100 [2024-07-25 01:05:06.604641] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:44.100 [2024-07-25 01:05:06.604675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:38:44.100 [2024-07-25 01:05:06.604699] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:44.100 [2024-07-25 01:05:06.604994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:44.100 [2024-07-25 01:05:06.605024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:44.100 [2024-07-25 01:05:06.605131] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:38:44.100 [2024-07-25 01:05:06.605142] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:38:44.100 [2024-07-25 01:05:06.605150] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:38:44.100 [2024-07-25 01:05:06.605193] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:44.100 [2024-07-25 01:05:06.618697] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c2100 00:38:44.100 spare 00:38:44.100 [2024-07-25 01:05:06.620582] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:44.100 01:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # sleep 1 00:38:45.036 01:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:45.037 01:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:45.037 01:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:45.037 01:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:45.037 01:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:45.037 01:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:45.037 01:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:45.296 01:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:45.296 "name": "raid_bdev1", 00:38:45.296 "uuid": "3aaf89df-75d5-449b-9811-8cf7455cc9c4", 00:38:45.296 "strip_size_kb": 0, 00:38:45.296 "state": "online", 00:38:45.296 "raid_level": "raid1", 00:38:45.296 "superblock": true, 00:38:45.296 "num_base_bdevs": 2, 00:38:45.296 "num_base_bdevs_discovered": 2, 00:38:45.296 "num_base_bdevs_operational": 2, 00:38:45.296 "process": { 00:38:45.296 "type": "rebuild", 00:38:45.296 "target": "spare", 00:38:45.296 "progress": { 00:38:45.296 "blocks": 3072, 00:38:45.296 "percent": 38 00:38:45.296 } 00:38:45.296 }, 00:38:45.296 "base_bdevs_list": [ 00:38:45.296 { 00:38:45.296 "name": "spare", 00:38:45.296 "uuid": "618ca72e-9e79-5e4f-8eb2-7494ffd06457", 00:38:45.296 "is_configured": true, 00:38:45.296 "data_offset": 256, 00:38:45.296 "data_size": 7936 00:38:45.296 }, 00:38:45.296 { 00:38:45.296 "name": "BaseBdev2", 00:38:45.296 "uuid": "004e86e9-8af4-5a06-b478-d91f15852545", 00:38:45.296 "is_configured": true, 00:38:45.296 
"data_offset": 256, 00:38:45.296 "data_size": 7936 00:38:45.296 } 00:38:45.296 ] 00:38:45.296 }' 00:38:45.296 01:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:45.296 01:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:45.296 01:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:45.555 01:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:45.555 01:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:38:45.555 [2024-07-25 01:05:08.122772] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:45.555 [2024-07-25 01:05:08.129226] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:45.555 [2024-07-25 01:05:08.129308] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:45.555 [2024-07-25 01:05:08.129323] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:45.555 [2024-07-25 01:05:08.129330] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:45.555 01:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:45.555 01:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:45.555 01:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:45.555 01:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:45.555 01:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:45.555 01:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:45.555 01:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:45.555 01:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:45.555 01:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:45.555 01:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:45.555 01:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:45.555 01:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:45.815 01:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:45.815 "name": "raid_bdev1", 00:38:45.815 "uuid": "3aaf89df-75d5-449b-9811-8cf7455cc9c4", 00:38:45.815 "strip_size_kb": 0, 00:38:45.815 "state": "online", 00:38:45.815 "raid_level": "raid1", 00:38:45.815 "superblock": true, 00:38:45.815 "num_base_bdevs": 2, 00:38:45.815 "num_base_bdevs_discovered": 1, 00:38:45.815 "num_base_bdevs_operational": 1, 00:38:45.815 "base_bdevs_list": [ 00:38:45.815 { 00:38:45.815 "name": null, 00:38:45.815 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:38:45.815 "is_configured": false, 00:38:45.815 "data_offset": 256, 00:38:45.815 "data_size": 7936 00:38:45.815 }, 00:38:45.815 { 00:38:45.815 "name": "BaseBdev2", 00:38:45.815 "uuid": "004e86e9-8af4-5a06-b478-d91f15852545", 00:38:45.815 "is_configured": true, 00:38:45.815 "data_offset": 256, 00:38:45.815 "data_size": 7936 00:38:45.815 } 00:38:45.815 ] 00:38:45.815 }' 00:38:45.815 01:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:45.815 01:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:46.752 01:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:46.752 01:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:46.752 01:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:46.752 01:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:46.752 01:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:46.752 01:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:46.752 01:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:46.752 01:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:46.752 "name": "raid_bdev1", 00:38:46.752 "uuid": "3aaf89df-75d5-449b-9811-8cf7455cc9c4", 00:38:46.752 "strip_size_kb": 0, 00:38:46.752 "state": "online", 00:38:46.752 "raid_level": "raid1", 00:38:46.752 "superblock": true, 00:38:46.752 "num_base_bdevs": 2, 00:38:46.752 "num_base_bdevs_discovered": 1, 00:38:46.752 "num_base_bdevs_operational": 1, 00:38:46.752 "base_bdevs_list": [ 00:38:46.752 { 00:38:46.752 "name": null, 00:38:46.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:46.752 "is_configured": false, 00:38:46.752 "data_offset": 256, 00:38:46.752 "data_size": 7936 00:38:46.752 }, 00:38:46.752 { 00:38:46.752 "name": "BaseBdev2", 00:38:46.752 "uuid": "004e86e9-8af4-5a06-b478-d91f15852545", 00:38:46.752 "is_configured": true, 00:38:46.752 "data_offset": 256, 00:38:46.752 "data_size": 7936 00:38:46.752 } 00:38:46.752 ] 00:38:46.752 }' 00:38:46.752 01:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:46.752 01:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:46.752 01:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:46.752 01:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:46.752 01:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:38:47.012 01:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:47.269 [2024-07-25 01:05:09.816933] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev1_malloc 00:38:47.269 [2024-07-25 01:05:09.817033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:47.269 [2024-07-25 01:05:09.817070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:38:47.269 [2024-07-25 01:05:09.817093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:47.269 [2024-07-25 01:05:09.817299] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:47.269 [2024-07-25 01:05:09.817321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:47.269 [2024-07-25 01:05:09.817440] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:38:47.269 [2024-07-25 01:05:09.817473] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:38:47.269 [2024-07-25 01:05:09.817481] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:38:47.269 BaseBdev1 00:38:47.269 01:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # sleep 1 00:38:48.206 01:05:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:48.206 01:05:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:48.206 01:05:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:48.206 01:05:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:48.206 01:05:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:48.206 01:05:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:48.206 01:05:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:48.206 01:05:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:48.206 01:05:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:48.206 01:05:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:48.206 01:05:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:48.206 01:05:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:48.465 01:05:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:48.465 "name": "raid_bdev1", 00:38:48.465 "uuid": "3aaf89df-75d5-449b-9811-8cf7455cc9c4", 00:38:48.465 "strip_size_kb": 0, 00:38:48.465 "state": "online", 00:38:48.465 "raid_level": "raid1", 00:38:48.465 "superblock": true, 00:38:48.465 "num_base_bdevs": 2, 00:38:48.465 "num_base_bdevs_discovered": 1, 00:38:48.465 "num_base_bdevs_operational": 1, 00:38:48.465 "base_bdevs_list": [ 00:38:48.465 { 00:38:48.465 "name": null, 00:38:48.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:48.465 "is_configured": false, 00:38:48.465 "data_offset": 256, 00:38:48.465 "data_size": 7936 00:38:48.465 }, 00:38:48.465 { 00:38:48.465 "name": 
"BaseBdev2", 00:38:48.465 "uuid": "004e86e9-8af4-5a06-b478-d91f15852545", 00:38:48.465 "is_configured": true, 00:38:48.465 "data_offset": 256, 00:38:48.465 "data_size": 7936 00:38:48.465 } 00:38:48.465 ] 00:38:48.465 }' 00:38:48.465 01:05:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:48.465 01:05:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:49.033 01:05:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:49.033 01:05:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:49.033 01:05:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:49.033 01:05:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:49.033 01:05:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:49.033 01:05:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:49.033 01:05:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:49.292 01:05:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:49.292 "name": "raid_bdev1", 00:38:49.292 "uuid": "3aaf89df-75d5-449b-9811-8cf7455cc9c4", 00:38:49.292 "strip_size_kb": 0, 00:38:49.292 "state": "online", 00:38:49.292 "raid_level": "raid1", 00:38:49.292 "superblock": true, 00:38:49.292 "num_base_bdevs": 2, 00:38:49.292 "num_base_bdevs_discovered": 1, 00:38:49.292 "num_base_bdevs_operational": 1, 00:38:49.292 "base_bdevs_list": [ 00:38:49.292 { 00:38:49.292 "name": null, 00:38:49.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:49.292 "is_configured": false, 00:38:49.292 "data_offset": 256, 00:38:49.292 "data_size": 7936 00:38:49.292 }, 00:38:49.292 { 00:38:49.292 "name": "BaseBdev2", 00:38:49.292 "uuid": "004e86e9-8af4-5a06-b478-d91f15852545", 00:38:49.292 "is_configured": true, 00:38:49.292 "data_offset": 256, 00:38:49.292 "data_size": 7936 00:38:49.292 } 00:38:49.292 ] 00:38:49.292 }' 00:38:49.292 01:05:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:49.292 01:05:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:49.292 01:05:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:49.292 01:05:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:49.292 01:05:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:49.292 01:05:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:38:49.292 01:05:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:49.292 01:05:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@636 -- # local 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:49.292 01:05:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:49.292 01:05:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:49.551 01:05:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:49.551 01:05:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:49.551 01:05:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:49.551 01:05:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:49.551 01:05:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:38:49.551 01:05:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:49.551 [2024-07-25 01:05:12.109388] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:49.551 [2024-07-25 01:05:12.109546] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:38:49.551 [2024-07-25 01:05:12.109558] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:38:49.551 request: 00:38:49.551 { 00:38:49.551 "base_bdev": "BaseBdev1", 00:38:49.551 "raid_bdev": "raid_bdev1", 00:38:49.551 "method": "bdev_raid_add_base_bdev", 00:38:49.551 "req_id": 1 00:38:49.551 } 00:38:49.551 Got JSON-RPC error response 00:38:49.551 response: 00:38:49.551 { 00:38:49.551 "code": -22, 00:38:49.551 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:38:49.551 } 00:38:49.551 01:05:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # es=1 00:38:49.551 01:05:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:49.551 01:05:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:49.551 01:05:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:49.551 01:05:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # sleep 1 00:38:50.487 01:05:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:50.487 01:05:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:50.487 01:05:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:50.487 01:05:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:50.487 01:05:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:50.487 01:05:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:50.487 01:05:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:38:50.487 01:05:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:50.487 01:05:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:50.487 01:05:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:50.487 01:05:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:50.487 01:05:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:50.747 01:05:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:50.747 "name": "raid_bdev1", 00:38:50.747 "uuid": "3aaf89df-75d5-449b-9811-8cf7455cc9c4", 00:38:50.747 "strip_size_kb": 0, 00:38:50.747 "state": "online", 00:38:50.747 "raid_level": "raid1", 00:38:50.747 "superblock": true, 00:38:50.747 "num_base_bdevs": 2, 00:38:50.747 "num_base_bdevs_discovered": 1, 00:38:50.747 "num_base_bdevs_operational": 1, 00:38:50.747 "base_bdevs_list": [ 00:38:50.747 { 00:38:50.747 "name": null, 00:38:50.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:50.747 "is_configured": false, 00:38:50.747 "data_offset": 256, 00:38:50.747 "data_size": 7936 00:38:50.747 }, 00:38:50.747 { 00:38:50.747 "name": "BaseBdev2", 00:38:50.747 "uuid": "004e86e9-8af4-5a06-b478-d91f15852545", 00:38:50.747 "is_configured": true, 00:38:50.747 "data_offset": 256, 00:38:50.747 "data_size": 7936 00:38:50.747 } 00:38:50.747 ] 00:38:50.747 }' 00:38:50.747 01:05:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:50.747 01:05:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:51.312 01:05:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:51.312 01:05:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:51.312 01:05:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:51.312 01:05:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:51.312 01:05:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:51.312 01:05:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:51.312 01:05:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:51.568 01:05:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:51.568 "name": "raid_bdev1", 00:38:51.568 "uuid": "3aaf89df-75d5-449b-9811-8cf7455cc9c4", 00:38:51.568 "strip_size_kb": 0, 00:38:51.568 "state": "online", 00:38:51.568 "raid_level": "raid1", 00:38:51.568 "superblock": true, 00:38:51.568 "num_base_bdevs": 2, 00:38:51.568 "num_base_bdevs_discovered": 1, 00:38:51.568 "num_base_bdevs_operational": 1, 00:38:51.568 "base_bdevs_list": [ 00:38:51.568 { 00:38:51.568 "name": null, 00:38:51.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:51.568 "is_configured": false, 00:38:51.568 "data_offset": 256, 00:38:51.568 "data_size": 7936 
00:38:51.568 }, 00:38:51.568 { 00:38:51.568 "name": "BaseBdev2", 00:38:51.568 "uuid": "004e86e9-8af4-5a06-b478-d91f15852545", 00:38:51.568 "is_configured": true, 00:38:51.568 "data_offset": 256, 00:38:51.568 "data_size": 7936 00:38:51.568 } 00:38:51.568 ] 00:38:51.568 }' 00:38:51.568 01:05:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:51.568 01:05:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:51.568 01:05:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:51.568 01:05:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:51.568 01:05:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@782 -- # killprocess 161761 00:38:51.568 01:05:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@948 -- # '[' -z 161761 ']' 00:38:51.568 01:05:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 161761 00:38:51.568 01:05:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:38:51.568 01:05:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:51.568 01:05:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 161761 00:38:51.892 01:05:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:51.892 01:05:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:51.892 01:05:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 161761' 00:38:51.892 killing process with pid 161761 00:38:51.892 01:05:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 161761 00:38:51.893 Received shutdown signal, test time was about 60.000000 seconds 00:38:51.893 00:38:51.893 Latency(us) 00:38:51.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:51.893 =================================================================================================================== 00:38:51.893 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:51.893 [2024-07-25 01:05:14.234879] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:51.893 [2024-07-25 01:05:14.234989] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:51.893 [2024-07-25 01:05:14.235037] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:51.893 [2024-07-25 01:05:14.235046] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:38:51.893 01:05:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 161761 00:38:51.893 [2024-07-25 01:05:14.525410] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:53.270 ************************************ 00:38:53.270 END TEST raid_rebuild_test_sb_md_separate 00:38:53.270 ************************************ 00:38:53.270 01:05:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # return 0 00:38:53.270 00:38:53.270 real 0m30.921s 00:38:53.270 user 0m47.508s 00:38:53.270 sys 0m4.141s 
00:38:53.270 01:05:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:53.270 01:05:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:53.270 01:05:15 bdev_raid -- bdev/bdev_raid.sh@911 -- # base_malloc_params='-m 32 -i' 00:38:53.270 01:05:15 bdev_raid -- bdev/bdev_raid.sh@912 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:38:53.270 01:05:15 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:38:53.270 01:05:15 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:53.270 01:05:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:53.270 ************************************ 00:38:53.270 START TEST raid_state_function_test_sb_md_interleaved 00:38:53.270 ************************************ 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # 
strip_size=0 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=162636 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 162636' 00:38:53.270 Process raid pid: 162636 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 162636 /var/tmp/spdk-raid.sock 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 162636 ']' 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:38:53.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:53.270 01:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:53.270 [2024-07-25 01:05:15.893194] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:38:53.270 [2024-07-25 01:05:15.893409] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:53.529 [2024-07-25 01:05:16.071489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:53.788 [2024-07-25 01:05:16.274678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:54.047 [2024-07-25 01:05:16.478367] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:54.306 01:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:54.306 01:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:38:54.306 01:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:38:54.306 [2024-07-25 01:05:16.892561] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:38:54.306 [2024-07-25 01:05:16.892665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:38:54.306 [2024-07-25 01:05:16.892676] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:54.306 [2024-07-25 01:05:16.892702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:54.306 01:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:38:54.306 01:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:38:54.306 01:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:38:54.306 01:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:54.306 01:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:54.306 01:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:54.306 01:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:54.306 01:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:54.306 01:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:54.306 01:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:54.306 01:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:54.306 01:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:54.564 01:05:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:54.564 "name": "Existed_Raid", 00:38:54.564 "uuid": "b6b07be8-20a5-4894-a938-961dc9495f6e", 
00:38:54.564 "strip_size_kb": 0, 00:38:54.564 "state": "configuring", 00:38:54.564 "raid_level": "raid1", 00:38:54.564 "superblock": true, 00:38:54.564 "num_base_bdevs": 2, 00:38:54.564 "num_base_bdevs_discovered": 0, 00:38:54.564 "num_base_bdevs_operational": 2, 00:38:54.564 "base_bdevs_list": [ 00:38:54.564 { 00:38:54.564 "name": "BaseBdev1", 00:38:54.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:54.564 "is_configured": false, 00:38:54.564 "data_offset": 0, 00:38:54.564 "data_size": 0 00:38:54.565 }, 00:38:54.565 { 00:38:54.565 "name": "BaseBdev2", 00:38:54.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:54.565 "is_configured": false, 00:38:54.565 "data_offset": 0, 00:38:54.565 "data_size": 0 00:38:54.565 } 00:38:54.565 ] 00:38:54.565 }' 00:38:54.565 01:05:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:54.565 01:05:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:55.132 01:05:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:38:55.132 [2024-07-25 01:05:17.764612] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:38:55.132 [2024-07-25 01:05:17.764645] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:38:55.132 01:05:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:38:55.391 [2024-07-25 01:05:18.028684] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:38:55.391 [2024-07-25 01:05:18.028757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:38:55.391 [2024-07-25 01:05:18.028766] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:55.391 [2024-07-25 01:05:18.028791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:55.650 01:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:38:55.650 [2024-07-25 01:05:18.239715] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:55.650 BaseBdev1 00:38:55.650 01:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:38:55.650 01:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:38:55.650 01:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:38:55.650 01:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:38:55.650 01:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:38:55.650 01:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:38:55.650 01:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:38:55.909 01:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:38:56.169 [ 00:38:56.169 { 00:38:56.169 "name": "BaseBdev1", 00:38:56.169 "aliases": [ 00:38:56.169 "95040dc8-448e-45a9-9199-0d99451102a0" 00:38:56.169 ], 00:38:56.169 "product_name": "Malloc disk", 00:38:56.169 "block_size": 4128, 00:38:56.169 "num_blocks": 8192, 00:38:56.169 "uuid": "95040dc8-448e-45a9-9199-0d99451102a0", 00:38:56.169 "md_size": 32, 00:38:56.169 "md_interleave": true, 00:38:56.169 "dif_type": 0, 00:38:56.169 "assigned_rate_limits": { 00:38:56.169 "rw_ios_per_sec": 0, 00:38:56.169 "rw_mbytes_per_sec": 0, 00:38:56.169 "r_mbytes_per_sec": 0, 00:38:56.169 "w_mbytes_per_sec": 0 00:38:56.169 }, 00:38:56.169 "claimed": true, 00:38:56.169 "claim_type": "exclusive_write", 00:38:56.169 "zoned": false, 00:38:56.169 "supported_io_types": { 00:38:56.169 "read": true, 00:38:56.169 "write": true, 00:38:56.169 "unmap": true, 00:38:56.169 "flush": true, 00:38:56.169 "reset": true, 00:38:56.169 "nvme_admin": false, 00:38:56.169 "nvme_io": false, 00:38:56.169 "nvme_io_md": false, 00:38:56.169 "write_zeroes": true, 00:38:56.169 "zcopy": true, 00:38:56.169 "get_zone_info": false, 00:38:56.169 "zone_management": false, 00:38:56.169 "zone_append": false, 00:38:56.169 "compare": false, 00:38:56.169 "compare_and_write": false, 00:38:56.169 "abort": true, 00:38:56.169 "seek_hole": false, 00:38:56.169 "seek_data": false, 00:38:56.169 "copy": true, 00:38:56.169 "nvme_iov_md": false 00:38:56.169 }, 00:38:56.169 "memory_domains": [ 00:38:56.169 { 00:38:56.169 "dma_device_id": "system", 00:38:56.169 "dma_device_type": 1 00:38:56.169 }, 00:38:56.169 { 00:38:56.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:56.169 "dma_device_type": 2 00:38:56.169 } 00:38:56.169 ], 00:38:56.169 "driver_specific": {} 00:38:56.169 } 00:38:56.169 ] 00:38:56.169 01:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:38:56.169 01:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:38:56.169 01:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:38:56.169 01:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:38:56.169 01:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:56.169 01:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:56.169 01:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:56.169 01:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:56.169 01:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:56.169 01:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:56.169 01:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:56.169 01:05:18 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:56.169 01:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:56.428 01:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:56.428 "name": "Existed_Raid", 00:38:56.428 "uuid": "a1c2b42f-fad1-48b4-b6f3-48206fecbba9", 00:38:56.428 "strip_size_kb": 0, 00:38:56.428 "state": "configuring", 00:38:56.428 "raid_level": "raid1", 00:38:56.428 "superblock": true, 00:38:56.428 "num_base_bdevs": 2, 00:38:56.428 "num_base_bdevs_discovered": 1, 00:38:56.428 "num_base_bdevs_operational": 2, 00:38:56.428 "base_bdevs_list": [ 00:38:56.428 { 00:38:56.428 "name": "BaseBdev1", 00:38:56.428 "uuid": "95040dc8-448e-45a9-9199-0d99451102a0", 00:38:56.428 "is_configured": true, 00:38:56.428 "data_offset": 256, 00:38:56.428 "data_size": 7936 00:38:56.428 }, 00:38:56.428 { 00:38:56.428 "name": "BaseBdev2", 00:38:56.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:56.428 "is_configured": false, 00:38:56.428 "data_offset": 0, 00:38:56.428 "data_size": 0 00:38:56.428 } 00:38:56.428 ] 00:38:56.428 }' 00:38:56.428 01:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:56.428 01:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:56.996 01:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:38:56.996 [2024-07-25 01:05:19.616021] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:38:56.996 [2024-07-25 01:05:19.616077] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:38:56.996 01:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:38:57.255 [2024-07-25 01:05:19.868089] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:57.255 [2024-07-25 01:05:19.870072] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:57.255 [2024-07-25 01:05:19.870143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:57.255 01:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:38:57.255 01:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:38:57.255 01:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:38:57.255 01:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:38:57.255 01:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:38:57.255 01:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:57.255 01:05:19 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:57.255 01:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:57.256 01:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:57.256 01:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:57.256 01:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:57.256 01:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:57.256 01:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:57.256 01:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:57.514 01:05:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:57.514 "name": "Existed_Raid", 00:38:57.514 "uuid": "b0683d23-ed88-45be-9f71-1c51a5d5ca18", 00:38:57.514 "strip_size_kb": 0, 00:38:57.514 "state": "configuring", 00:38:57.514 "raid_level": "raid1", 00:38:57.514 "superblock": true, 00:38:57.514 "num_base_bdevs": 2, 00:38:57.514 "num_base_bdevs_discovered": 1, 00:38:57.514 "num_base_bdevs_operational": 2, 00:38:57.514 "base_bdevs_list": [ 00:38:57.514 { 00:38:57.514 "name": "BaseBdev1", 00:38:57.514 "uuid": "95040dc8-448e-45a9-9199-0d99451102a0", 00:38:57.514 "is_configured": true, 00:38:57.514 "data_offset": 256, 00:38:57.514 "data_size": 7936 00:38:57.514 }, 00:38:57.514 { 00:38:57.514 "name": "BaseBdev2", 00:38:57.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:57.514 "is_configured": false, 00:38:57.514 "data_offset": 0, 00:38:57.514 "data_size": 0 00:38:57.514 } 00:38:57.514 ] 00:38:57.514 }' 00:38:57.514 01:05:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:57.514 01:05:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:58.081 01:05:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:38:58.340 [2024-07-25 01:05:20.859207] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:58.340 [2024-07-25 01:05:20.859420] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:38:58.340 [2024-07-25 01:05:20.859432] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:38:58.340 [2024-07-25 01:05:20.859526] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:38:58.340 [2024-07-25 01:05:20.859609] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:38:58.340 [2024-07-25 01:05:20.859617] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:38:58.340 [2024-07-25 01:05:20.859675] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:58.340 BaseBdev2 00:38:58.340 01:05:20 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:38:58.340 01:05:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:38:58.340 01:05:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:38:58.340 01:05:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:38:58.340 01:05:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:38:58.340 01:05:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:38:58.340 01:05:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:38:58.598 01:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:38:58.857 [ 00:38:58.857 { 00:38:58.857 "name": "BaseBdev2", 00:38:58.857 "aliases": [ 00:38:58.857 "0ef8d536-fdae-4582-93b7-f21a24774e2b" 00:38:58.857 ], 00:38:58.857 "product_name": "Malloc disk", 00:38:58.857 "block_size": 4128, 00:38:58.857 "num_blocks": 8192, 00:38:58.857 "uuid": "0ef8d536-fdae-4582-93b7-f21a24774e2b", 00:38:58.857 "md_size": 32, 00:38:58.857 "md_interleave": true, 00:38:58.857 "dif_type": 0, 00:38:58.857 "assigned_rate_limits": { 00:38:58.857 "rw_ios_per_sec": 0, 00:38:58.857 "rw_mbytes_per_sec": 0, 00:38:58.857 "r_mbytes_per_sec": 0, 00:38:58.857 "w_mbytes_per_sec": 0 00:38:58.857 }, 00:38:58.857 "claimed": true, 00:38:58.857 "claim_type": "exclusive_write", 00:38:58.857 "zoned": false, 00:38:58.857 "supported_io_types": { 00:38:58.857 "read": true, 00:38:58.857 "write": true, 00:38:58.857 "unmap": true, 00:38:58.857 "flush": true, 00:38:58.857 "reset": true, 00:38:58.857 "nvme_admin": false, 00:38:58.857 "nvme_io": false, 00:38:58.857 "nvme_io_md": false, 00:38:58.857 "write_zeroes": true, 00:38:58.857 "zcopy": true, 00:38:58.857 "get_zone_info": false, 00:38:58.857 "zone_management": false, 00:38:58.857 "zone_append": false, 00:38:58.857 "compare": false, 00:38:58.857 "compare_and_write": false, 00:38:58.857 "abort": true, 00:38:58.857 "seek_hole": false, 00:38:58.857 "seek_data": false, 00:38:58.857 "copy": true, 00:38:58.857 "nvme_iov_md": false 00:38:58.857 }, 00:38:58.857 "memory_domains": [ 00:38:58.857 { 00:38:58.857 "dma_device_id": "system", 00:38:58.857 "dma_device_type": 1 00:38:58.857 }, 00:38:58.857 { 00:38:58.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:58.857 "dma_device_type": 2 00:38:58.857 } 00:38:58.857 ], 00:38:58.857 "driver_specific": {} 00:38:58.857 } 00:38:58.857 ] 00:38:58.857 01:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:38:58.857 01:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:38:58.857 01:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:38:58.857 01:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:38:58.857 01:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:38:58.857 01:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:58.857 01:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:58.857 01:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:58.857 01:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:58.857 01:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:58.857 01:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:58.857 01:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:58.857 01:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:58.858 01:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:58.858 01:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:59.117 01:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:59.117 "name": "Existed_Raid", 00:38:59.117 "uuid": "b0683d23-ed88-45be-9f71-1c51a5d5ca18", 00:38:59.117 "strip_size_kb": 0, 00:38:59.117 "state": "online", 00:38:59.117 "raid_level": "raid1", 00:38:59.117 "superblock": true, 00:38:59.117 "num_base_bdevs": 2, 00:38:59.117 "num_base_bdevs_discovered": 2, 00:38:59.117 "num_base_bdevs_operational": 2, 00:38:59.117 "base_bdevs_list": [ 00:38:59.117 { 00:38:59.117 "name": "BaseBdev1", 00:38:59.117 "uuid": "95040dc8-448e-45a9-9199-0d99451102a0", 00:38:59.117 "is_configured": true, 00:38:59.117 "data_offset": 256, 00:38:59.117 "data_size": 7936 00:38:59.117 }, 00:38:59.117 { 00:38:59.117 "name": "BaseBdev2", 00:38:59.117 "uuid": "0ef8d536-fdae-4582-93b7-f21a24774e2b", 00:38:59.117 "is_configured": true, 00:38:59.117 "data_offset": 256, 00:38:59.117 "data_size": 7936 00:38:59.117 } 00:38:59.117 ] 00:38:59.117 }' 00:38:59.117 01:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:59.117 01:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:59.685 01:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:38:59.685 01:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:38:59.685 01:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:38:59.685 01:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:38:59.685 01:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:38:59.685 01:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:38:59.685 01:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:38:59.685 01:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:38:59.685 [2024-07-25 01:05:22.307837] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:59.685 01:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:38:59.685 "name": "Existed_Raid", 00:38:59.685 "aliases": [ 00:38:59.685 "b0683d23-ed88-45be-9f71-1c51a5d5ca18" 00:38:59.685 ], 00:38:59.685 "product_name": "Raid Volume", 00:38:59.685 "block_size": 4128, 00:38:59.685 "num_blocks": 7936, 00:38:59.685 "uuid": "b0683d23-ed88-45be-9f71-1c51a5d5ca18", 00:38:59.685 "md_size": 32, 00:38:59.685 "md_interleave": true, 00:38:59.685 "dif_type": 0, 00:38:59.685 "assigned_rate_limits": { 00:38:59.685 "rw_ios_per_sec": 0, 00:38:59.685 "rw_mbytes_per_sec": 0, 00:38:59.685 "r_mbytes_per_sec": 0, 00:38:59.685 "w_mbytes_per_sec": 0 00:38:59.685 }, 00:38:59.685 "claimed": false, 00:38:59.685 "zoned": false, 00:38:59.685 "supported_io_types": { 00:38:59.685 "read": true, 00:38:59.685 "write": true, 00:38:59.685 "unmap": false, 00:38:59.685 "flush": false, 00:38:59.685 "reset": true, 00:38:59.685 "nvme_admin": false, 00:38:59.685 "nvme_io": false, 00:38:59.685 "nvme_io_md": false, 00:38:59.685 "write_zeroes": true, 00:38:59.685 "zcopy": false, 00:38:59.685 "get_zone_info": false, 00:38:59.685 "zone_management": false, 00:38:59.685 "zone_append": false, 00:38:59.685 "compare": false, 00:38:59.685 "compare_and_write": false, 00:38:59.685 "abort": false, 00:38:59.685 "seek_hole": false, 00:38:59.685 "seek_data": false, 00:38:59.685 "copy": false, 00:38:59.685 "nvme_iov_md": false 00:38:59.685 }, 00:38:59.685 "memory_domains": [ 00:38:59.685 { 00:38:59.685 "dma_device_id": "system", 00:38:59.685 "dma_device_type": 1 00:38:59.685 }, 00:38:59.685 { 00:38:59.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:59.685 "dma_device_type": 2 00:38:59.685 }, 00:38:59.685 { 00:38:59.685 "dma_device_id": "system", 00:38:59.685 "dma_device_type": 1 00:38:59.685 }, 00:38:59.685 { 00:38:59.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:59.685 "dma_device_type": 2 00:38:59.685 } 00:38:59.685 ], 00:38:59.685 "driver_specific": { 00:38:59.685 "raid": { 00:38:59.685 "uuid": "b0683d23-ed88-45be-9f71-1c51a5d5ca18", 00:38:59.685 "strip_size_kb": 0, 00:38:59.685 "state": "online", 00:38:59.685 "raid_level": "raid1", 00:38:59.685 "superblock": true, 00:38:59.685 "num_base_bdevs": 2, 00:38:59.685 "num_base_bdevs_discovered": 2, 00:38:59.685 "num_base_bdevs_operational": 2, 00:38:59.685 "base_bdevs_list": [ 00:38:59.685 { 00:38:59.685 "name": "BaseBdev1", 00:38:59.685 "uuid": "95040dc8-448e-45a9-9199-0d99451102a0", 00:38:59.685 "is_configured": true, 00:38:59.685 "data_offset": 256, 00:38:59.685 "data_size": 7936 00:38:59.685 }, 00:38:59.685 { 00:38:59.685 "name": "BaseBdev2", 00:38:59.685 "uuid": "0ef8d536-fdae-4582-93b7-f21a24774e2b", 00:38:59.685 "is_configured": true, 00:38:59.685 "data_offset": 256, 00:38:59.685 "data_size": 7936 00:38:59.685 } 00:38:59.685 ] 00:38:59.685 } 00:38:59.685 } 00:38:59.685 }' 00:38:59.686 01:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:38:59.945 01:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # 
base_bdev_names='BaseBdev1 00:38:59.945 BaseBdev2' 00:38:59.945 01:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:38:59.945 01:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:38:59.945 01:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:38:59.945 01:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:38:59.945 "name": "BaseBdev1", 00:38:59.945 "aliases": [ 00:38:59.945 "95040dc8-448e-45a9-9199-0d99451102a0" 00:38:59.945 ], 00:38:59.945 "product_name": "Malloc disk", 00:38:59.945 "block_size": 4128, 00:38:59.945 "num_blocks": 8192, 00:38:59.945 "uuid": "95040dc8-448e-45a9-9199-0d99451102a0", 00:38:59.945 "md_size": 32, 00:38:59.945 "md_interleave": true, 00:38:59.945 "dif_type": 0, 00:38:59.945 "assigned_rate_limits": { 00:38:59.945 "rw_ios_per_sec": 0, 00:38:59.945 "rw_mbytes_per_sec": 0, 00:38:59.945 "r_mbytes_per_sec": 0, 00:38:59.945 "w_mbytes_per_sec": 0 00:38:59.945 }, 00:38:59.945 "claimed": true, 00:38:59.945 "claim_type": "exclusive_write", 00:38:59.945 "zoned": false, 00:38:59.945 "supported_io_types": { 00:38:59.945 "read": true, 00:38:59.945 "write": true, 00:38:59.945 "unmap": true, 00:38:59.945 "flush": true, 00:38:59.945 "reset": true, 00:38:59.945 "nvme_admin": false, 00:38:59.945 "nvme_io": false, 00:38:59.945 "nvme_io_md": false, 00:38:59.945 "write_zeroes": true, 00:38:59.945 "zcopy": true, 00:38:59.945 "get_zone_info": false, 00:38:59.945 "zone_management": false, 00:38:59.945 "zone_append": false, 00:38:59.945 "compare": false, 00:38:59.945 "compare_and_write": false, 00:38:59.945 "abort": true, 00:38:59.945 "seek_hole": false, 00:38:59.945 "seek_data": false, 00:38:59.945 "copy": true, 00:38:59.945 "nvme_iov_md": false 00:38:59.945 }, 00:38:59.945 "memory_domains": [ 00:38:59.945 { 00:38:59.945 "dma_device_id": "system", 00:38:59.945 "dma_device_type": 1 00:38:59.945 }, 00:38:59.945 { 00:38:59.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:59.945 "dma_device_type": 2 00:38:59.945 } 00:38:59.945 ], 00:38:59.945 "driver_specific": {} 00:38:59.945 }' 00:38:59.945 01:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:59.945 01:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:00.204 01:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:39:00.204 01:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:00.204 01:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:00.204 01:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:39:00.204 01:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:00.204 01:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:00.204 01:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:39:00.204 01:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:00.462 
01:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:00.462 01:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:39:00.462 01:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:39:00.462 01:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:39:00.462 01:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:39:00.721 01:05:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:39:00.721 "name": "BaseBdev2", 00:39:00.721 "aliases": [ 00:39:00.721 "0ef8d536-fdae-4582-93b7-f21a24774e2b" 00:39:00.721 ], 00:39:00.721 "product_name": "Malloc disk", 00:39:00.721 "block_size": 4128, 00:39:00.721 "num_blocks": 8192, 00:39:00.721 "uuid": "0ef8d536-fdae-4582-93b7-f21a24774e2b", 00:39:00.721 "md_size": 32, 00:39:00.721 "md_interleave": true, 00:39:00.721 "dif_type": 0, 00:39:00.721 "assigned_rate_limits": { 00:39:00.721 "rw_ios_per_sec": 0, 00:39:00.721 "rw_mbytes_per_sec": 0, 00:39:00.721 "r_mbytes_per_sec": 0, 00:39:00.721 "w_mbytes_per_sec": 0 00:39:00.721 }, 00:39:00.721 "claimed": true, 00:39:00.721 "claim_type": "exclusive_write", 00:39:00.721 "zoned": false, 00:39:00.721 "supported_io_types": { 00:39:00.721 "read": true, 00:39:00.721 "write": true, 00:39:00.721 "unmap": true, 00:39:00.721 "flush": true, 00:39:00.721 "reset": true, 00:39:00.721 "nvme_admin": false, 00:39:00.721 "nvme_io": false, 00:39:00.721 "nvme_io_md": false, 00:39:00.721 "write_zeroes": true, 00:39:00.721 "zcopy": true, 00:39:00.721 "get_zone_info": false, 00:39:00.721 "zone_management": false, 00:39:00.721 "zone_append": false, 00:39:00.721 "compare": false, 00:39:00.721 "compare_and_write": false, 00:39:00.721 "abort": true, 00:39:00.721 "seek_hole": false, 00:39:00.721 "seek_data": false, 00:39:00.721 "copy": true, 00:39:00.721 "nvme_iov_md": false 00:39:00.721 }, 00:39:00.721 "memory_domains": [ 00:39:00.721 { 00:39:00.721 "dma_device_id": "system", 00:39:00.721 "dma_device_type": 1 00:39:00.721 }, 00:39:00.721 { 00:39:00.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:00.721 "dma_device_type": 2 00:39:00.721 } 00:39:00.721 ], 00:39:00.721 "driver_specific": {} 00:39:00.721 }' 00:39:00.721 01:05:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:00.721 01:05:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:00.721 01:05:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:39:00.721 01:05:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:00.721 01:05:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:00.979 01:05:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:39:00.979 01:05:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:00.979 01:05:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:00.979 01:05:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:39:00.979 01:05:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:00.979 01:05:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:00.979 01:05:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:39:00.979 01:05:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:39:01.275 [2024-07-25 01:05:23.720527] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:39:01.275 01:05:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:39:01.275 01:05:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:39:01.275 01:05:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:39:01.275 01:05:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:39:01.275 01:05:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:39:01.275 01:05:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:39:01.275 01:05:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:39:01.275 01:05:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:01.275 01:05:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:01.275 01:05:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:01.275 01:05:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:01.275 01:05:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:01.275 01:05:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:01.275 01:05:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:01.275 01:05:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:01.275 01:05:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:01.275 01:05:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:01.534 01:05:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:01.534 "name": "Existed_Raid", 00:39:01.534 "uuid": "b0683d23-ed88-45be-9f71-1c51a5d5ca18", 00:39:01.534 "strip_size_kb": 0, 00:39:01.534 "state": "online", 00:39:01.534 "raid_level": "raid1", 00:39:01.534 "superblock": true, 00:39:01.534 "num_base_bdevs": 2, 00:39:01.534 "num_base_bdevs_discovered": 1, 00:39:01.534 "num_base_bdevs_operational": 1, 00:39:01.534 "base_bdevs_list": [ 00:39:01.534 { 00:39:01.534 "name": null, 
00:39:01.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:01.534 "is_configured": false, 00:39:01.534 "data_offset": 256, 00:39:01.534 "data_size": 7936 00:39:01.534 }, 00:39:01.534 { 00:39:01.534 "name": "BaseBdev2", 00:39:01.534 "uuid": "0ef8d536-fdae-4582-93b7-f21a24774e2b", 00:39:01.534 "is_configured": true, 00:39:01.534 "data_offset": 256, 00:39:01.534 "data_size": 7936 00:39:01.534 } 00:39:01.534 ] 00:39:01.534 }' 00:39:01.534 01:05:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:01.534 01:05:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:02.102 01:05:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:39:02.102 01:05:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:39:02.102 01:05:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:02.102 01:05:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:39:02.360 01:05:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:39:02.360 01:05:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:39:02.360 01:05:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:39:02.618 [2024-07-25 01:05:25.041256] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:39:02.618 [2024-07-25 01:05:25.041359] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:02.618 [2024-07-25 01:05:25.141482] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:02.618 [2024-07-25 01:05:25.141531] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:02.618 [2024-07-25 01:05:25.141557] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:39:02.618 01:05:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:39:02.618 01:05:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:39:02.618 01:05:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:02.618 01:05:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:39:02.876 01:05:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:39:02.876 01:05:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:39:02.877 01:05:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:39:02.877 01:05:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 162636 00:39:02.877 01:05:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
common/autotest_common.sh@948 -- # '[' -z 162636 ']' 00:39:02.877 01:05:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 162636 00:39:02.877 01:05:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:39:02.877 01:05:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:02.877 01:05:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 162636 00:39:02.877 01:05:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:02.877 01:05:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:02.877 01:05:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 162636' 00:39:02.877 killing process with pid 162636 00:39:02.877 01:05:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 162636 00:39:02.877 [2024-07-25 01:05:25.447602] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:02.877 [2024-07-25 01:05:25.447726] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:02.877 01:05:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 162636 00:39:04.252 ************************************ 00:39:04.252 END TEST raid_state_function_test_sb_md_interleaved 00:39:04.252 ************************************ 00:39:04.252 01:05:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:39:04.252 00:39:04.252 real 0m11.002s 00:39:04.252 user 0m18.618s 00:39:04.252 sys 0m1.584s 00:39:04.252 01:05:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:04.253 01:05:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:04.253 01:05:26 bdev_raid -- bdev/bdev_raid.sh@913 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:39:04.253 01:05:26 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:39:04.253 01:05:26 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:04.253 01:05:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:04.253 ************************************ 00:39:04.253 START TEST raid_superblock_test_md_interleaved 00:39:04.253 ************************************ 00:39:04.253 01:05:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:39:04.253 01:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:39:04.253 01:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:39:04.253 01:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:39:04.253 01:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:39:04.253 01:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:39:04.253 01:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:39:04.253 01:05:26 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:39:04.253 01:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:39:04.253 01:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:39:04.253 01:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local strip_size 00:39:04.253 01:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:39:04.253 01:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:39:04.253 01:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:39:04.253 01:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:39:04.253 01:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:39:04.253 01:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # raid_pid=163000 00:39:04.253 01:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # waitforlisten 163000 /var/tmp/spdk-raid.sock 00:39:04.253 01:05:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 163000 ']' 00:39:04.253 01:05:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:39:04.253 01:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:39:04.253 01:05:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:04.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:39:04.253 01:05:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:39:04.253 01:05:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:04.253 01:05:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:04.511 [2024-07-25 01:05:26.964905] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:39:04.512 [2024-07-25 01:05:26.965120] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163000 ] 00:39:04.512 [2024-07-25 01:05:27.142457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:04.770 [2024-07-25 01:05:27.324926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:05.028 [2024-07-25 01:05:27.518215] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:05.287 01:05:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:05.287 01:05:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:39:05.287 01:05:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:39:05.287 01:05:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:39:05.287 01:05:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:39:05.287 01:05:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:39:05.287 01:05:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:39:05.287 01:05:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:39:05.287 01:05:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:39:05.287 01:05:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:39:05.287 01:05:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:39:05.546 malloc1 00:39:05.546 01:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:39:05.805 [2024-07-25 01:05:28.331873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:39:05.805 [2024-07-25 01:05:28.331983] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:05.805 [2024-07-25 01:05:28.332024] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:39:05.805 [2024-07-25 01:05:28.332049] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:05.805 [2024-07-25 01:05:28.334129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:05.805 [2024-07-25 01:05:28.334182] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:39:05.805 pt1 00:39:05.805 01:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:39:05.805 01:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:39:05.805 01:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:39:05.805 01:05:28 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:39:05.805 01:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:39:05.805 01:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:39:05.805 01:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:39:05.805 01:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:39:05.805 01:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:39:06.064 malloc2 00:39:06.064 01:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:06.323 [2024-07-25 01:05:28.767543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:06.323 [2024-07-25 01:05:28.767646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:06.323 [2024-07-25 01:05:28.767698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:39:06.323 [2024-07-25 01:05:28.767718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:06.323 [2024-07-25 01:05:28.769675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:06.323 [2024-07-25 01:05:28.769728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:06.323 pt2 00:39:06.323 01:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:39:06.323 01:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:39:06.323 01:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:39:06.323 [2024-07-25 01:05:28.947647] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:39:06.323 [2024-07-25 01:05:28.949639] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:06.323 [2024-07-25 01:05:28.949833] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:39:06.323 [2024-07-25 01:05:28.949844] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:39:06.323 [2024-07-25 01:05:28.949932] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:39:06.323 [2024-07-25 01:05:28.950014] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:39:06.323 [2024-07-25 01:05:28.950022] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:39:06.323 [2024-07-25 01:05:28.950070] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:06.323 01:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:06.323 01:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:39:06.323 01:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:06.323 01:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:06.323 01:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:06.323 01:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:39:06.323 01:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:06.323 01:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:06.323 01:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:06.323 01:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:06.323 01:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:06.323 01:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:06.587 01:05:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:06.587 "name": "raid_bdev1", 00:39:06.587 "uuid": "48b30a84-9639-48ba-93e3-b193db56bd8f", 00:39:06.587 "strip_size_kb": 0, 00:39:06.587 "state": "online", 00:39:06.587 "raid_level": "raid1", 00:39:06.587 "superblock": true, 00:39:06.587 "num_base_bdevs": 2, 00:39:06.587 "num_base_bdevs_discovered": 2, 00:39:06.587 "num_base_bdevs_operational": 2, 00:39:06.587 "base_bdevs_list": [ 00:39:06.587 { 00:39:06.587 "name": "pt1", 00:39:06.588 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:06.588 "is_configured": true, 00:39:06.588 "data_offset": 256, 00:39:06.588 "data_size": 7936 00:39:06.588 }, 00:39:06.588 { 00:39:06.588 "name": "pt2", 00:39:06.588 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:06.588 "is_configured": true, 00:39:06.588 "data_offset": 256, 00:39:06.588 "data_size": 7936 00:39:06.588 } 00:39:06.588 ] 00:39:06.588 }' 00:39:06.588 01:05:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:06.588 01:05:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:07.155 01:05:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:39:07.155 01:05:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:39:07.155 01:05:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:39:07.155 01:05:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:39:07.155 01:05:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:39:07.155 01:05:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:39:07.155 01:05:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:39:07.155 01:05:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b raid_bdev1 00:39:07.414 [2024-07-25 01:05:29.996037] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:07.414 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:39:07.414 "name": "raid_bdev1", 00:39:07.414 "aliases": [ 00:39:07.414 "48b30a84-9639-48ba-93e3-b193db56bd8f" 00:39:07.414 ], 00:39:07.414 "product_name": "Raid Volume", 00:39:07.414 "block_size": 4128, 00:39:07.414 "num_blocks": 7936, 00:39:07.414 "uuid": "48b30a84-9639-48ba-93e3-b193db56bd8f", 00:39:07.414 "md_size": 32, 00:39:07.414 "md_interleave": true, 00:39:07.414 "dif_type": 0, 00:39:07.414 "assigned_rate_limits": { 00:39:07.414 "rw_ios_per_sec": 0, 00:39:07.414 "rw_mbytes_per_sec": 0, 00:39:07.414 "r_mbytes_per_sec": 0, 00:39:07.414 "w_mbytes_per_sec": 0 00:39:07.414 }, 00:39:07.414 "claimed": false, 00:39:07.414 "zoned": false, 00:39:07.414 "supported_io_types": { 00:39:07.414 "read": true, 00:39:07.414 "write": true, 00:39:07.414 "unmap": false, 00:39:07.414 "flush": false, 00:39:07.414 "reset": true, 00:39:07.414 "nvme_admin": false, 00:39:07.414 "nvme_io": false, 00:39:07.414 "nvme_io_md": false, 00:39:07.414 "write_zeroes": true, 00:39:07.414 "zcopy": false, 00:39:07.414 "get_zone_info": false, 00:39:07.414 "zone_management": false, 00:39:07.414 "zone_append": false, 00:39:07.414 "compare": false, 00:39:07.414 "compare_and_write": false, 00:39:07.414 "abort": false, 00:39:07.414 "seek_hole": false, 00:39:07.414 "seek_data": false, 00:39:07.414 "copy": false, 00:39:07.414 "nvme_iov_md": false 00:39:07.414 }, 00:39:07.414 "memory_domains": [ 00:39:07.414 { 00:39:07.414 "dma_device_id": "system", 00:39:07.414 "dma_device_type": 1 00:39:07.414 }, 00:39:07.414 { 00:39:07.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:07.414 "dma_device_type": 2 00:39:07.414 }, 00:39:07.414 { 00:39:07.414 "dma_device_id": "system", 00:39:07.414 "dma_device_type": 1 00:39:07.414 }, 00:39:07.414 { 00:39:07.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:07.414 "dma_device_type": 2 00:39:07.414 } 00:39:07.414 ], 00:39:07.414 "driver_specific": { 00:39:07.414 "raid": { 00:39:07.414 "uuid": "48b30a84-9639-48ba-93e3-b193db56bd8f", 00:39:07.414 "strip_size_kb": 0, 00:39:07.414 "state": "online", 00:39:07.414 "raid_level": "raid1", 00:39:07.414 "superblock": true, 00:39:07.414 "num_base_bdevs": 2, 00:39:07.414 "num_base_bdevs_discovered": 2, 00:39:07.414 "num_base_bdevs_operational": 2, 00:39:07.414 "base_bdevs_list": [ 00:39:07.414 { 00:39:07.414 "name": "pt1", 00:39:07.414 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:07.414 "is_configured": true, 00:39:07.414 "data_offset": 256, 00:39:07.414 "data_size": 7936 00:39:07.414 }, 00:39:07.414 { 00:39:07.414 "name": "pt2", 00:39:07.414 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:07.414 "is_configured": true, 00:39:07.414 "data_offset": 256, 00:39:07.414 "data_size": 7936 00:39:07.414 } 00:39:07.414 ] 00:39:07.414 } 00:39:07.414 } 00:39:07.414 }' 00:39:07.414 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:39:07.414 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:39:07.414 pt2' 00:39:07.414 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:39:07.414 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:39:07.414 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:39:07.672 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:39:07.672 "name": "pt1", 00:39:07.672 "aliases": [ 00:39:07.672 "00000000-0000-0000-0000-000000000001" 00:39:07.672 ], 00:39:07.672 "product_name": "passthru", 00:39:07.672 "block_size": 4128, 00:39:07.672 "num_blocks": 8192, 00:39:07.672 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:07.672 "md_size": 32, 00:39:07.672 "md_interleave": true, 00:39:07.672 "dif_type": 0, 00:39:07.672 "assigned_rate_limits": { 00:39:07.672 "rw_ios_per_sec": 0, 00:39:07.672 "rw_mbytes_per_sec": 0, 00:39:07.673 "r_mbytes_per_sec": 0, 00:39:07.673 "w_mbytes_per_sec": 0 00:39:07.673 }, 00:39:07.673 "claimed": true, 00:39:07.673 "claim_type": "exclusive_write", 00:39:07.673 "zoned": false, 00:39:07.673 "supported_io_types": { 00:39:07.673 "read": true, 00:39:07.673 "write": true, 00:39:07.673 "unmap": true, 00:39:07.673 "flush": true, 00:39:07.673 "reset": true, 00:39:07.673 "nvme_admin": false, 00:39:07.673 "nvme_io": false, 00:39:07.673 "nvme_io_md": false, 00:39:07.673 "write_zeroes": true, 00:39:07.673 "zcopy": true, 00:39:07.673 "get_zone_info": false, 00:39:07.673 "zone_management": false, 00:39:07.673 "zone_append": false, 00:39:07.673 "compare": false, 00:39:07.673 "compare_and_write": false, 00:39:07.673 "abort": true, 00:39:07.673 "seek_hole": false, 00:39:07.673 "seek_data": false, 00:39:07.673 "copy": true, 00:39:07.673 "nvme_iov_md": false 00:39:07.673 }, 00:39:07.673 "memory_domains": [ 00:39:07.673 { 00:39:07.673 "dma_device_id": "system", 00:39:07.673 "dma_device_type": 1 00:39:07.673 }, 00:39:07.673 { 00:39:07.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:07.673 "dma_device_type": 2 00:39:07.673 } 00:39:07.673 ], 00:39:07.673 "driver_specific": { 00:39:07.673 "passthru": { 00:39:07.673 "name": "pt1", 00:39:07.673 "base_bdev_name": "malloc1" 00:39:07.673 } 00:39:07.673 } 00:39:07.673 }' 00:39:07.673 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:07.931 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:07.931 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:39:07.931 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:07.931 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:07.931 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:39:07.931 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:07.931 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:07.931 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:39:07.931 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:07.931 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:07.931 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:39:07.931 01:05:30 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:39:07.931 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:39:07.931 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:39:08.190 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:39:08.190 "name": "pt2", 00:39:08.190 "aliases": [ 00:39:08.190 "00000000-0000-0000-0000-000000000002" 00:39:08.190 ], 00:39:08.190 "product_name": "passthru", 00:39:08.190 "block_size": 4128, 00:39:08.190 "num_blocks": 8192, 00:39:08.190 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:08.190 "md_size": 32, 00:39:08.190 "md_interleave": true, 00:39:08.190 "dif_type": 0, 00:39:08.190 "assigned_rate_limits": { 00:39:08.190 "rw_ios_per_sec": 0, 00:39:08.190 "rw_mbytes_per_sec": 0, 00:39:08.190 "r_mbytes_per_sec": 0, 00:39:08.190 "w_mbytes_per_sec": 0 00:39:08.190 }, 00:39:08.190 "claimed": true, 00:39:08.190 "claim_type": "exclusive_write", 00:39:08.190 "zoned": false, 00:39:08.190 "supported_io_types": { 00:39:08.190 "read": true, 00:39:08.190 "write": true, 00:39:08.190 "unmap": true, 00:39:08.190 "flush": true, 00:39:08.190 "reset": true, 00:39:08.190 "nvme_admin": false, 00:39:08.190 "nvme_io": false, 00:39:08.190 "nvme_io_md": false, 00:39:08.190 "write_zeroes": true, 00:39:08.190 "zcopy": true, 00:39:08.190 "get_zone_info": false, 00:39:08.190 "zone_management": false, 00:39:08.190 "zone_append": false, 00:39:08.190 "compare": false, 00:39:08.190 "compare_and_write": false, 00:39:08.190 "abort": true, 00:39:08.190 "seek_hole": false, 00:39:08.190 "seek_data": false, 00:39:08.190 "copy": true, 00:39:08.190 "nvme_iov_md": false 00:39:08.190 }, 00:39:08.190 "memory_domains": [ 00:39:08.190 { 00:39:08.190 "dma_device_id": "system", 00:39:08.190 "dma_device_type": 1 00:39:08.190 }, 00:39:08.190 { 00:39:08.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:08.190 "dma_device_type": 2 00:39:08.190 } 00:39:08.190 ], 00:39:08.190 "driver_specific": { 00:39:08.190 "passthru": { 00:39:08.190 "name": "pt2", 00:39:08.190 "base_bdev_name": "malloc2" 00:39:08.190 } 00:39:08.190 } 00:39:08.190 }' 00:39:08.190 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:08.190 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:08.190 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:39:08.190 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:08.448 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:08.448 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:39:08.448 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:08.448 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:08.448 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:39:08.448 01:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:08.448 01:05:31 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:08.448 01:05:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:39:08.448 01:05:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:39:08.448 01:05:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:39:08.706 [2024-07-25 01:05:31.236637] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:08.706 01:05:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=48b30a84-9639-48ba-93e3-b193db56bd8f 00:39:08.706 01:05:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # '[' -z 48b30a84-9639-48ba-93e3-b193db56bd8f ']' 00:39:08.706 01:05:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:39:08.964 [2024-07-25 01:05:31.496381] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:08.964 [2024-07-25 01:05:31.496406] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:08.964 [2024-07-25 01:05:31.496498] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:08.964 [2024-07-25 01:05:31.496556] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:08.964 [2024-07-25 01:05:31.496564] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:39:08.964 01:05:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:08.964 01:05:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:39:09.223 01:05:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:39:09.223 01:05:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:39:09.223 01:05:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:39:09.223 01:05:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:39:09.482 01:05:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:39:09.482 01:05:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:39:09.482 01:05:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:39:09.482 01:05:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:39:09.740 01:05:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:39:09.740 01:05:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:39:09.740 01:05:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:39:09.740 01:05:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:39:09.740 01:05:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:09.740 01:05:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:09.740 01:05:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:09.740 01:05:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:09.740 01:05:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:09.740 01:05:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:09.740 01:05:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:09.740 01:05:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:39:09.740 01:05:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:39:09.999 [2024-07-25 01:05:32.580565] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:39:09.999 [2024-07-25 01:05:32.582526] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:39:10.000 [2024-07-25 01:05:32.582611] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:39:10.000 [2024-07-25 01:05:32.582690] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:39:10.000 [2024-07-25 01:05:32.582717] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:10.000 [2024-07-25 01:05:32.582726] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:39:10.000 request: 00:39:10.000 { 00:39:10.000 "name": "raid_bdev1", 00:39:10.000 "raid_level": "raid1", 00:39:10.000 "base_bdevs": [ 00:39:10.000 "malloc1", 00:39:10.000 "malloc2" 00:39:10.000 ], 00:39:10.000 "superblock": false, 00:39:10.000 "method": "bdev_raid_create", 00:39:10.000 "req_id": 1 00:39:10.000 } 00:39:10.000 Got JSON-RPC error response 00:39:10.000 response: 00:39:10.000 { 00:39:10.000 "code": -17, 00:39:10.000 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:39:10.000 } 00:39:10.000 01:05:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:39:10.000 01:05:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:10.000 01:05:32 bdev_raid.raid_superblock_test_md_interleaved 
-- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:10.000 01:05:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:10.000 01:05:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:10.000 01:05:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:39:10.298 01:05:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:39:10.298 01:05:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:39:10.298 01:05:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:39:10.574 [2024-07-25 01:05:33.012622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:39:10.574 [2024-07-25 01:05:33.012723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:10.574 [2024-07-25 01:05:33.012753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:39:10.574 [2024-07-25 01:05:33.012777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:10.574 [2024-07-25 01:05:33.014755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:10.574 [2024-07-25 01:05:33.014838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:39:10.574 [2024-07-25 01:05:33.014901] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:39:10.574 [2024-07-25 01:05:33.014964] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:39:10.574 pt1 00:39:10.574 01:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:39:10.574 01:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:10.574 01:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:39:10.575 01:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:10.575 01:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:10.575 01:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:39:10.575 01:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:10.575 01:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:10.575 01:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:10.575 01:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:10.575 01:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:10.575 01:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:10.834 01:05:33 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:10.834 "name": "raid_bdev1", 00:39:10.834 "uuid": "48b30a84-9639-48ba-93e3-b193db56bd8f", 00:39:10.834 "strip_size_kb": 0, 00:39:10.834 "state": "configuring", 00:39:10.834 "raid_level": "raid1", 00:39:10.834 "superblock": true, 00:39:10.834 "num_base_bdevs": 2, 00:39:10.834 "num_base_bdevs_discovered": 1, 00:39:10.834 "num_base_bdevs_operational": 2, 00:39:10.834 "base_bdevs_list": [ 00:39:10.834 { 00:39:10.834 "name": "pt1", 00:39:10.834 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:10.834 "is_configured": true, 00:39:10.834 "data_offset": 256, 00:39:10.834 "data_size": 7936 00:39:10.834 }, 00:39:10.834 { 00:39:10.834 "name": null, 00:39:10.834 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:10.834 "is_configured": false, 00:39:10.834 "data_offset": 256, 00:39:10.834 "data_size": 7936 00:39:10.834 } 00:39:10.834 ] 00:39:10.834 }' 00:39:10.834 01:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:10.834 01:05:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:11.401 01:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:39:11.401 01:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:39:11.401 01:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:39:11.401 01:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:11.401 [2024-07-25 01:05:33.992811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:11.401 [2024-07-25 01:05:33.992918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:11.401 [2024-07-25 01:05:33.992950] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:39:11.401 [2024-07-25 01:05:33.992975] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:11.401 [2024-07-25 01:05:33.993134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:11.401 [2024-07-25 01:05:33.993183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:11.401 [2024-07-25 01:05:33.993244] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:39:11.401 [2024-07-25 01:05:33.993264] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:11.401 [2024-07-25 01:05:33.993350] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:39:11.401 [2024-07-25 01:05:33.993358] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:39:11.401 [2024-07-25 01:05:33.993424] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:39:11.401 [2024-07-25 01:05:33.993499] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:39:11.401 [2024-07-25 01:05:33.993508] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:39:11.401 [2024-07-25 01:05:33.993559] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:11.401 pt2 00:39:11.401 
01:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:39:11.401 01:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:39:11.401 01:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:11.401 01:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:11.401 01:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:11.401 01:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:11.401 01:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:11.401 01:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:39:11.401 01:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:11.401 01:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:11.401 01:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:11.401 01:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:11.401 01:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:11.401 01:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:11.660 01:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:11.660 "name": "raid_bdev1", 00:39:11.660 "uuid": "48b30a84-9639-48ba-93e3-b193db56bd8f", 00:39:11.660 "strip_size_kb": 0, 00:39:11.660 "state": "online", 00:39:11.660 "raid_level": "raid1", 00:39:11.660 "superblock": true, 00:39:11.660 "num_base_bdevs": 2, 00:39:11.660 "num_base_bdevs_discovered": 2, 00:39:11.660 "num_base_bdevs_operational": 2, 00:39:11.660 "base_bdevs_list": [ 00:39:11.660 { 00:39:11.660 "name": "pt1", 00:39:11.660 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:11.660 "is_configured": true, 00:39:11.660 "data_offset": 256, 00:39:11.660 "data_size": 7936 00:39:11.660 }, 00:39:11.660 { 00:39:11.660 "name": "pt2", 00:39:11.660 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:11.660 "is_configured": true, 00:39:11.660 "data_offset": 256, 00:39:11.660 "data_size": 7936 00:39:11.660 } 00:39:11.660 ] 00:39:11.660 }' 00:39:11.660 01:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:11.660 01:05:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:12.227 01:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:39:12.227 01:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:39:12.227 01:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:39:12.227 01:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:39:12.227 01:05:34 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:39:12.227 01:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:39:12.227 01:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:39:12.227 01:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:39:12.486 [2024-07-25 01:05:35.045227] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:12.486 01:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:39:12.486 "name": "raid_bdev1", 00:39:12.486 "aliases": [ 00:39:12.486 "48b30a84-9639-48ba-93e3-b193db56bd8f" 00:39:12.486 ], 00:39:12.486 "product_name": "Raid Volume", 00:39:12.486 "block_size": 4128, 00:39:12.486 "num_blocks": 7936, 00:39:12.486 "uuid": "48b30a84-9639-48ba-93e3-b193db56bd8f", 00:39:12.486 "md_size": 32, 00:39:12.486 "md_interleave": true, 00:39:12.486 "dif_type": 0, 00:39:12.486 "assigned_rate_limits": { 00:39:12.486 "rw_ios_per_sec": 0, 00:39:12.486 "rw_mbytes_per_sec": 0, 00:39:12.486 "r_mbytes_per_sec": 0, 00:39:12.486 "w_mbytes_per_sec": 0 00:39:12.486 }, 00:39:12.486 "claimed": false, 00:39:12.486 "zoned": false, 00:39:12.486 "supported_io_types": { 00:39:12.486 "read": true, 00:39:12.486 "write": true, 00:39:12.486 "unmap": false, 00:39:12.486 "flush": false, 00:39:12.486 "reset": true, 00:39:12.486 "nvme_admin": false, 00:39:12.486 "nvme_io": false, 00:39:12.486 "nvme_io_md": false, 00:39:12.486 "write_zeroes": true, 00:39:12.486 "zcopy": false, 00:39:12.486 "get_zone_info": false, 00:39:12.486 "zone_management": false, 00:39:12.486 "zone_append": false, 00:39:12.486 "compare": false, 00:39:12.486 "compare_and_write": false, 00:39:12.486 "abort": false, 00:39:12.486 "seek_hole": false, 00:39:12.486 "seek_data": false, 00:39:12.486 "copy": false, 00:39:12.486 "nvme_iov_md": false 00:39:12.486 }, 00:39:12.486 "memory_domains": [ 00:39:12.486 { 00:39:12.486 "dma_device_id": "system", 00:39:12.486 "dma_device_type": 1 00:39:12.486 }, 00:39:12.486 { 00:39:12.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:12.486 "dma_device_type": 2 00:39:12.486 }, 00:39:12.486 { 00:39:12.486 "dma_device_id": "system", 00:39:12.486 "dma_device_type": 1 00:39:12.486 }, 00:39:12.486 { 00:39:12.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:12.486 "dma_device_type": 2 00:39:12.486 } 00:39:12.486 ], 00:39:12.486 "driver_specific": { 00:39:12.486 "raid": { 00:39:12.486 "uuid": "48b30a84-9639-48ba-93e3-b193db56bd8f", 00:39:12.486 "strip_size_kb": 0, 00:39:12.486 "state": "online", 00:39:12.486 "raid_level": "raid1", 00:39:12.486 "superblock": true, 00:39:12.486 "num_base_bdevs": 2, 00:39:12.486 "num_base_bdevs_discovered": 2, 00:39:12.486 "num_base_bdevs_operational": 2, 00:39:12.486 "base_bdevs_list": [ 00:39:12.486 { 00:39:12.486 "name": "pt1", 00:39:12.487 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:12.487 "is_configured": true, 00:39:12.487 "data_offset": 256, 00:39:12.487 "data_size": 7936 00:39:12.487 }, 00:39:12.487 { 00:39:12.487 "name": "pt2", 00:39:12.487 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:12.487 "is_configured": true, 00:39:12.487 "data_offset": 256, 00:39:12.487 "data_size": 7936 00:39:12.487 } 00:39:12.487 ] 00:39:12.487 } 00:39:12.487 } 00:39:12.487 }' 00:39:12.487 01:05:35 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:39:12.487 01:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:39:12.487 pt2' 00:39:12.487 01:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:39:12.487 01:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:39:12.487 01:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:39:12.746 01:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:39:12.746 "name": "pt1", 00:39:12.746 "aliases": [ 00:39:12.746 "00000000-0000-0000-0000-000000000001" 00:39:12.746 ], 00:39:12.746 "product_name": "passthru", 00:39:12.746 "block_size": 4128, 00:39:12.746 "num_blocks": 8192, 00:39:12.746 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:12.746 "md_size": 32, 00:39:12.746 "md_interleave": true, 00:39:12.746 "dif_type": 0, 00:39:12.746 "assigned_rate_limits": { 00:39:12.746 "rw_ios_per_sec": 0, 00:39:12.746 "rw_mbytes_per_sec": 0, 00:39:12.746 "r_mbytes_per_sec": 0, 00:39:12.746 "w_mbytes_per_sec": 0 00:39:12.746 }, 00:39:12.746 "claimed": true, 00:39:12.746 "claim_type": "exclusive_write", 00:39:12.746 "zoned": false, 00:39:12.746 "supported_io_types": { 00:39:12.746 "read": true, 00:39:12.746 "write": true, 00:39:12.746 "unmap": true, 00:39:12.746 "flush": true, 00:39:12.746 "reset": true, 00:39:12.746 "nvme_admin": false, 00:39:12.746 "nvme_io": false, 00:39:12.746 "nvme_io_md": false, 00:39:12.746 "write_zeroes": true, 00:39:12.746 "zcopy": true, 00:39:12.746 "get_zone_info": false, 00:39:12.746 "zone_management": false, 00:39:12.746 "zone_append": false, 00:39:12.746 "compare": false, 00:39:12.746 "compare_and_write": false, 00:39:12.746 "abort": true, 00:39:12.746 "seek_hole": false, 00:39:12.746 "seek_data": false, 00:39:12.746 "copy": true, 00:39:12.746 "nvme_iov_md": false 00:39:12.746 }, 00:39:12.746 "memory_domains": [ 00:39:12.746 { 00:39:12.746 "dma_device_id": "system", 00:39:12.746 "dma_device_type": 1 00:39:12.746 }, 00:39:12.746 { 00:39:12.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:12.746 "dma_device_type": 2 00:39:12.746 } 00:39:12.746 ], 00:39:12.746 "driver_specific": { 00:39:12.746 "passthru": { 00:39:12.746 "name": "pt1", 00:39:12.746 "base_bdev_name": "malloc1" 00:39:12.746 } 00:39:12.746 } 00:39:12.746 }' 00:39:12.746 01:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:12.746 01:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:12.746 01:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:39:12.746 01:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:13.005 01:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:13.005 01:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:39:13.005 01:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:13.005 01:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # 
jq .md_interleave 00:39:13.005 01:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:39:13.005 01:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:13.005 01:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:13.005 01:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:39:13.005 01:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:39:13.005 01:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:39:13.005 01:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:39:13.264 01:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:39:13.264 "name": "pt2", 00:39:13.264 "aliases": [ 00:39:13.264 "00000000-0000-0000-0000-000000000002" 00:39:13.264 ], 00:39:13.264 "product_name": "passthru", 00:39:13.264 "block_size": 4128, 00:39:13.264 "num_blocks": 8192, 00:39:13.264 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:13.264 "md_size": 32, 00:39:13.264 "md_interleave": true, 00:39:13.264 "dif_type": 0, 00:39:13.264 "assigned_rate_limits": { 00:39:13.264 "rw_ios_per_sec": 0, 00:39:13.264 "rw_mbytes_per_sec": 0, 00:39:13.264 "r_mbytes_per_sec": 0, 00:39:13.264 "w_mbytes_per_sec": 0 00:39:13.264 }, 00:39:13.264 "claimed": true, 00:39:13.264 "claim_type": "exclusive_write", 00:39:13.264 "zoned": false, 00:39:13.265 "supported_io_types": { 00:39:13.265 "read": true, 00:39:13.265 "write": true, 00:39:13.265 "unmap": true, 00:39:13.265 "flush": true, 00:39:13.265 "reset": true, 00:39:13.265 "nvme_admin": false, 00:39:13.265 "nvme_io": false, 00:39:13.265 "nvme_io_md": false, 00:39:13.265 "write_zeroes": true, 00:39:13.265 "zcopy": true, 00:39:13.265 "get_zone_info": false, 00:39:13.265 "zone_management": false, 00:39:13.265 "zone_append": false, 00:39:13.265 "compare": false, 00:39:13.265 "compare_and_write": false, 00:39:13.265 "abort": true, 00:39:13.265 "seek_hole": false, 00:39:13.265 "seek_data": false, 00:39:13.265 "copy": true, 00:39:13.265 "nvme_iov_md": false 00:39:13.265 }, 00:39:13.265 "memory_domains": [ 00:39:13.265 { 00:39:13.265 "dma_device_id": "system", 00:39:13.265 "dma_device_type": 1 00:39:13.265 }, 00:39:13.265 { 00:39:13.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:13.265 "dma_device_type": 2 00:39:13.265 } 00:39:13.265 ], 00:39:13.265 "driver_specific": { 00:39:13.265 "passthru": { 00:39:13.265 "name": "pt2", 00:39:13.265 "base_bdev_name": "malloc2" 00:39:13.265 } 00:39:13.265 } 00:39:13.265 }' 00:39:13.265 01:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:13.265 01:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:13.524 01:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:39:13.524 01:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:13.524 01:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:13.524 01:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:39:13.524 01:05:36 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:13.524 01:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:13.524 01:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:39:13.524 01:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:13.524 01:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:13.524 01:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:39:13.524 01:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:39:13.524 01:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:39:13.783 [2024-07-25 01:05:36.405529] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:13.783 01:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # '[' 48b30a84-9639-48ba-93e3-b193db56bd8f '!=' 48b30a84-9639-48ba-93e3-b193db56bd8f ']' 00:39:13.783 01:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:39:13.783 01:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:39:13.783 01:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:39:13.783 01:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:39:14.043 [2024-07-25 01:05:36.585263] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:39:14.043 01:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:14.043 01:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:14.043 01:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:14.043 01:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:14.043 01:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:14.043 01:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:14.043 01:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:14.043 01:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:14.043 01:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:14.043 01:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:14.043 01:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:14.043 01:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:14.302 01:05:36 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:14.302 "name": "raid_bdev1", 00:39:14.302 "uuid": "48b30a84-9639-48ba-93e3-b193db56bd8f", 00:39:14.302 "strip_size_kb": 0, 00:39:14.302 "state": "online", 00:39:14.302 "raid_level": "raid1", 00:39:14.302 "superblock": true, 00:39:14.302 "num_base_bdevs": 2, 00:39:14.302 "num_base_bdevs_discovered": 1, 00:39:14.302 "num_base_bdevs_operational": 1, 00:39:14.302 "base_bdevs_list": [ 00:39:14.302 { 00:39:14.302 "name": null, 00:39:14.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:14.302 "is_configured": false, 00:39:14.302 "data_offset": 256, 00:39:14.302 "data_size": 7936 00:39:14.302 }, 00:39:14.302 { 00:39:14.302 "name": "pt2", 00:39:14.302 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:14.302 "is_configured": true, 00:39:14.302 "data_offset": 256, 00:39:14.302 "data_size": 7936 00:39:14.302 } 00:39:14.302 ] 00:39:14.302 }' 00:39:14.302 01:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:14.302 01:05:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:14.870 01:05:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:39:15.128 [2024-07-25 01:05:37.525385] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:15.128 [2024-07-25 01:05:37.525419] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:15.128 [2024-07-25 01:05:37.525482] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:15.128 [2024-07-25 01:05:37.525528] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:15.128 [2024-07-25 01:05:37.525537] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:39:15.128 01:05:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:15.129 01:05:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:39:15.129 01:05:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:39:15.129 01:05:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:39:15.129 01:05:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:39:15.129 01:05:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:39:15.129 01:05:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:39:15.387 01:05:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:39:15.387 01:05:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:39:15.387 01:05:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:39:15.387 01:05:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:39:15.387 01:05:37 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@518 -- # i=1 00:39:15.387 01:05:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:15.646 [2024-07-25 01:05:38.073451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:15.646 [2024-07-25 01:05:38.073536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:15.646 [2024-07-25 01:05:38.073563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:39:15.646 [2024-07-25 01:05:38.073589] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:15.646 [2024-07-25 01:05:38.075581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:15.646 [2024-07-25 01:05:38.075648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:15.646 [2024-07-25 01:05:38.075734] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:39:15.646 [2024-07-25 01:05:38.075779] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:15.646 [2024-07-25 01:05:38.075852] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:39:15.646 [2024-07-25 01:05:38.075861] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:39:15.646 [2024-07-25 01:05:38.075921] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:39:15.646 [2024-07-25 01:05:38.075993] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:39:15.646 [2024-07-25 01:05:38.076002] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:39:15.646 [2024-07-25 01:05:38.076048] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:15.646 pt2 00:39:15.646 01:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:15.646 01:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:15.646 01:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:15.646 01:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:15.646 01:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:15.646 01:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:15.646 01:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:15.646 01:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:15.646 01:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:15.646 01:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:15.646 01:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:15.646 01:05:38 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:15.905 01:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:15.905 "name": "raid_bdev1", 00:39:15.905 "uuid": "48b30a84-9639-48ba-93e3-b193db56bd8f", 00:39:15.905 "strip_size_kb": 0, 00:39:15.905 "state": "online", 00:39:15.905 "raid_level": "raid1", 00:39:15.905 "superblock": true, 00:39:15.905 "num_base_bdevs": 2, 00:39:15.905 "num_base_bdevs_discovered": 1, 00:39:15.905 "num_base_bdevs_operational": 1, 00:39:15.905 "base_bdevs_list": [ 00:39:15.905 { 00:39:15.905 "name": null, 00:39:15.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:15.905 "is_configured": false, 00:39:15.905 "data_offset": 256, 00:39:15.905 "data_size": 7936 00:39:15.905 }, 00:39:15.905 { 00:39:15.905 "name": "pt2", 00:39:15.905 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:15.905 "is_configured": true, 00:39:15.905 "data_offset": 256, 00:39:15.905 "data_size": 7936 00:39:15.905 } 00:39:15.905 ] 00:39:15.905 }' 00:39:15.905 01:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:15.905 01:05:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:16.473 01:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:39:16.473 [2024-07-25 01:05:39.097642] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:16.473 [2024-07-25 01:05:39.097673] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:16.473 [2024-07-25 01:05:39.097736] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:16.473 [2024-07-25 01:05:39.097778] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:16.473 [2024-07-25 01:05:39.097787] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:39:16.473 01:05:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:16.473 01:05:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:39:16.732 01:05:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:39:16.732 01:05:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:39:16.732 01:05:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:39:16.732 01:05:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:39:16.991 [2024-07-25 01:05:39.537729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:39:16.991 [2024-07-25 01:05:39.537812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:16.991 [2024-07-25 01:05:39.537865] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:39:16.991 [2024-07-25 01:05:39.537887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:16.991 [2024-07-25 01:05:39.539857] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:16.991 [2024-07-25 01:05:39.539936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:39:16.991 [2024-07-25 01:05:39.540009] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:39:16.991 [2024-07-25 01:05:39.540063] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:39:16.992 [2024-07-25 01:05:39.540149] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:39:16.992 [2024-07-25 01:05:39.540158] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:16.992 [2024-07-25 01:05:39.540177] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state configuring 00:39:16.992 [2024-07-25 01:05:39.540233] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:16.992 [2024-07-25 01:05:39.540304] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:39:16.992 [2024-07-25 01:05:39.540313] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:39:16.992 [2024-07-25 01:05:39.540371] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:39:16.992 [2024-07-25 01:05:39.540424] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:39:16.992 [2024-07-25 01:05:39.540432] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:39:16.992 [2024-07-25 01:05:39.540479] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:16.992 pt1 00:39:16.992 01:05:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:39:16.992 01:05:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:16.992 01:05:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:16.992 01:05:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:16.992 01:05:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:16.992 01:05:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:16.992 01:05:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:16.992 01:05:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:16.992 01:05:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:16.992 01:05:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:16.992 01:05:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:16.992 01:05:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:16.992 01:05:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:17.251 01:05:39 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:17.251 "name": "raid_bdev1", 00:39:17.251 "uuid": "48b30a84-9639-48ba-93e3-b193db56bd8f", 00:39:17.251 "strip_size_kb": 0, 00:39:17.251 "state": "online", 00:39:17.251 "raid_level": "raid1", 00:39:17.251 "superblock": true, 00:39:17.251 "num_base_bdevs": 2, 00:39:17.251 "num_base_bdevs_discovered": 1, 00:39:17.251 "num_base_bdevs_operational": 1, 00:39:17.251 "base_bdevs_list": [ 00:39:17.251 { 00:39:17.251 "name": null, 00:39:17.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:17.251 "is_configured": false, 00:39:17.251 "data_offset": 256, 00:39:17.251 "data_size": 7936 00:39:17.251 }, 00:39:17.251 { 00:39:17.251 "name": "pt2", 00:39:17.251 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:17.251 "is_configured": true, 00:39:17.251 "data_offset": 256, 00:39:17.251 "data_size": 7936 00:39:17.251 } 00:39:17.251 ] 00:39:17.251 }' 00:39:17.251 01:05:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:17.251 01:05:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:17.829 01:05:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:39:17.829 01:05:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:39:18.109 01:05:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:39:18.109 01:05:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:39:18.109 01:05:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:39:18.368 [2024-07-25 01:05:40.790156] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:18.368 01:05:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' 48b30a84-9639-48ba-93e3-b193db56bd8f '!=' 48b30a84-9639-48ba-93e3-b193db56bd8f ']' 00:39:18.368 01:05:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@562 -- # killprocess 163000 00:39:18.368 01:05:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 163000 ']' 00:39:18.368 01:05:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 163000 00:39:18.368 01:05:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:39:18.368 01:05:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:18.368 01:05:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 163000 00:39:18.368 01:05:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:18.368 01:05:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:18.368 killing process with pid 163000 00:39:18.368 01:05:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 163000' 00:39:18.368 01:05:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@967 
-- # kill 163000 00:39:18.368 [2024-07-25 01:05:40.836547] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:18.368 [2024-07-25 01:05:40.836607] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:18.368 [2024-07-25 01:05:40.836650] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:18.368 [2024-07-25 01:05:40.836658] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:39:18.368 01:05:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # wait 163000 00:39:18.628 [2024-07-25 01:05:41.045531] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:20.007 01:05:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@564 -- # return 0 00:39:20.007 00:39:20.007 real 0m15.514s 00:39:20.007 user 0m27.201s 00:39:20.007 sys 0m2.422s 00:39:20.007 01:05:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:20.007 ************************************ 00:39:20.007 END TEST raid_superblock_test_md_interleaved 00:39:20.007 ************************************ 00:39:20.007 01:05:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:20.007 01:05:42 bdev_raid -- bdev/bdev_raid.sh@914 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:39:20.007 01:05:42 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:39:20.007 01:05:42 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:20.007 01:05:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:20.007 ************************************ 00:39:20.007 START TEST raid_rebuild_test_sb_md_interleaved 00:39:20.007 ************************************ 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false false 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local verify=false 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local strip_size 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local create_arg 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local data_offset 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # raid_pid=163516 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # waitforlisten 163516 /var/tmp/spdk-raid.sock 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 163516 ']' 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:20.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:20.007 01:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:20.007 [2024-07-25 01:05:42.534671] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:39:20.007 I/O size of 3145728 is greater than zero copy threshold (65536). 00:39:20.007 Zero copy mechanism will not be used. 
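The trace from this point on drives an SPDK bdevperf process over JSON-RPC to assemble the md-interleaved raid1 array that the rebuild test later degrades and rebuilds. A rough standalone sketch of that setup is below; the paths, socket, workload flags and RPC arguments are copied from the trace itself, while the rpc_get_methods polling loop is only an assumed stand-in for the harness's waitforlisten helper and is not part of this run.

    #!/usr/bin/env bash
    # Re-create the pre-test setup seen in the trace: launch bdevperf on the raid
    # RPC socket, wait for it, then build two md-interleaved base bdevs and raid1.
    SPDK=/home/vagrant/spdk_repo/spdk      # repo path as used in this run
    SOCK=/var/tmp/spdk-raid.sock

    "$SPDK/build/examples/bdevperf" -r "$SOCK" -T raid_bdev1 -t 60 \
        -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &

    # Poll the RPC socket until it answers (assumed stand-in for waitforlisten).
    until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

    # malloc -> passthru -> raid1, mirroring the RPC calls visible in the trace.
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1

    # Dump the assembled array; the harness reads this JSON to verify state,
    # raid level and base bdev counts.
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_raid_get_bdevs all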
00:39:20.007 [2024-07-25 01:05:42.534838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163516 ] 00:39:20.266 [2024-07-25 01:05:42.692450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:20.266 [2024-07-25 01:05:42.885101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:20.525 [2024-07-25 01:05:43.084174] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:20.785 01:05:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:20.785 01:05:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:39:20.785 01:05:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:39:20.785 01:05:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:39:21.043 BaseBdev1_malloc 00:39:21.043 01:05:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:39:21.301 [2024-07-25 01:05:43.865147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:39:21.301 [2024-07-25 01:05:43.865248] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:21.301 [2024-07-25 01:05:43.865304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:39:21.301 [2024-07-25 01:05:43.865324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:21.301 [2024-07-25 01:05:43.867350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:21.301 [2024-07-25 01:05:43.867401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:39:21.301 BaseBdev1 00:39:21.301 01:05:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:39:21.301 01:05:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:39:21.560 BaseBdev2_malloc 00:39:21.560 01:05:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:39:21.818 [2024-07-25 01:05:44.278427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:39:21.818 [2024-07-25 01:05:44.278534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:21.818 [2024-07-25 01:05:44.278589] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:39:21.818 [2024-07-25 01:05:44.278608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:21.818 [2024-07-25 01:05:44.280570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:21.818 [2024-07-25 01:05:44.280617] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev2 00:39:21.818 BaseBdev2 00:39:21.818 01:05:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:39:22.077 spare_malloc 00:39:22.077 01:05:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:39:22.077 spare_delay 00:39:22.077 01:05:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:39:22.335 [2024-07-25 01:05:44.849990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:22.335 [2024-07-25 01:05:44.850084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:22.335 [2024-07-25 01:05:44.850135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:39:22.335 [2024-07-25 01:05:44.850160] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:22.335 [2024-07-25 01:05:44.852110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:22.335 [2024-07-25 01:05:44.852175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:22.335 spare 00:39:22.335 01:05:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:39:22.592 [2024-07-25 01:05:45.030069] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:22.592 [2024-07-25 01:05:45.032029] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:22.593 [2024-07-25 01:05:45.032274] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:39:22.593 [2024-07-25 01:05:45.032293] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:39:22.593 [2024-07-25 01:05:45.032381] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:39:22.593 [2024-07-25 01:05:45.032443] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:39:22.593 [2024-07-25 01:05:45.032451] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:39:22.593 [2024-07-25 01:05:45.032507] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:22.593 01:05:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:22.593 01:05:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:22.593 01:05:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:22.593 01:05:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:22.593 01:05:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:22.593 01:05:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # 
local num_base_bdevs_operational=2 00:39:22.593 01:05:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:22.593 01:05:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:22.593 01:05:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:22.593 01:05:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:22.593 01:05:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:22.593 01:05:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:22.593 01:05:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:22.593 "name": "raid_bdev1", 00:39:22.593 "uuid": "4d3d74ba-7812-491e-8081-aae42bc54fbc", 00:39:22.593 "strip_size_kb": 0, 00:39:22.593 "state": "online", 00:39:22.593 "raid_level": "raid1", 00:39:22.593 "superblock": true, 00:39:22.593 "num_base_bdevs": 2, 00:39:22.593 "num_base_bdevs_discovered": 2, 00:39:22.593 "num_base_bdevs_operational": 2, 00:39:22.593 "base_bdevs_list": [ 00:39:22.593 { 00:39:22.593 "name": "BaseBdev1", 00:39:22.593 "uuid": "7b099c7a-6e3a-598f-911e-c1ebdc6ea2d7", 00:39:22.593 "is_configured": true, 00:39:22.593 "data_offset": 256, 00:39:22.593 "data_size": 7936 00:39:22.593 }, 00:39:22.593 { 00:39:22.593 "name": "BaseBdev2", 00:39:22.593 "uuid": "e0ec1d2b-8081-5db8-adb1-b1d61b41bcd3", 00:39:22.593 "is_configured": true, 00:39:22.593 "data_offset": 256, 00:39:22.593 "data_size": 7936 00:39:22.593 } 00:39:22.593 ] 00:39:22.593 }' 00:39:22.593 01:05:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:22.593 01:05:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:23.158 01:05:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:39:23.158 01:05:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:39:23.417 [2024-07-25 01:05:45.950446] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:23.417 01:05:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:39:23.417 01:05:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:23.417 01:05:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:39:23.675 01:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:39:23.675 01:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:39:23.675 01:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # '[' false = true ']' 00:39:23.675 01:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:39:23.934 [2024-07-25 01:05:46.346240] 
bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:39:23.934 01:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:23.934 01:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:23.934 01:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:23.934 01:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:23.934 01:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:23.934 01:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:23.934 01:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:23.934 01:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:23.934 01:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:23.934 01:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:23.934 01:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:23.934 01:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:23.934 01:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:23.934 "name": "raid_bdev1", 00:39:23.934 "uuid": "4d3d74ba-7812-491e-8081-aae42bc54fbc", 00:39:23.934 "strip_size_kb": 0, 00:39:23.934 "state": "online", 00:39:23.934 "raid_level": "raid1", 00:39:23.934 "superblock": true, 00:39:23.934 "num_base_bdevs": 2, 00:39:23.934 "num_base_bdevs_discovered": 1, 00:39:23.934 "num_base_bdevs_operational": 1, 00:39:23.934 "base_bdevs_list": [ 00:39:23.934 { 00:39:23.934 "name": null, 00:39:23.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:23.934 "is_configured": false, 00:39:23.934 "data_offset": 256, 00:39:23.934 "data_size": 7936 00:39:23.934 }, 00:39:23.934 { 00:39:23.934 "name": "BaseBdev2", 00:39:23.934 "uuid": "e0ec1d2b-8081-5db8-adb1-b1d61b41bcd3", 00:39:23.934 "is_configured": true, 00:39:23.934 "data_offset": 256, 00:39:23.934 "data_size": 7936 00:39:23.934 } 00:39:23.934 ] 00:39:23.934 }' 00:39:23.934 01:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:23.934 01:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:24.502 01:05:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:39:24.760 [2024-07-25 01:05:47.366510] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:24.760 [2024-07-25 01:05:47.381569] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:39:24.760 [2024-07-25 01:05:47.383476] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:24.760 01:05:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # 
sleep 1 00:39:26.136 01:05:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:26.136 01:05:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:26.136 01:05:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:26.136 01:05:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:26.136 01:05:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:26.136 01:05:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:26.136 01:05:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:26.136 01:05:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:26.136 "name": "raid_bdev1", 00:39:26.136 "uuid": "4d3d74ba-7812-491e-8081-aae42bc54fbc", 00:39:26.136 "strip_size_kb": 0, 00:39:26.137 "state": "online", 00:39:26.137 "raid_level": "raid1", 00:39:26.137 "superblock": true, 00:39:26.137 "num_base_bdevs": 2, 00:39:26.137 "num_base_bdevs_discovered": 2, 00:39:26.137 "num_base_bdevs_operational": 2, 00:39:26.137 "process": { 00:39:26.137 "type": "rebuild", 00:39:26.137 "target": "spare", 00:39:26.137 "progress": { 00:39:26.137 "blocks": 3072, 00:39:26.137 "percent": 38 00:39:26.137 } 00:39:26.137 }, 00:39:26.137 "base_bdevs_list": [ 00:39:26.137 { 00:39:26.137 "name": "spare", 00:39:26.137 "uuid": "be27ea4d-43e1-5c9c-84f3-6114cc3f6a13", 00:39:26.137 "is_configured": true, 00:39:26.137 "data_offset": 256, 00:39:26.137 "data_size": 7936 00:39:26.137 }, 00:39:26.137 { 00:39:26.137 "name": "BaseBdev2", 00:39:26.137 "uuid": "e0ec1d2b-8081-5db8-adb1-b1d61b41bcd3", 00:39:26.137 "is_configured": true, 00:39:26.137 "data_offset": 256, 00:39:26.137 "data_size": 7936 00:39:26.137 } 00:39:26.137 ] 00:39:26.137 }' 00:39:26.137 01:05:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:26.137 01:05:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:26.137 01:05:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:26.137 01:05:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:39:26.137 01:05:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:39:26.434 [2024-07-25 01:05:48.965123] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:26.434 [2024-07-25 01:05:48.992752] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:39:26.434 [2024-07-25 01:05:48.992831] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:26.434 [2024-07-25 01:05:48.992845] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:26.434 [2024-07-25 01:05:48.992852] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:39:26.434 01:05:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:26.434 01:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:26.434 01:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:26.434 01:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:26.434 01:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:26.434 01:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:26.434 01:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:26.434 01:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:26.434 01:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:26.434 01:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:26.434 01:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:26.434 01:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:26.722 01:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:26.722 "name": "raid_bdev1", 00:39:26.722 "uuid": "4d3d74ba-7812-491e-8081-aae42bc54fbc", 00:39:26.722 "strip_size_kb": 0, 00:39:26.722 "state": "online", 00:39:26.722 "raid_level": "raid1", 00:39:26.722 "superblock": true, 00:39:26.722 "num_base_bdevs": 2, 00:39:26.722 "num_base_bdevs_discovered": 1, 00:39:26.722 "num_base_bdevs_operational": 1, 00:39:26.722 "base_bdevs_list": [ 00:39:26.722 { 00:39:26.722 "name": null, 00:39:26.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:26.722 "is_configured": false, 00:39:26.722 "data_offset": 256, 00:39:26.722 "data_size": 7936 00:39:26.722 }, 00:39:26.722 { 00:39:26.722 "name": "BaseBdev2", 00:39:26.722 "uuid": "e0ec1d2b-8081-5db8-adb1-b1d61b41bcd3", 00:39:26.722 "is_configured": true, 00:39:26.722 "data_offset": 256, 00:39:26.722 "data_size": 7936 00:39:26.722 } 00:39:26.722 ] 00:39:26.722 }' 00:39:26.722 01:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:26.722 01:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:27.288 01:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:27.288 01:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:27.288 01:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:39:27.288 01:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:39:27.288 01:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:27.288 01:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:39:27.288 01:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:27.288 01:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:27.288 "name": "raid_bdev1", 00:39:27.288 "uuid": "4d3d74ba-7812-491e-8081-aae42bc54fbc", 00:39:27.288 "strip_size_kb": 0, 00:39:27.288 "state": "online", 00:39:27.288 "raid_level": "raid1", 00:39:27.288 "superblock": true, 00:39:27.288 "num_base_bdevs": 2, 00:39:27.288 "num_base_bdevs_discovered": 1, 00:39:27.288 "num_base_bdevs_operational": 1, 00:39:27.288 "base_bdevs_list": [ 00:39:27.288 { 00:39:27.288 "name": null, 00:39:27.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:27.288 "is_configured": false, 00:39:27.288 "data_offset": 256, 00:39:27.288 "data_size": 7936 00:39:27.288 }, 00:39:27.288 { 00:39:27.288 "name": "BaseBdev2", 00:39:27.288 "uuid": "e0ec1d2b-8081-5db8-adb1-b1d61b41bcd3", 00:39:27.288 "is_configured": true, 00:39:27.288 "data_offset": 256, 00:39:27.288 "data_size": 7936 00:39:27.288 } 00:39:27.288 ] 00:39:27.288 }' 00:39:27.288 01:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:27.546 01:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:39:27.546 01:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:27.546 01:05:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:27.546 01:05:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:39:27.803 [2024-07-25 01:05:50.261351] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:27.803 [2024-07-25 01:05:50.274796] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:39:27.803 [2024-07-25 01:05:50.276711] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:27.803 01:05:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:39:28.736 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:28.736 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:28.736 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:28.736 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:28.736 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:28.736 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:28.736 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:28.996 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:28.996 "name": "raid_bdev1", 00:39:28.996 "uuid": 
"4d3d74ba-7812-491e-8081-aae42bc54fbc", 00:39:28.996 "strip_size_kb": 0, 00:39:28.996 "state": "online", 00:39:28.996 "raid_level": "raid1", 00:39:28.996 "superblock": true, 00:39:28.996 "num_base_bdevs": 2, 00:39:28.996 "num_base_bdevs_discovered": 2, 00:39:28.996 "num_base_bdevs_operational": 2, 00:39:28.996 "process": { 00:39:28.996 "type": "rebuild", 00:39:28.996 "target": "spare", 00:39:28.996 "progress": { 00:39:28.996 "blocks": 3072, 00:39:28.996 "percent": 38 00:39:28.996 } 00:39:28.996 }, 00:39:28.996 "base_bdevs_list": [ 00:39:28.996 { 00:39:28.996 "name": "spare", 00:39:28.996 "uuid": "be27ea4d-43e1-5c9c-84f3-6114cc3f6a13", 00:39:28.996 "is_configured": true, 00:39:28.996 "data_offset": 256, 00:39:28.996 "data_size": 7936 00:39:28.996 }, 00:39:28.996 { 00:39:28.996 "name": "BaseBdev2", 00:39:28.996 "uuid": "e0ec1d2b-8081-5db8-adb1-b1d61b41bcd3", 00:39:28.996 "is_configured": true, 00:39:28.996 "data_offset": 256, 00:39:28.996 "data_size": 7936 00:39:28.996 } 00:39:28.996 ] 00:39:28.996 }' 00:39:28.996 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:28.996 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:28.996 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:28.996 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:39:28.996 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:39:28.996 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:39:28.996 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:39:28.996 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:39:28.996 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:39:28.996 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:39:28.996 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@705 -- # local timeout=1428 00:39:28.996 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:39:28.996 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:28.996 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:28.996 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:28.996 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:28.996 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:28.996 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:28.996 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:29.255 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:39:29.255 "name": "raid_bdev1", 00:39:29.255 "uuid": "4d3d74ba-7812-491e-8081-aae42bc54fbc", 00:39:29.255 "strip_size_kb": 0, 00:39:29.255 "state": "online", 00:39:29.255 "raid_level": "raid1", 00:39:29.255 "superblock": true, 00:39:29.255 "num_base_bdevs": 2, 00:39:29.255 "num_base_bdevs_discovered": 2, 00:39:29.255 "num_base_bdevs_operational": 2, 00:39:29.255 "process": { 00:39:29.255 "type": "rebuild", 00:39:29.255 "target": "spare", 00:39:29.255 "progress": { 00:39:29.255 "blocks": 3840, 00:39:29.255 "percent": 48 00:39:29.255 } 00:39:29.255 }, 00:39:29.255 "base_bdevs_list": [ 00:39:29.255 { 00:39:29.255 "name": "spare", 00:39:29.255 "uuid": "be27ea4d-43e1-5c9c-84f3-6114cc3f6a13", 00:39:29.255 "is_configured": true, 00:39:29.255 "data_offset": 256, 00:39:29.255 "data_size": 7936 00:39:29.255 }, 00:39:29.255 { 00:39:29.255 "name": "BaseBdev2", 00:39:29.255 "uuid": "e0ec1d2b-8081-5db8-adb1-b1d61b41bcd3", 00:39:29.255 "is_configured": true, 00:39:29.255 "data_offset": 256, 00:39:29.255 "data_size": 7936 00:39:29.255 } 00:39:29.255 ] 00:39:29.255 }' 00:39:29.255 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:29.513 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:29.513 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:29.513 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:39:29.513 01:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:39:30.449 01:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:39:30.449 01:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:30.449 01:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:30.449 01:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:30.449 01:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:30.449 01:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:30.449 01:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:30.449 01:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:30.708 01:05:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:30.708 "name": "raid_bdev1", 00:39:30.708 "uuid": "4d3d74ba-7812-491e-8081-aae42bc54fbc", 00:39:30.708 "strip_size_kb": 0, 00:39:30.708 "state": "online", 00:39:30.708 "raid_level": "raid1", 00:39:30.708 "superblock": true, 00:39:30.708 "num_base_bdevs": 2, 00:39:30.708 "num_base_bdevs_discovered": 2, 00:39:30.708 "num_base_bdevs_operational": 2, 00:39:30.708 "process": { 00:39:30.708 "type": "rebuild", 00:39:30.708 "target": "spare", 00:39:30.708 "progress": { 00:39:30.708 "blocks": 7168, 00:39:30.708 "percent": 90 00:39:30.708 } 00:39:30.708 }, 00:39:30.708 "base_bdevs_list": [ 00:39:30.708 { 00:39:30.708 "name": 
"spare", 00:39:30.708 "uuid": "be27ea4d-43e1-5c9c-84f3-6114cc3f6a13", 00:39:30.708 "is_configured": true, 00:39:30.708 "data_offset": 256, 00:39:30.708 "data_size": 7936 00:39:30.708 }, 00:39:30.708 { 00:39:30.708 "name": "BaseBdev2", 00:39:30.708 "uuid": "e0ec1d2b-8081-5db8-adb1-b1d61b41bcd3", 00:39:30.708 "is_configured": true, 00:39:30.708 "data_offset": 256, 00:39:30.708 "data_size": 7936 00:39:30.708 } 00:39:30.708 ] 00:39:30.708 }' 00:39:30.708 01:05:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:30.708 01:05:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:30.708 01:05:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:30.708 01:05:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:39:30.708 01:05:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:39:30.966 [2024-07-25 01:05:53.394616] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:39:30.966 [2024-07-25 01:05:53.394687] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:39:30.966 [2024-07-25 01:05:53.394813] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:31.899 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:39:31.899 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:31.899 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:31.899 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:31.899 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:31.899 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:31.899 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:31.899 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:31.899 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:31.899 "name": "raid_bdev1", 00:39:31.899 "uuid": "4d3d74ba-7812-491e-8081-aae42bc54fbc", 00:39:31.899 "strip_size_kb": 0, 00:39:31.899 "state": "online", 00:39:31.899 "raid_level": "raid1", 00:39:31.899 "superblock": true, 00:39:31.899 "num_base_bdevs": 2, 00:39:31.899 "num_base_bdevs_discovered": 2, 00:39:31.899 "num_base_bdevs_operational": 2, 00:39:31.899 "base_bdevs_list": [ 00:39:31.899 { 00:39:31.899 "name": "spare", 00:39:31.899 "uuid": "be27ea4d-43e1-5c9c-84f3-6114cc3f6a13", 00:39:31.899 "is_configured": true, 00:39:31.899 "data_offset": 256, 00:39:31.899 "data_size": 7936 00:39:31.899 }, 00:39:31.899 { 00:39:31.899 "name": "BaseBdev2", 00:39:31.899 "uuid": "e0ec1d2b-8081-5db8-adb1-b1d61b41bcd3", 00:39:31.899 "is_configured": true, 00:39:31.899 "data_offset": 256, 00:39:31.899 "data_size": 7936 00:39:31.899 } 00:39:31.899 ] 00:39:31.899 }' 00:39:31.899 01:05:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:32.157 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:39:32.157 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:32.157 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:39:32.157 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # break 00:39:32.157 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:32.157 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:32.157 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:39:32.157 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:39:32.157 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:32.157 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:32.157 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:32.416 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:32.416 "name": "raid_bdev1", 00:39:32.416 "uuid": "4d3d74ba-7812-491e-8081-aae42bc54fbc", 00:39:32.416 "strip_size_kb": 0, 00:39:32.416 "state": "online", 00:39:32.416 "raid_level": "raid1", 00:39:32.416 "superblock": true, 00:39:32.416 "num_base_bdevs": 2, 00:39:32.416 "num_base_bdevs_discovered": 2, 00:39:32.416 "num_base_bdevs_operational": 2, 00:39:32.416 "base_bdevs_list": [ 00:39:32.416 { 00:39:32.416 "name": "spare", 00:39:32.416 "uuid": "be27ea4d-43e1-5c9c-84f3-6114cc3f6a13", 00:39:32.416 "is_configured": true, 00:39:32.416 "data_offset": 256, 00:39:32.416 "data_size": 7936 00:39:32.416 }, 00:39:32.416 { 00:39:32.416 "name": "BaseBdev2", 00:39:32.416 "uuid": "e0ec1d2b-8081-5db8-adb1-b1d61b41bcd3", 00:39:32.416 "is_configured": true, 00:39:32.416 "data_offset": 256, 00:39:32.416 "data_size": 7936 00:39:32.416 } 00:39:32.416 ] 00:39:32.416 }' 00:39:32.416 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:32.416 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:39:32.416 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:32.416 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:32.416 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:32.416 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:32.416 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:32.416 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid1 00:39:32.416 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:32.416 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:39:32.416 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:32.416 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:32.416 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:32.416 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:32.416 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:32.416 01:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:32.675 01:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:32.675 "name": "raid_bdev1", 00:39:32.675 "uuid": "4d3d74ba-7812-491e-8081-aae42bc54fbc", 00:39:32.675 "strip_size_kb": 0, 00:39:32.675 "state": "online", 00:39:32.675 "raid_level": "raid1", 00:39:32.675 "superblock": true, 00:39:32.675 "num_base_bdevs": 2, 00:39:32.675 "num_base_bdevs_discovered": 2, 00:39:32.675 "num_base_bdevs_operational": 2, 00:39:32.675 "base_bdevs_list": [ 00:39:32.675 { 00:39:32.675 "name": "spare", 00:39:32.675 "uuid": "be27ea4d-43e1-5c9c-84f3-6114cc3f6a13", 00:39:32.675 "is_configured": true, 00:39:32.675 "data_offset": 256, 00:39:32.675 "data_size": 7936 00:39:32.675 }, 00:39:32.675 { 00:39:32.675 "name": "BaseBdev2", 00:39:32.675 "uuid": "e0ec1d2b-8081-5db8-adb1-b1d61b41bcd3", 00:39:32.675 "is_configured": true, 00:39:32.675 "data_offset": 256, 00:39:32.675 "data_size": 7936 00:39:32.675 } 00:39:32.675 ] 00:39:32.675 }' 00:39:32.675 01:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:32.675 01:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:33.241 01:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:39:33.241 [2024-07-25 01:05:55.795304] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:33.241 [2024-07-25 01:05:55.795345] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:33.241 [2024-07-25 01:05:55.795426] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:33.241 [2024-07-25 01:05:55.795488] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:33.241 [2024-07-25 01:05:55.795497] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:39:33.241 01:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # jq length 00:39:33.242 01:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:33.500 01:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:39:33.500 01:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # '[' false = true ']' 00:39:33.500 01:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:39:33.500 01:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:39:33.758 01:05:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:39:34.016 [2024-07-25 01:05:56.411381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:34.016 [2024-07-25 01:05:56.411480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:34.016 [2024-07-25 01:05:56.411542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:39:34.016 [2024-07-25 01:05:56.411569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:34.016 [2024-07-25 01:05:56.413618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:34.016 [2024-07-25 01:05:56.413670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:34.016 [2024-07-25 01:05:56.413739] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:39:34.016 [2024-07-25 01:05:56.413799] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:34.016 [2024-07-25 01:05:56.413923] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:34.016 spare 00:39:34.016 01:05:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:34.016 01:05:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:34.016 01:05:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:34.016 01:05:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:34.016 01:05:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:34.016 01:05:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:39:34.016 01:05:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:34.016 01:05:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:34.016 01:05:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:34.016 01:05:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:34.016 01:05:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:34.016 01:05:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:34.016 [2024-07-25 01:05:56.514003] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a280 00:39:34.016 [2024-07-25 
01:05:56.514025] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:39:34.016 [2024-07-25 01:05:56.514146] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:39:34.016 [2024-07-25 01:05:56.514243] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a280 00:39:34.016 [2024-07-25 01:05:56.514252] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a280 00:39:34.016 [2024-07-25 01:05:56.514310] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:34.275 01:05:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:34.275 "name": "raid_bdev1", 00:39:34.275 "uuid": "4d3d74ba-7812-491e-8081-aae42bc54fbc", 00:39:34.275 "strip_size_kb": 0, 00:39:34.275 "state": "online", 00:39:34.275 "raid_level": "raid1", 00:39:34.275 "superblock": true, 00:39:34.275 "num_base_bdevs": 2, 00:39:34.275 "num_base_bdevs_discovered": 2, 00:39:34.275 "num_base_bdevs_operational": 2, 00:39:34.275 "base_bdevs_list": [ 00:39:34.275 { 00:39:34.275 "name": "spare", 00:39:34.275 "uuid": "be27ea4d-43e1-5c9c-84f3-6114cc3f6a13", 00:39:34.275 "is_configured": true, 00:39:34.275 "data_offset": 256, 00:39:34.275 "data_size": 7936 00:39:34.275 }, 00:39:34.275 { 00:39:34.275 "name": "BaseBdev2", 00:39:34.275 "uuid": "e0ec1d2b-8081-5db8-adb1-b1d61b41bcd3", 00:39:34.275 "is_configured": true, 00:39:34.275 "data_offset": 256, 00:39:34.275 "data_size": 7936 00:39:34.275 } 00:39:34.275 ] 00:39:34.275 }' 00:39:34.275 01:05:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:34.275 01:05:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:34.535 01:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:34.535 01:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:34.535 01:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:39:34.535 01:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:39:34.535 01:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:34.535 01:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:34.535 01:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:34.794 01:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:34.794 "name": "raid_bdev1", 00:39:34.794 "uuid": "4d3d74ba-7812-491e-8081-aae42bc54fbc", 00:39:34.794 "strip_size_kb": 0, 00:39:34.794 "state": "online", 00:39:34.794 "raid_level": "raid1", 00:39:34.794 "superblock": true, 00:39:34.794 "num_base_bdevs": 2, 00:39:34.794 "num_base_bdevs_discovered": 2, 00:39:34.794 "num_base_bdevs_operational": 2, 00:39:34.794 "base_bdevs_list": [ 00:39:34.794 { 00:39:34.794 "name": "spare", 00:39:34.794 "uuid": "be27ea4d-43e1-5c9c-84f3-6114cc3f6a13", 00:39:34.794 "is_configured": true, 00:39:34.794 "data_offset": 256, 00:39:34.794 "data_size": 7936 00:39:34.794 }, 00:39:34.794 { 00:39:34.794 
"name": "BaseBdev2", 00:39:34.794 "uuid": "e0ec1d2b-8081-5db8-adb1-b1d61b41bcd3", 00:39:34.794 "is_configured": true, 00:39:34.794 "data_offset": 256, 00:39:34.794 "data_size": 7936 00:39:34.794 } 00:39:34.794 ] 00:39:34.794 }' 00:39:34.794 01:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:34.794 01:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:39:34.794 01:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:34.794 01:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:34.794 01:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:34.794 01:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:39:35.052 01:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:39:35.052 01:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:39:35.311 [2024-07-25 01:05:57.747721] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:35.311 01:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:35.311 01:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:35.311 01:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:35.311 01:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:35.311 01:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:35.311 01:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:35.311 01:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:35.311 01:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:35.311 01:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:35.311 01:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:35.311 01:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:35.311 01:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:35.570 01:05:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:35.570 "name": "raid_bdev1", 00:39:35.570 "uuid": "4d3d74ba-7812-491e-8081-aae42bc54fbc", 00:39:35.570 "strip_size_kb": 0, 00:39:35.570 "state": "online", 00:39:35.570 "raid_level": "raid1", 00:39:35.570 "superblock": true, 00:39:35.570 "num_base_bdevs": 2, 00:39:35.570 "num_base_bdevs_discovered": 1, 00:39:35.570 "num_base_bdevs_operational": 1, 
00:39:35.570 "base_bdevs_list": [ 00:39:35.570 { 00:39:35.570 "name": null, 00:39:35.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:35.570 "is_configured": false, 00:39:35.570 "data_offset": 256, 00:39:35.570 "data_size": 7936 00:39:35.570 }, 00:39:35.570 { 00:39:35.570 "name": "BaseBdev2", 00:39:35.570 "uuid": "e0ec1d2b-8081-5db8-adb1-b1d61b41bcd3", 00:39:35.570 "is_configured": true, 00:39:35.570 "data_offset": 256, 00:39:35.570 "data_size": 7936 00:39:35.570 } 00:39:35.570 ] 00:39:35.570 }' 00:39:35.570 01:05:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:35.570 01:05:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:36.138 01:05:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:39:36.138 [2024-07-25 01:05:58.719910] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:36.138 [2024-07-25 01:05:58.720100] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:39:36.138 [2024-07-25 01:05:58.720113] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:39:36.138 [2024-07-25 01:05:58.720210] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:36.138 [2024-07-25 01:05:58.735595] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:39:36.138 [2024-07-25 01:05:58.737516] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:36.139 01:05:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # sleep 1 00:39:37.516 01:05:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:37.516 01:05:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:37.516 01:05:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:37.516 01:05:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:37.516 01:05:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:37.516 01:05:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:37.516 01:05:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:37.516 01:05:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:37.516 "name": "raid_bdev1", 00:39:37.516 "uuid": "4d3d74ba-7812-491e-8081-aae42bc54fbc", 00:39:37.516 "strip_size_kb": 0, 00:39:37.516 "state": "online", 00:39:37.516 "raid_level": "raid1", 00:39:37.516 "superblock": true, 00:39:37.516 "num_base_bdevs": 2, 00:39:37.516 "num_base_bdevs_discovered": 2, 00:39:37.516 "num_base_bdevs_operational": 2, 00:39:37.516 "process": { 00:39:37.516 "type": "rebuild", 00:39:37.516 "target": "spare", 00:39:37.516 "progress": { 00:39:37.516 "blocks": 3072, 00:39:37.516 "percent": 38 00:39:37.516 } 00:39:37.516 }, 00:39:37.516 
"base_bdevs_list": [ 00:39:37.516 { 00:39:37.516 "name": "spare", 00:39:37.516 "uuid": "be27ea4d-43e1-5c9c-84f3-6114cc3f6a13", 00:39:37.516 "is_configured": true, 00:39:37.516 "data_offset": 256, 00:39:37.516 "data_size": 7936 00:39:37.516 }, 00:39:37.516 { 00:39:37.516 "name": "BaseBdev2", 00:39:37.516 "uuid": "e0ec1d2b-8081-5db8-adb1-b1d61b41bcd3", 00:39:37.516 "is_configured": true, 00:39:37.516 "data_offset": 256, 00:39:37.516 "data_size": 7936 00:39:37.516 } 00:39:37.516 ] 00:39:37.516 }' 00:39:37.516 01:06:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:37.516 01:06:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:37.516 01:06:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:37.516 01:06:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:39:37.516 01:06:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:39:37.775 [2024-07-25 01:06:00.339186] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:37.775 [2024-07-25 01:06:00.346852] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:39:37.775 [2024-07-25 01:06:00.346931] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:37.775 [2024-07-25 01:06:00.346946] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:37.775 [2024-07-25 01:06:00.346953] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:39:37.775 01:06:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:37.775 01:06:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:37.775 01:06:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:37.775 01:06:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:37.775 01:06:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:37.775 01:06:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:37.775 01:06:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:37.775 01:06:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:37.775 01:06:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:37.775 01:06:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:37.775 01:06:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:37.775 01:06:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:38.034 01:06:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:39:38.034 "name": "raid_bdev1", 00:39:38.034 "uuid": "4d3d74ba-7812-491e-8081-aae42bc54fbc", 00:39:38.034 "strip_size_kb": 0, 00:39:38.034 "state": "online", 00:39:38.034 "raid_level": "raid1", 00:39:38.034 "superblock": true, 00:39:38.034 "num_base_bdevs": 2, 00:39:38.034 "num_base_bdevs_discovered": 1, 00:39:38.034 "num_base_bdevs_operational": 1, 00:39:38.034 "base_bdevs_list": [ 00:39:38.034 { 00:39:38.034 "name": null, 00:39:38.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:38.034 "is_configured": false, 00:39:38.034 "data_offset": 256, 00:39:38.034 "data_size": 7936 00:39:38.034 }, 00:39:38.034 { 00:39:38.034 "name": "BaseBdev2", 00:39:38.034 "uuid": "e0ec1d2b-8081-5db8-adb1-b1d61b41bcd3", 00:39:38.034 "is_configured": true, 00:39:38.034 "data_offset": 256, 00:39:38.034 "data_size": 7936 00:39:38.034 } 00:39:38.034 ] 00:39:38.034 }' 00:39:38.034 01:06:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:38.034 01:06:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:38.601 01:06:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:39:38.859 [2024-07-25 01:06:01.374528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:38.859 [2024-07-25 01:06:01.374610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:38.859 [2024-07-25 01:06:01.374644] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:39:38.859 [2024-07-25 01:06:01.374669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:38.859 [2024-07-25 01:06:01.374872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:38.859 [2024-07-25 01:06:01.374901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:38.859 [2024-07-25 01:06:01.374988] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:39:38.859 [2024-07-25 01:06:01.375000] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:39:38.859 [2024-07-25 01:06:01.375009] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:39:38.859 [2024-07-25 01:06:01.375045] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:38.859 [2024-07-25 01:06:01.390317] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:39:38.859 spare 00:39:38.859 [2024-07-25 01:06:01.392245] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:38.859 01:06:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # sleep 1 00:39:39.792 01:06:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:39.792 01:06:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:39.792 01:06:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:39.792 01:06:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:39.792 01:06:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:39.792 01:06:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:39.792 01:06:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:40.050 01:06:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:40.050 "name": "raid_bdev1", 00:39:40.050 "uuid": "4d3d74ba-7812-491e-8081-aae42bc54fbc", 00:39:40.050 "strip_size_kb": 0, 00:39:40.050 "state": "online", 00:39:40.050 "raid_level": "raid1", 00:39:40.050 "superblock": true, 00:39:40.050 "num_base_bdevs": 2, 00:39:40.050 "num_base_bdevs_discovered": 2, 00:39:40.050 "num_base_bdevs_operational": 2, 00:39:40.050 "process": { 00:39:40.050 "type": "rebuild", 00:39:40.050 "target": "spare", 00:39:40.050 "progress": { 00:39:40.050 "blocks": 3072, 00:39:40.050 "percent": 38 00:39:40.050 } 00:39:40.050 }, 00:39:40.050 "base_bdevs_list": [ 00:39:40.050 { 00:39:40.050 "name": "spare", 00:39:40.050 "uuid": "be27ea4d-43e1-5c9c-84f3-6114cc3f6a13", 00:39:40.050 "is_configured": true, 00:39:40.050 "data_offset": 256, 00:39:40.050 "data_size": 7936 00:39:40.050 }, 00:39:40.050 { 00:39:40.050 "name": "BaseBdev2", 00:39:40.050 "uuid": "e0ec1d2b-8081-5db8-adb1-b1d61b41bcd3", 00:39:40.050 "is_configured": true, 00:39:40.050 "data_offset": 256, 00:39:40.050 "data_size": 7936 00:39:40.050 } 00:39:40.050 ] 00:39:40.050 }' 00:39:40.050 01:06:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:40.050 01:06:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:40.050 01:06:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:40.309 01:06:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:39:40.309 01:06:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:39:40.309 [2024-07-25 01:06:02.905821] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:40.568 [2024-07-25 01:06:03.001546] 
bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:39:40.568 [2024-07-25 01:06:03.001785] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:40.568 [2024-07-25 01:06:03.001836] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:40.568 [2024-07-25 01:06:03.001923] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:39:40.568 01:06:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:40.568 01:06:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:40.568 01:06:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:40.568 01:06:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:40.568 01:06:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:40.568 01:06:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:40.568 01:06:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:40.568 01:06:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:40.568 01:06:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:40.568 01:06:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:40.568 01:06:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:40.568 01:06:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:40.827 01:06:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:40.827 "name": "raid_bdev1", 00:39:40.827 "uuid": "4d3d74ba-7812-491e-8081-aae42bc54fbc", 00:39:40.827 "strip_size_kb": 0, 00:39:40.827 "state": "online", 00:39:40.827 "raid_level": "raid1", 00:39:40.827 "superblock": true, 00:39:40.827 "num_base_bdevs": 2, 00:39:40.827 "num_base_bdevs_discovered": 1, 00:39:40.827 "num_base_bdevs_operational": 1, 00:39:40.827 "base_bdevs_list": [ 00:39:40.827 { 00:39:40.827 "name": null, 00:39:40.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:40.827 "is_configured": false, 00:39:40.827 "data_offset": 256, 00:39:40.827 "data_size": 7936 00:39:40.827 }, 00:39:40.827 { 00:39:40.827 "name": "BaseBdev2", 00:39:40.827 "uuid": "e0ec1d2b-8081-5db8-adb1-b1d61b41bcd3", 00:39:40.827 "is_configured": true, 00:39:40.827 "data_offset": 256, 00:39:40.827 "data_size": 7936 00:39:40.827 } 00:39:40.827 ] 00:39:40.827 }' 00:39:40.827 01:06:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:40.827 01:06:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:41.393 01:06:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:41.393 01:06:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
00:39:41.393 01:06:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:39:41.393 01:06:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:39:41.393 01:06:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:41.393 01:06:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:41.393 01:06:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:41.651 01:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:41.651 "name": "raid_bdev1", 00:39:41.651 "uuid": "4d3d74ba-7812-491e-8081-aae42bc54fbc", 00:39:41.651 "strip_size_kb": 0, 00:39:41.651 "state": "online", 00:39:41.651 "raid_level": "raid1", 00:39:41.651 "superblock": true, 00:39:41.651 "num_base_bdevs": 2, 00:39:41.651 "num_base_bdevs_discovered": 1, 00:39:41.651 "num_base_bdevs_operational": 1, 00:39:41.651 "base_bdevs_list": [ 00:39:41.651 { 00:39:41.651 "name": null, 00:39:41.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:41.651 "is_configured": false, 00:39:41.651 "data_offset": 256, 00:39:41.651 "data_size": 7936 00:39:41.651 }, 00:39:41.651 { 00:39:41.651 "name": "BaseBdev2", 00:39:41.651 "uuid": "e0ec1d2b-8081-5db8-adb1-b1d61b41bcd3", 00:39:41.651 "is_configured": true, 00:39:41.651 "data_offset": 256, 00:39:41.651 "data_size": 7936 00:39:41.651 } 00:39:41.651 ] 00:39:41.651 }' 00:39:41.651 01:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:41.651 01:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:39:41.651 01:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:41.651 01:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:41.651 01:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:39:41.909 01:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:39:42.168 [2024-07-25 01:06:04.721713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:39:42.168 [2024-07-25 01:06:04.721981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:42.168 [2024-07-25 01:06:04.722054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:39:42.168 [2024-07-25 01:06:04.722145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:42.168 [2024-07-25 01:06:04.722360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:42.168 [2024-07-25 01:06:04.722566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:39:42.168 [2024-07-25 01:06:04.722679] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:39:42.168 [2024-07-25 01:06:04.722761] bdev_raid.c:3654:raid_bdev_examine_sb: 
*DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:39:42.168 [2024-07-25 01:06:04.722935] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:39:42.168 BaseBdev1 00:39:42.168 01:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # sleep 1 00:39:43.148 01:06:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:43.148 01:06:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:43.148 01:06:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:43.148 01:06:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:43.148 01:06:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:43.148 01:06:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:43.148 01:06:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:43.148 01:06:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:43.148 01:06:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:43.148 01:06:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:43.148 01:06:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:43.148 01:06:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:43.408 01:06:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:43.408 "name": "raid_bdev1", 00:39:43.408 "uuid": "4d3d74ba-7812-491e-8081-aae42bc54fbc", 00:39:43.408 "strip_size_kb": 0, 00:39:43.408 "state": "online", 00:39:43.408 "raid_level": "raid1", 00:39:43.408 "superblock": true, 00:39:43.408 "num_base_bdevs": 2, 00:39:43.408 "num_base_bdevs_discovered": 1, 00:39:43.408 "num_base_bdevs_operational": 1, 00:39:43.408 "base_bdevs_list": [ 00:39:43.408 { 00:39:43.408 "name": null, 00:39:43.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:43.408 "is_configured": false, 00:39:43.408 "data_offset": 256, 00:39:43.408 "data_size": 7936 00:39:43.408 }, 00:39:43.408 { 00:39:43.408 "name": "BaseBdev2", 00:39:43.408 "uuid": "e0ec1d2b-8081-5db8-adb1-b1d61b41bcd3", 00:39:43.408 "is_configured": true, 00:39:43.408 "data_offset": 256, 00:39:43.408 "data_size": 7936 00:39:43.408 } 00:39:43.408 ] 00:39:43.408 }' 00:39:43.408 01:06:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:43.408 01:06:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:43.976 01:06:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:43.977 01:06:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:43.977 01:06:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:39:43.977 01:06:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:39:43.977 01:06:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:43.977 01:06:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:43.977 01:06:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:44.236 01:06:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:44.236 "name": "raid_bdev1", 00:39:44.236 "uuid": "4d3d74ba-7812-491e-8081-aae42bc54fbc", 00:39:44.236 "strip_size_kb": 0, 00:39:44.236 "state": "online", 00:39:44.236 "raid_level": "raid1", 00:39:44.236 "superblock": true, 00:39:44.236 "num_base_bdevs": 2, 00:39:44.236 "num_base_bdevs_discovered": 1, 00:39:44.236 "num_base_bdevs_operational": 1, 00:39:44.236 "base_bdevs_list": [ 00:39:44.236 { 00:39:44.236 "name": null, 00:39:44.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:44.236 "is_configured": false, 00:39:44.236 "data_offset": 256, 00:39:44.236 "data_size": 7936 00:39:44.236 }, 00:39:44.236 { 00:39:44.236 "name": "BaseBdev2", 00:39:44.236 "uuid": "e0ec1d2b-8081-5db8-adb1-b1d61b41bcd3", 00:39:44.236 "is_configured": true, 00:39:44.236 "data_offset": 256, 00:39:44.236 "data_size": 7936 00:39:44.236 } 00:39:44.236 ] 00:39:44.236 }' 00:39:44.236 01:06:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:44.236 01:06:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:39:44.236 01:06:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:44.236 01:06:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:44.236 01:06:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:44.236 01:06:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:39:44.236 01:06:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:44.236 01:06:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:44.236 01:06:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:44.236 01:06:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:44.236 01:06:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:44.236 01:06:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:44.495 01:06:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:44.495 01:06:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:44.495 01:06:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:39:44.495 01:06:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:44.495 [2024-07-25 01:06:07.054224] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:44.495 [2024-07-25 01:06:07.054399] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:39:44.495 [2024-07-25 01:06:07.054410] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:39:44.495 request: 00:39:44.495 { 00:39:44.495 "base_bdev": "BaseBdev1", 00:39:44.495 "raid_bdev": "raid_bdev1", 00:39:44.495 "method": "bdev_raid_add_base_bdev", 00:39:44.495 "req_id": 1 00:39:44.495 } 00:39:44.495 Got JSON-RPC error response 00:39:44.495 response: 00:39:44.495 { 00:39:44.495 "code": -22, 00:39:44.495 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:39:44.496 } 00:39:44.496 01:06:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:39:44.496 01:06:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:44.496 01:06:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:44.496 01:06:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:44.496 01:06:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # sleep 1 00:39:45.433 01:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:45.433 01:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:45.433 01:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:45.433 01:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:45.433 01:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:45.433 01:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:45.433 01:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:45.433 01:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:45.433 01:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:45.433 01:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:45.433 01:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:45.433 01:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:45.692 
01:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:45.692 "name": "raid_bdev1", 00:39:45.692 "uuid": "4d3d74ba-7812-491e-8081-aae42bc54fbc", 00:39:45.692 "strip_size_kb": 0, 00:39:45.692 "state": "online", 00:39:45.692 "raid_level": "raid1", 00:39:45.692 "superblock": true, 00:39:45.692 "num_base_bdevs": 2, 00:39:45.692 "num_base_bdevs_discovered": 1, 00:39:45.692 "num_base_bdevs_operational": 1, 00:39:45.692 "base_bdevs_list": [ 00:39:45.692 { 00:39:45.692 "name": null, 00:39:45.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:45.692 "is_configured": false, 00:39:45.692 "data_offset": 256, 00:39:45.692 "data_size": 7936 00:39:45.692 }, 00:39:45.692 { 00:39:45.692 "name": "BaseBdev2", 00:39:45.692 "uuid": "e0ec1d2b-8081-5db8-adb1-b1d61b41bcd3", 00:39:45.692 "is_configured": true, 00:39:45.692 "data_offset": 256, 00:39:45.692 "data_size": 7936 00:39:45.692 } 00:39:45.692 ] 00:39:45.693 }' 00:39:45.693 01:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:45.693 01:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:46.630 01:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:46.630 01:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:46.630 01:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:39:46.630 01:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:39:46.630 01:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:46.630 01:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:46.630 01:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:46.630 01:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:46.630 "name": "raid_bdev1", 00:39:46.630 "uuid": "4d3d74ba-7812-491e-8081-aae42bc54fbc", 00:39:46.630 "strip_size_kb": 0, 00:39:46.630 "state": "online", 00:39:46.630 "raid_level": "raid1", 00:39:46.630 "superblock": true, 00:39:46.630 "num_base_bdevs": 2, 00:39:46.630 "num_base_bdevs_discovered": 1, 00:39:46.630 "num_base_bdevs_operational": 1, 00:39:46.630 "base_bdevs_list": [ 00:39:46.630 { 00:39:46.630 "name": null, 00:39:46.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:46.630 "is_configured": false, 00:39:46.630 "data_offset": 256, 00:39:46.630 "data_size": 7936 00:39:46.630 }, 00:39:46.630 { 00:39:46.630 "name": "BaseBdev2", 00:39:46.630 "uuid": "e0ec1d2b-8081-5db8-adb1-b1d61b41bcd3", 00:39:46.630 "is_configured": true, 00:39:46.630 "data_offset": 256, 00:39:46.630 "data_size": 7936 00:39:46.630 } 00:39:46.630 ] 00:39:46.630 }' 00:39:46.630 01:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:46.631 01:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:39:46.631 01:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:46.890 01:06:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:46.890 01:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # killprocess 163516 00:39:46.890 01:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 163516 ']' 00:39:46.890 01:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 163516 00:39:46.890 01:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:39:46.890 01:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:46.890 01:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 163516 00:39:46.890 killing process with pid 163516 00:39:46.890 Received shutdown signal, test time was about 60.000000 seconds 00:39:46.890 00:39:46.890 Latency(us) 00:39:46.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:46.890 =================================================================================================================== 00:39:46.890 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:46.890 01:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:46.890 01:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:46.890 01:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 163516' 00:39:46.890 01:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 163516 00:39:46.890 01:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 163516 00:39:46.890 [2024-07-25 01:06:09.318443] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:46.890 [2024-07-25 01:06:09.318546] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:46.890 [2024-07-25 01:06:09.318592] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:46.890 [2024-07-25 01:06:09.318601] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state offline 00:39:47.149 [2024-07-25 01:06:09.599398] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:48.528 ************************************ 00:39:48.528 END TEST raid_rebuild_test_sb_md_interleaved 00:39:48.528 ************************************ 00:39:48.528 01:06:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # return 0 00:39:48.528 00:39:48.528 real 0m28.342s 00:39:48.528 user 0m44.190s 00:39:48.528 sys 0m3.219s 00:39:48.528 01:06:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:48.528 01:06:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:48.528 01:06:10 bdev_raid -- bdev/bdev_raid.sh@916 -- # trap - EXIT 00:39:48.528 01:06:10 bdev_raid -- bdev/bdev_raid.sh@917 -- # cleanup 00:39:48.528 01:06:10 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 163516 ']' 00:39:48.528 01:06:10 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 163516 00:39:48.528 01:06:10 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:39:48.528 
************************************ 00:39:48.528 END TEST bdev_raid 00:39:48.528 ************************************ 00:39:48.528 00:39:48.528 real 23m37.755s 00:39:48.528 user 38m52.934s 00:39:48.528 sys 3m31.764s 00:39:48.528 01:06:10 bdev_raid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:48.528 01:06:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:48.528 01:06:10 -- spdk/autotest.sh@191 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:39:48.528 01:06:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:48.528 01:06:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:48.528 01:06:10 -- common/autotest_common.sh@10 -- # set +x 00:39:48.528 ************************************ 00:39:48.528 START TEST bdevperf_config 00:39:48.528 ************************************ 00:39:48.528 01:06:10 bdevperf_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:39:48.528 * Looking for test storage... 00:39:48.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:48.528 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:39:48.528 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:39:48.528 01:06:11 bdevperf_config 
-- bdevperf/common.sh@10 -- # local filename= 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:39:48.528 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:39:48.528 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:39:48.528 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:48.528 01:06:11 bdevperf_config -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:39:53.804 01:06:15 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-25 01:06:11.167462] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:39:53.804 [2024-07-25 01:06:11.167661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164341 ] 00:39:53.804 Using job config with 4 jobs 00:39:53.804 [2024-07-25 01:06:11.347728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:53.804 [2024-07-25 01:06:11.556756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:53.804 cpumask for '\''job0'\'' is too big 00:39:53.804 cpumask for '\''job1'\'' is too big 00:39:53.804 cpumask for '\''job2'\'' is too big 00:39:53.804 cpumask for '\''job3'\'' is too big 00:39:53.804 Running I/O for 2 seconds... 
00:39:53.804 00:39:53.804 Latency(us) 00:39:53.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:53.804 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:53.804 Malloc0 : 2.01 33857.62 33.06 0.00 0.00 7555.35 1380.94 11983.73 00:39:53.804 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:53.804 Malloc0 : 2.01 33836.40 33.04 0.00 0.00 7547.16 1373.14 10610.59 00:39:53.804 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:53.804 Malloc0 : 2.01 33816.01 33.02 0.00 0.00 7539.39 1373.14 9175.04 00:39:53.804 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:53.804 Malloc0 : 2.02 33890.84 33.10 0.00 0.00 7510.30 635.86 8301.23 00:39:53.804 =================================================================================================================== 00:39:53.804 Total : 135400.87 132.23 0.00 0.00 7538.03 635.86 11983.73' 00:39:53.804 01:06:15 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-25 01:06:11.167462] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:39:53.804 [2024-07-25 01:06:11.167661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164341 ] 00:39:53.804 Using job config with 4 jobs 00:39:53.804 [2024-07-25 01:06:11.347728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:53.804 [2024-07-25 01:06:11.556756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:53.804 cpumask for '\''job0'\'' is too big 00:39:53.804 cpumask for '\''job1'\'' is too big 00:39:53.804 cpumask for '\''job2'\'' is too big 00:39:53.804 cpumask for '\''job3'\'' is too big 00:39:53.804 Running I/O for 2 seconds... 00:39:53.804 00:39:53.804 Latency(us) 00:39:53.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:53.804 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:53.804 Malloc0 : 2.01 33857.62 33.06 0.00 0.00 7555.35 1380.94 11983.73 00:39:53.804 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:53.804 Malloc0 : 2.01 33836.40 33.04 0.00 0.00 7547.16 1373.14 10610.59 00:39:53.804 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:53.804 Malloc0 : 2.01 33816.01 33.02 0.00 0.00 7539.39 1373.14 9175.04 00:39:53.804 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:53.804 Malloc0 : 2.02 33890.84 33.10 0.00 0.00 7510.30 635.86 8301.23 00:39:53.804 =================================================================================================================== 00:39:53.804 Total : 135400.87 132.23 0.00 0.00 7538.03 635.86 11983.73' 00:39:53.804 01:06:15 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-25 01:06:11.167462] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:39:53.804 [2024-07-25 01:06:11.167661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164341 ] 00:39:53.804 Using job config with 4 jobs 00:39:53.804 [2024-07-25 01:06:11.347728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:53.804 [2024-07-25 01:06:11.556756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:53.804 cpumask for '\''job0'\'' is too big 00:39:53.804 cpumask for '\''job1'\'' is too big 00:39:53.804 cpumask for '\''job2'\'' is too big 00:39:53.804 cpumask for '\''job3'\'' is too big 00:39:53.804 Running I/O for 2 seconds... 00:39:53.804 00:39:53.804 Latency(us) 00:39:53.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:53.804 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:53.804 Malloc0 : 2.01 33857.62 33.06 0.00 0.00 7555.35 1380.94 11983.73 00:39:53.804 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:53.804 Malloc0 : 2.01 33836.40 33.04 0.00 0.00 7547.16 1373.14 10610.59 00:39:53.804 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:53.804 Malloc0 : 2.01 33816.01 33.02 0.00 0.00 7539.39 1373.14 9175.04 00:39:53.804 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:53.804 Malloc0 : 2.02 33890.84 33.10 0.00 0.00 7510.30 635.86 8301.23 00:39:53.804 =================================================================================================================== 00:39:53.804 Total : 135400.87 132.23 0.00 0.00 7538.03 635.86 11983.73' 00:39:53.804 01:06:15 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:39:53.804 01:06:15 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:39:53.804 01:06:15 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:39:53.804 01:06:15 bdevperf_config -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:39:53.804 [2024-07-25 01:06:15.607530] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:39:53.804 [2024-07-25 01:06:15.608372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164403 ] 00:39:53.804 [2024-07-25 01:06:15.787728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:53.804 [2024-07-25 01:06:16.005121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:53.804 cpumask for 'job0' is too big 00:39:53.804 cpumask for 'job1' is too big 00:39:53.804 cpumask for 'job2' is too big 00:39:53.804 cpumask for 'job3' is too big 00:39:57.993 01:06:19 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:39:57.993 Running I/O for 2 seconds... 
00:39:57.993 00:39:57.993 Latency(us) 00:39:57.993 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:57.993 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:57.993 Malloc0 : 2.01 34247.54 33.44 0.00 0.00 7469.40 1388.74 11546.82 00:39:57.993 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:57.993 Malloc0 : 2.01 34226.20 33.42 0.00 0.00 7461.28 1302.92 10173.68 00:39:57.993 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:57.993 Malloc0 : 2.01 34205.59 33.40 0.00 0.00 7454.26 1318.52 8862.96 00:39:57.993 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:57.993 Malloc0 : 2.02 34279.12 33.48 0.00 0.00 7426.67 670.96 8051.57 00:39:57.993 =================================================================================================================== 00:39:57.993 Total : 136958.45 133.75 0.00 0.00 7452.88 670.96 11546.82' 00:39:57.993 01:06:19 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:39:57.993 01:06:19 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:39:57.993 01:06:19 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:39:57.993 01:06:19 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:39:57.993 01:06:19 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:39:57.993 01:06:19 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:39:57.993 01:06:19 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:39:57.993 01:06:19 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:39:57.993 01:06:19 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:57.993 00:39:57.993 01:06:19 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:57.993 01:06:19 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:39:57.993 01:06:19 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:39:57.993 01:06:19 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:39:57.993 01:06:19 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:39:57.993 01:06:19 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:39:57.993 01:06:19 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:39:57.993 00:39:57.993 01:06:19 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:57.993 01:06:19 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:57.993 01:06:19 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:39:57.993 01:06:19 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:39:57.993 01:06:19 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:39:57.993 01:06:19 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:39:57.993 01:06:19 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:39:57.993 01:06:19 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:39:57.993 01:06:19 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:57.993 00:39:57.993 01:06:19 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:57.993 01:06:19 bdevperf_config -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 
00:40:02.173 01:06:24 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-25 01:06:20.070944] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:40:02.173 [2024-07-25 01:06:20.071157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164460 ] 00:40:02.173 Using job config with 3 jobs 00:40:02.173 [2024-07-25 01:06:20.250919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:02.173 [2024-07-25 01:06:20.454060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:02.173 cpumask for '\''job0'\'' is too big 00:40:02.174 cpumask for '\''job1'\'' is too big 00:40:02.174 cpumask for '\''job2'\'' is too big 00:40:02.174 Running I/O for 2 seconds... 00:40:02.174 00:40:02.174 Latency(us) 00:40:02.174 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:02.174 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:40:02.174 Malloc0 : 2.01 44685.80 43.64 0.00 0.00 5723.90 1396.54 8738.13 00:40:02.174 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:40:02.174 Malloc0 : 2.01 44655.98 43.61 0.00 0.00 5718.07 1334.13 7177.75 00:40:02.174 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:40:02.174 Malloc0 : 2.01 44626.55 43.58 0.00 0.00 5712.82 1334.13 6335.15 00:40:02.174 =================================================================================================================== 00:40:02.174 Total : 133968.33 130.83 0.00 0.00 5718.26 1334.13 8738.13' 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-25 01:06:20.070944] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:40:02.174 [2024-07-25 01:06:20.071157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164460 ] 00:40:02.174 Using job config with 3 jobs 00:40:02.174 [2024-07-25 01:06:20.250919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:02.174 [2024-07-25 01:06:20.454060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:02.174 cpumask for '\''job0'\'' is too big 00:40:02.174 cpumask for '\''job1'\'' is too big 00:40:02.174 cpumask for '\''job2'\'' is too big 00:40:02.174 Running I/O for 2 seconds... 
00:40:02.174 00:40:02.174 Latency(us) 00:40:02.174 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:02.174 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:40:02.174 Malloc0 : 2.01 44685.80 43.64 0.00 0.00 5723.90 1396.54 8738.13 00:40:02.174 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:40:02.174 Malloc0 : 2.01 44655.98 43.61 0.00 0.00 5718.07 1334.13 7177.75 00:40:02.174 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:40:02.174 Malloc0 : 2.01 44626.55 43.58 0.00 0.00 5712.82 1334.13 6335.15 00:40:02.174 =================================================================================================================== 00:40:02.174 Total : 133968.33 130.83 0.00 0.00 5718.26 1334.13 8738.13' 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-25 01:06:20.070944] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:40:02.174 [2024-07-25 01:06:20.071157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164460 ] 00:40:02.174 Using job config with 3 jobs 00:40:02.174 [2024-07-25 01:06:20.250919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:02.174 [2024-07-25 01:06:20.454060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:02.174 cpumask for '\''job0'\'' is too big 00:40:02.174 cpumask for '\''job1'\'' is too big 00:40:02.174 cpumask for '\''job2'\'' is too big 00:40:02.174 Running I/O for 2 seconds... 
00:40:02.174 00:40:02.174 Latency(us) 00:40:02.174 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:02.174 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:40:02.174 Malloc0 : 2.01 44685.80 43.64 0.00 0.00 5723.90 1396.54 8738.13 00:40:02.174 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:40:02.174 Malloc0 : 2.01 44655.98 43.61 0.00 0.00 5718.07 1334.13 7177.75 00:40:02.174 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:40:02.174 Malloc0 : 2.01 44626.55 43.58 0.00 0.00 5712.82 1334.13 6335.15 00:40:02.174 =================================================================================================================== 00:40:02.174 Total : 133968.33 130.83 0.00 0.00 5718.26 1334.13 8738.13' 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:40:02.174 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:40:02.174 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:40:02.174 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:40:02.174 01:06:24 
bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:40:02.174 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:40:02.174 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:40:02.174 01:06:24 bdevperf_config -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:40:06.362 01:06:28 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-25 01:06:24.503727] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:40:06.362 [2024-07-25 01:06:24.503945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164519 ] 00:40:06.362 Using job config with 4 jobs 00:40:06.362 [2024-07-25 01:06:24.683979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:06.362 [2024-07-25 01:06:24.889621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:06.362 cpumask for '\''job0'\'' is too big 00:40:06.362 cpumask for '\''job1'\'' is too big 00:40:06.362 cpumask for '\''job2'\'' is too big 00:40:06.362 cpumask for '\''job3'\'' is too big 00:40:06.362 Running I/O for 2 seconds... 
00:40:06.362 00:40:06.362 Latency(us) 00:40:06.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:06.362 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:40:06.362 Malloc0 : 2.02 16583.72 16.20 0.00 0.00 15427.12 2902.31 23592.96 00:40:06.362 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:40:06.362 Malloc1 : 2.02 16573.09 16.18 0.00 0.00 15423.51 3370.42 23592.96 00:40:06.362 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:40:06.362 Malloc0 : 2.02 16562.99 16.17 0.00 0.00 15393.74 2808.69 20721.86 00:40:06.362 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:40:06.362 Malloc1 : 2.03 16552.71 16.16 0.00 0.00 15392.21 3354.82 20721.86 00:40:06.362 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:40:06.362 Malloc0 : 2.03 16542.64 16.15 0.00 0.00 15361.36 2730.67 17975.59 00:40:06.362 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:40:06.362 Malloc1 : 2.03 16612.74 16.22 0.00 0.00 15285.79 3323.61 17975.59 00:40:06.362 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:40:06.362 Malloc0 : 2.04 16602.66 16.21 0.00 0.00 15258.46 2746.27 16352.79 00:40:06.362 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:40:06.362 Malloc1 : 2.04 16592.29 16.20 0.00 0.00 15257.80 3198.78 16477.62 00:40:06.362 =================================================================================================================== 00:40:06.362 Total : 132622.85 129.51 0.00 0.00 15349.76 2730.67 23592.96' 00:40:06.362 01:06:28 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-25 01:06:24.503727] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:40:06.362 [2024-07-25 01:06:24.503945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164519 ] 00:40:06.362 Using job config with 4 jobs 00:40:06.362 [2024-07-25 01:06:24.683979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:06.362 [2024-07-25 01:06:24.889621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:06.362 cpumask for '\''job0'\'' is too big 00:40:06.362 cpumask for '\''job1'\'' is too big 00:40:06.362 cpumask for '\''job2'\'' is too big 00:40:06.362 cpumask for '\''job3'\'' is too big 00:40:06.362 Running I/O for 2 seconds... 
00:40:06.362 00:40:06.362 Latency(us) 00:40:06.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:06.362 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:40:06.362 Malloc0 : 2.02 16583.72 16.20 0.00 0.00 15427.12 2902.31 23592.96 00:40:06.362 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:40:06.362 Malloc1 : 2.02 16573.09 16.18 0.00 0.00 15423.51 3370.42 23592.96 00:40:06.362 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:40:06.362 Malloc0 : 2.02 16562.99 16.17 0.00 0.00 15393.74 2808.69 20721.86 00:40:06.362 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:40:06.362 Malloc1 : 2.03 16552.71 16.16 0.00 0.00 15392.21 3354.82 20721.86 00:40:06.362 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:40:06.362 Malloc0 : 2.03 16542.64 16.15 0.00 0.00 15361.36 2730.67 17975.59 00:40:06.362 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:40:06.362 Malloc1 : 2.03 16612.74 16.22 0.00 0.00 15285.79 3323.61 17975.59 00:40:06.362 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:40:06.362 Malloc0 : 2.04 16602.66 16.21 0.00 0.00 15258.46 2746.27 16352.79 00:40:06.362 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:40:06.362 Malloc1 : 2.04 16592.29 16.20 0.00 0.00 15257.80 3198.78 16477.62 00:40:06.362 =================================================================================================================== 00:40:06.362 Total : 132622.85 129.51 0.00 0.00 15349.76 2730.67 23592.96' 00:40:06.363 01:06:28 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-25 01:06:24.503727] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:40:06.363 [2024-07-25 01:06:24.503945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164519 ] 00:40:06.363 Using job config with 4 jobs 00:40:06.363 [2024-07-25 01:06:24.683979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:06.363 [2024-07-25 01:06:24.889621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:06.363 cpumask for '\''job0'\'' is too big 00:40:06.363 cpumask for '\''job1'\'' is too big 00:40:06.363 cpumask for '\''job2'\'' is too big 00:40:06.363 cpumask for '\''job3'\'' is too big 00:40:06.363 Running I/O for 2 seconds... 
00:40:06.363 00:40:06.363 Latency(us) 00:40:06.363 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:06.363 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:40:06.363 Malloc0 : 2.02 16583.72 16.20 0.00 0.00 15427.12 2902.31 23592.96 00:40:06.363 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:40:06.363 Malloc1 : 2.02 16573.09 16.18 0.00 0.00 15423.51 3370.42 23592.96 00:40:06.363 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:40:06.363 Malloc0 : 2.02 16562.99 16.17 0.00 0.00 15393.74 2808.69 20721.86 00:40:06.363 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:40:06.363 Malloc1 : 2.03 16552.71 16.16 0.00 0.00 15392.21 3354.82 20721.86 00:40:06.363 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:40:06.363 Malloc0 : 2.03 16542.64 16.15 0.00 0.00 15361.36 2730.67 17975.59 00:40:06.363 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:40:06.363 Malloc1 : 2.03 16612.74 16.22 0.00 0.00 15285.79 3323.61 17975.59 00:40:06.363 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:40:06.363 Malloc0 : 2.04 16602.66 16.21 0.00 0.00 15258.46 2746.27 16352.79 00:40:06.363 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:40:06.363 Malloc1 : 2.04 16592.29 16.20 0.00 0.00 15257.80 3198.78 16477.62 00:40:06.363 =================================================================================================================== 00:40:06.363 Total : 132622.85 129.51 0.00 0.00 15349.76 2730.67 23592.96' 00:40:06.363 01:06:28 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:40:06.363 01:06:28 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:40:06.363 01:06:28 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:40:06.363 01:06:28 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:40:06.363 01:06:28 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:40:06.363 01:06:28 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:40:06.363 00:40:06.363 real 0m17.931s 00:40:06.363 user 0m16.175s 00:40:06.363 sys 0m1.198s 00:40:06.363 01:06:28 bdevperf_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:06.363 01:06:28 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:40:06.363 ************************************ 00:40:06.363 END TEST bdevperf_config 00:40:06.363 ************************************ 00:40:06.363 01:06:28 -- spdk/autotest.sh@192 -- # uname -s 00:40:06.363 01:06:28 -- spdk/autotest.sh@192 -- # [[ Linux == Linux ]] 00:40:06.363 01:06:28 -- spdk/autotest.sh@193 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:40:06.363 01:06:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:06.363 01:06:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:06.363 01:06:28 -- common/autotest_common.sh@10 -- # set +x 00:40:06.363 ************************************ 00:40:06.363 START TEST reactor_set_interrupt 00:40:06.363 ************************************ 00:40:06.363 01:06:28 reactor_set_interrupt -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:40:06.624 * Looking for test storage... 00:40:06.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:40:06.624 01:06:29 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:40:06.624 01:06:29 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:40:06.624 01:06:29 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:40:06.624 01:06:29 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:40:06.624 01:06:29 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:40:06.624 01:06:29 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:40:06.624 01:06:29 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:40:06.624 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:40:06.624 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:40:06.624 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:40:06.624 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:40:06.624 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:40:06.624 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:40:06.624 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:40:06.624 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 
00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_CET=n 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:40:06.624 01:06:29 reactor_set_interrupt -- 
common/build_config.sh@51 -- # CONFIG_XNVME=n 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:40:06.624 01:06:29 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:40:06.625 01:06:29 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:40:06.625 01:06:29 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:40:06.625 01:06:29 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:40:06.625 01:06:29 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:40:06.625 01:06:29 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:40:06.625 01:06:29 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:40:06.625 01:06:29 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:40:06.625 01:06:29 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:40:06.625 01:06:29 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:40:06.625 01:06:29 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:40:06.625 01:06:29 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:40:06.625 01:06:29 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:40:06.625 01:06:29 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:40:06.625 01:06:29 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:40:06.625 01:06:29 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_FC=n 00:40:06.625 01:06:29 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:40:06.625 01:06:29 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:40:06.625 01:06:29 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_RAID5F=y 00:40:06.625 01:06:29 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:40:06.625 01:06:29 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:40:06.625 01:06:29 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:40:06.625 01:06:29 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:40:06.625 01:06:29 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:40:06.625 01:06:29 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:40:06.625 01:06:29 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:40:06.625 01:06:29 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:40:06.625 01:06:29 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:40:06.625 01:06:29 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_URING=n 00:40:06.625 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:40:06.625 01:06:29 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:40:06.625 01:06:29 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 
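The CONFIG_* assignments listed above mirror the options this SPDK tree was built with; once build_config.sh is sourced, a test can branch on them directly. A purely illustrative sketch (the flag name is taken from the listing above; the check itself is not part of the traced script):

source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
if [[ "$CONFIG_UBSAN" == y ]]; then
        echo "running against a UBSAN-instrumented build"   # CONFIG_UBSAN=y in the listing above
fi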
00:40:06.625 01:06:29 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:40:06.625 01:06:29 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:40:06.625 01:06:29 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:40:06.625 01:06:29 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:40:06.625 01:06:29 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:40:06.625 01:06:29 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:40:06.625 01:06:29 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:40:06.625 01:06:29 reactor_set_interrupt -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:40:06.625 01:06:29 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:40:06.625 01:06:29 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:40:06.625 01:06:29 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:40:06.625 01:06:29 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:40:06.625 01:06:29 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:40:06.625 #define SPDK_CONFIG_H 00:40:06.625 #define SPDK_CONFIG_APPS 1 00:40:06.625 #define SPDK_CONFIG_ARCH native 00:40:06.625 #define SPDK_CONFIG_ASAN 1 00:40:06.625 #undef SPDK_CONFIG_AVAHI 00:40:06.625 #undef SPDK_CONFIG_CET 00:40:06.625 #define SPDK_CONFIG_COVERAGE 1 00:40:06.625 #define SPDK_CONFIG_CROSS_PREFIX 00:40:06.625 #undef SPDK_CONFIG_CRYPTO 00:40:06.625 #undef SPDK_CONFIG_CRYPTO_MLX5 00:40:06.625 #undef SPDK_CONFIG_CUSTOMOCF 00:40:06.625 #undef SPDK_CONFIG_DAOS 00:40:06.625 #define SPDK_CONFIG_DAOS_DIR 00:40:06.625 #define SPDK_CONFIG_DEBUG 1 00:40:06.625 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:40:06.625 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:40:06.625 #define SPDK_CONFIG_DPDK_INC_DIR 00:40:06.625 #define SPDK_CONFIG_DPDK_LIB_DIR 00:40:06.625 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:40:06.625 #undef SPDK_CONFIG_DPDK_UADK 00:40:06.625 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:40:06.625 #define SPDK_CONFIG_EXAMPLES 1 00:40:06.625 #undef SPDK_CONFIG_FC 00:40:06.625 #define SPDK_CONFIG_FC_PATH 00:40:06.625 #define SPDK_CONFIG_FIO_PLUGIN 1 00:40:06.625 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:40:06.625 #undef SPDK_CONFIG_FUSE 00:40:06.625 #undef SPDK_CONFIG_FUZZER 00:40:06.625 #define SPDK_CONFIG_FUZZER_LIB 00:40:06.625 #undef SPDK_CONFIG_GOLANG 00:40:06.625 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:40:06.625 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:40:06.625 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:40:06.625 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:40:06.625 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:40:06.625 #undef SPDK_CONFIG_HAVE_LIBBSD 00:40:06.625 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:40:06.625 #define SPDK_CONFIG_IDXD 1 00:40:06.625 #undef SPDK_CONFIG_IDXD_KERNEL 00:40:06.625 #undef SPDK_CONFIG_IPSEC_MB 00:40:06.625 #define SPDK_CONFIG_IPSEC_MB_DIR 00:40:06.625 #define SPDK_CONFIG_ISAL 1 00:40:06.625 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:40:06.625 #define 
SPDK_CONFIG_ISCSI_INITIATOR 1 00:40:06.625 #define SPDK_CONFIG_LIBDIR 00:40:06.625 #undef SPDK_CONFIG_LTO 00:40:06.625 #define SPDK_CONFIG_MAX_LCORES 128 00:40:06.625 #define SPDK_CONFIG_NVME_CUSE 1 00:40:06.625 #undef SPDK_CONFIG_OCF 00:40:06.625 #define SPDK_CONFIG_OCF_PATH 00:40:06.625 #define SPDK_CONFIG_OPENSSL_PATH 00:40:06.625 #undef SPDK_CONFIG_PGO_CAPTURE 00:40:06.625 #define SPDK_CONFIG_PGO_DIR 00:40:06.625 #undef SPDK_CONFIG_PGO_USE 00:40:06.625 #define SPDK_CONFIG_PREFIX /usr/local 00:40:06.625 #define SPDK_CONFIG_RAID5F 1 00:40:06.625 #undef SPDK_CONFIG_RBD 00:40:06.625 #define SPDK_CONFIG_RDMA 1 00:40:06.625 #define SPDK_CONFIG_RDMA_PROV verbs 00:40:06.625 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:40:06.625 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:40:06.625 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:40:06.625 #undef SPDK_CONFIG_SHARED 00:40:06.625 #undef SPDK_CONFIG_SMA 00:40:06.625 #define SPDK_CONFIG_TESTS 1 00:40:06.625 #undef SPDK_CONFIG_TSAN 00:40:06.625 #undef SPDK_CONFIG_UBLK 00:40:06.625 #define SPDK_CONFIG_UBSAN 1 00:40:06.625 #define SPDK_CONFIG_UNIT_TESTS 1 00:40:06.625 #undef SPDK_CONFIG_URING 00:40:06.625 #define SPDK_CONFIG_URING_PATH 00:40:06.625 #undef SPDK_CONFIG_URING_ZNS 00:40:06.625 #undef SPDK_CONFIG_USDT 00:40:06.625 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:40:06.625 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:40:06.625 #undef SPDK_CONFIG_VFIO_USER 00:40:06.625 #define SPDK_CONFIG_VFIO_USER_DIR 00:40:06.625 #define SPDK_CONFIG_VHOST 1 00:40:06.625 #define SPDK_CONFIG_VIRTIO 1 00:40:06.625 #undef SPDK_CONFIG_VTUNE 00:40:06.625 #define SPDK_CONFIG_VTUNE_DIR 00:40:06.625 #define SPDK_CONFIG_WERROR 1 00:40:06.625 #define SPDK_CONFIG_WPDK_DIR 00:40:06.625 #undef SPDK_CONFIG_XNVME 00:40:06.625 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:40:06.625 01:06:29 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:40:06.625 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:06.625 01:06:29 reactor_set_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:06.625 01:06:29 reactor_set_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:06.625 01:06:29 reactor_set_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:06.625 01:06:29 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:06.625 01:06:29 reactor_set_interrupt -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:06.625 01:06:29 reactor_set_interrupt -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:06.625 01:06:29 reactor_set_interrupt -- paths/export.sh@5 -- # export PATH 00:40:06.625 01:06:29 reactor_set_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:06.625 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:40:06.625 01:06:29 reactor_set_interrupt -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:40:06.625 01:06:29 reactor_set_interrupt -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:40:06.625 01:06:29 reactor_set_interrupt -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:40:06.626 01:06:29 reactor_set_interrupt -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:40:06.626 01:06:29 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:40:06.626 01:06:29 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:40:06.626 01:06:29 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:40:06.626 01:06:29 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:40:06.626 01:06:29 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:40:06.626 01:06:29 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:40:06.626 01:06:29 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:40:06.626 01:06:29 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:40:06.626 01:06:29 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:40:06.626 01:06:29 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:40:06.626 01:06:29 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:40:06.626 01:06:29 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:40:06.626 01:06:29 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:40:06.626 01:06:29 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E' 00:40:06.626 01:06:29 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:40:06.626 01:06:29 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:40:06.626 01:06:29 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:40:06.626 01:06:29 reactor_set_interrupt -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:40:06.626 01:06:29 reactor_set_interrupt -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@58 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@62 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@64 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@66 -- # : 1 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@68 -- # : 1 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@70 -- # : 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@72 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@74 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@76 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@78 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@80 -- # : 1 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@82 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@84 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@86 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@88 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@90 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@92 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@94 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:40:06.626 01:06:29 reactor_set_interrupt -- 
common/autotest_common.sh@96 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@98 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@100 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@102 -- # : rdma 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@104 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@106 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@108 -- # : 1 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@110 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@112 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@114 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@116 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@118 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@120 -- # : 1 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@122 -- # : 1 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@124 -- # : 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@126 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@128 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@130 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@132 -- # : 0 00:40:06.626 01:06:29 
reactor_set_interrupt -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@134 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@136 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@138 -- # : 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@140 -- # : true 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@142 -- # : 1 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@144 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@146 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@148 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@150 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@152 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@154 -- # : 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@156 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@158 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@160 -- # : 0 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:40:06.626 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@162 -- # : 0 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@164 -- # : 0 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@167 -- # : 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@169 -- # : 0 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 
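Each ": 0" / ": 1" line in the block above is the xtrace of a default-then-export idiom: the flag keeps whatever value the environment already provided and otherwise falls back to a default before being exported for the rest of the run. A sketch of the assumed shape of that pattern, using one flag that appears in the trace:

: "${SPDK_TEST_NVME:=0}"    # keep the caller's value if it was already set, otherwise default to 0
export SPDK_TEST_NVME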
00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@171 -- # : 0 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@193 -- # 
ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@200 -- # cat 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@253 -- # export QEMU_BIN= 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@253 -- # QEMU_BIN= 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@254 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@263 -- # export valgrind= 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@263 -- # valgrind= 00:40:06.627 01:06:29 
reactor_set_interrupt -- common/autotest_common.sh@269 -- # uname -s 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@279 -- # MAKE=make 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@299 -- # TEST_MODE= 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@318 -- # [[ -z 164614 ]] 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@318 -- # kill -0 164614 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@331 -- # local mount target_dir 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.OKeUVq 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.OKeUVq/tests/interrupt /tmp/spdk.OKeUVq 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@327 -- # df -T 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:40:06.627 01:06:29 
reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=1248956416 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253683200 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4726784 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda1 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=9917575168 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=20616794112 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=10682441728 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:40:06.627 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=6263693312 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=6268403712 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=5242880 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=5242880 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda15 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=103061504 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=109395968 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=6334464 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=1253675008 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@362 -- # 
sizes["$mount"]=1253679104 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_3/ubuntu2204-libvirt/output 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=95700729856 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4002050048 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:40:06.628 * Looking for test storage... 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@368 -- # local target_space new_size 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@372 -- # mount=/ 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@374 -- # target_space=9917575168 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ ext4 == tmpfs ]] 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ ext4 == ramfs ]] 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@381 -- # new_size=12897034240 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:40:06.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@389 -- # return 0 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # set -o errtrace 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; 
print_backtrace >&2' ERR 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@1685 -- # true 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@1687 -- # xtrace_fd 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:40:06.628 01:06:29 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:40:06.628 01:06:29 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:06.628 01:06:29 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:40:06.628 01:06:29 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:40:06.628 01:06:29 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:40:06.628 01:06:29 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:40:06.628 01:06:29 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:40:06.628 01:06:29 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:40:06.628 01:06:29 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:40:06.628 01:06:29 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:40:06.628 01:06:29 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:06.628 01:06:29 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:40:06.628 01:06:29 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=164657 00:40:06.628 01:06:29 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:40:06.628 01:06:29 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 164657 /var/tmp/spdk.sock 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@829 -- # '[' -z 164657 ']' 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@833 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:06.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:06.628 01:06:29 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:06.628 01:06:29 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:40:06.887 [2024-07-25 01:06:29.304845] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:40:06.887 [2024-07-25 01:06:29.305119] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164657 ] 00:40:06.887 [2024-07-25 01:06:29.498588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:07.145 [2024-07-25 01:06:29.699380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:07.145 [2024-07-25 01:06:29.699510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:07.145 [2024-07-25 01:06:29.699513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:40:07.404 [2024-07-25 01:06:29.983044] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
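At this point interrupt_tgt has been launched on cpumask 0x07 and the script blocks in waitforlisten until the target answers on /var/tmp/spdk.sock, which is when the "Waiting for process..." message above is printed. Roughly what that wait looks like, as a sketch rather than the verbatim helper (the retry count and sleep interval are assumptions):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for ((i = 100; i != 0; i--)); do
        kill -0 "$intr_tgt_pid" 2>/dev/null || exit 1                          # target died before it could listen
        "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break   # socket is up and answering RPCs
        sleep 0.1
done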
00:40:07.663 01:06:30 reactor_set_interrupt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:07.663 01:06:30 reactor_set_interrupt -- common/autotest_common.sh@862 -- # return 0 00:40:07.663 01:06:30 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:40:07.663 01:06:30 reactor_set_interrupt -- interrupt/common.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:07.922 Malloc0 00:40:07.922 Malloc1 00:40:07.922 Malloc2 00:40:07.922 01:06:30 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:40:07.922 01:06:30 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:40:07.922 01:06:30 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:40:07.922 01:06:30 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:40:08.181 5000+0 records in 00:40:08.181 5000+0 records out 00:40:08.181 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0331995 s, 308 MB/s 00:40:08.181 01:06:30 reactor_set_interrupt -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:40:08.181 AIO0 00:40:08.181 01:06:30 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 164657 00:40:08.181 01:06:30 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 164657 without_thd 00:40:08.181 01:06:30 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=164657 00:40:08.181 01:06:30 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:40:08.181 01:06:30 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:40:08.181 01:06:30 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:40:08.181 01:06:30 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:40:08.181 01:06:30 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:40:08.181 01:06:30 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:40:08.181 01:06:30 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:40:08.181 01:06:30 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:40:08.181 01:06:30 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:40:08.440 01:06:31 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:40:08.440 01:06:31 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:40:08.440 01:06:31 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:40:08.440 01:06:31 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:40:08.440 01:06:31 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:40:08.440 01:06:31 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:40:08.440 01:06:31 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:40:08.440 01:06:31 reactor_set_interrupt -- 
interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:40:08.440 01:06:31 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:40:08.698 01:06:31 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:40:08.957 01:06:31 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:40:08.957 spdk_thread ids are 1 on reactor0. 00:40:08.957 01:06:31 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:40:08.957 01:06:31 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:40:08.957 01:06:31 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 164657 0 00:40:08.957 01:06:31 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 164657 0 idle 00:40:08.957 01:06:31 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=164657 00:40:08.957 01:06:31 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:08.957 01:06:31 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:08.957 01:06:31 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:40:08.957 01:06:31 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:40:08.957 01:06:31 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:40:08.957 01:06:31 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:40:08.957 01:06:31 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:40:08.957 01:06:31 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 164657 -w 256 00:40:08.957 01:06:31 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:40:08.957 01:06:31 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 164657 root 20 0 20.1t 151812 31800 S 0.0 1.2 0:00.77 reactor_0' 00:40:08.957 01:06:31 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:40:08.957 01:06:31 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 164657 root 20 0 20.1t 151812 31800 S 0.0 1.2 0:00.77 reactor_0 00:40:08.957 01:06:31 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:40:08.957 01:06:31 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:40:08.957 01:06:31 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:40:08.957 01:06:31 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:40:08.957 01:06:31 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:40:08.957 01:06:31 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:40:08.957 01:06:31 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:40:08.957 01:06:31 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:40:08.957 01:06:31 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 164657 1 00:40:08.957 01:06:31 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 164657 1 idle 00:40:08.957 01:06:31 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=164657 00:40:08.957 01:06:31 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:08.958 01:06:31 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:08.958 01:06:31 reactor_set_interrupt -- 
interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:40:08.958 01:06:31 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:40:08.958 01:06:31 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:40:08.958 01:06:31 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:40:08.958 01:06:31 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:40:08.958 01:06:31 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 164657 -w 256 00:40:08.958 01:06:31 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:40:09.217 01:06:31 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 164660 root 20 0 20.1t 151812 31800 S 0.0 1.2 0:00.00 reactor_1' 00:40:09.217 01:06:31 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 164660 root 20 0 20.1t 151812 31800 S 0.0 1.2 0:00.00 reactor_1 00:40:09.217 01:06:31 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:40:09.217 01:06:31 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:40:09.217 01:06:31 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:40:09.217 01:06:31 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:40:09.217 01:06:31 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:40:09.217 01:06:31 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:40:09.217 01:06:31 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:40:09.217 01:06:31 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:40:09.217 01:06:31 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:40:09.217 01:06:31 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 164657 2 00:40:09.217 01:06:31 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 164657 2 idle 00:40:09.217 01:06:31 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=164657 00:40:09.217 01:06:31 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:40:09.217 01:06:31 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:09.217 01:06:31 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:40:09.217 01:06:31 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:40:09.217 01:06:31 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:40:09.217 01:06:31 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:40:09.217 01:06:31 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:40:09.217 01:06:31 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:40:09.217 01:06:31 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 164657 -w 256 00:40:09.476 01:06:31 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 164661 root 20 0 20.1t 151812 31800 S 0.0 1.2 0:00.00 reactor_2' 00:40:09.476 01:06:31 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 164661 root 20 0 20.1t 151812 31800 S 0.0 1.2 0:00.00 reactor_2 00:40:09.476 01:06:31 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:40:09.476 01:06:31 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:40:09.476 01:06:31 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:40:09.476 01:06:31 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:40:09.476 
01:06:31 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:40:09.476 01:06:31 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:40:09.476 01:06:31 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:40:09.476 01:06:31 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:40:09.476 01:06:31 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:40:09.476 01:06:31 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:40:09.476 01:06:31 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:40:09.476 [2024-07-25 01:06:32.117221] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:09.735 01:06:32 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:40:09.735 [2024-07-25 01:06:32.309007] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:40:09.735 [2024-07-25 01:06:32.309760] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:40:09.735 01:06:32 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:40:09.994 [2024-07-25 01:06:32.576848] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:40:09.994 [2024-07-25 01:06:32.577468] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:40:09.994 01:06:32 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:40:09.994 01:06:32 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 164657 0 00:40:09.994 01:06:32 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 164657 0 busy 00:40:09.994 01:06:32 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=164657 00:40:09.994 01:06:32 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:09.994 01:06:32 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:09.994 01:06:32 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:40:09.994 01:06:32 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:40:09.994 01:06:32 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:40:09.994 01:06:32 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:40:09.994 01:06:32 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 164657 -w 256 00:40:09.994 01:06:32 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:40:10.253 01:06:32 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 164657 root 20 0 20.1t 151916 31800 R 99.9 1.2 0:01.22 reactor_0' 00:40:10.253 01:06:32 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 164657 root 20 0 20.1t 151916 31800 R 99.9 1.2 0:01.22 reactor_0 00:40:10.253 01:06:32 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:40:10.253 01:06:32 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:40:10.253 01:06:32 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # cpu_rate=99.9 00:40:10.253 01:06:32 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:40:10.253 01:06:32 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:40:10.253 01:06:32 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:40:10.253 01:06:32 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:40:10.253 01:06:32 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:40:10.253 01:06:32 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:40:10.253 01:06:32 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 164657 2 00:40:10.253 01:06:32 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 164657 2 busy 00:40:10.253 01:06:32 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=164657 00:40:10.253 01:06:32 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:40:10.253 01:06:32 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:10.253 01:06:32 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:40:10.253 01:06:32 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:40:10.253 01:06:32 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:40:10.253 01:06:32 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:40:10.253 01:06:32 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 164657 -w 256 00:40:10.253 01:06:32 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:40:10.521 01:06:32 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 164661 root 20 0 20.1t 151916 31800 R 99.9 1.2 0:00.35 reactor_2' 00:40:10.521 01:06:32 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 164661 root 20 0 20.1t 151916 31800 R 99.9 1.2 0:00.35 reactor_2 00:40:10.521 01:06:32 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:40:10.521 01:06:32 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:40:10.521 01:06:32 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:40:10.521 01:06:32 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:40:10.521 01:06:32 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:40:10.521 01:06:32 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:40:10.521 01:06:32 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:40:10.521 01:06:32 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:40:10.521 01:06:32 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:40:10.793 [2024-07-25 01:06:33.180916] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
00:40:10.793 [2024-07-25 01:06:33.181275] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:40:10.793 01:06:33 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:40:10.793 01:06:33 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 164657 2 00:40:10.793 01:06:33 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 164657 2 idle 00:40:10.793 01:06:33 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=164657 00:40:10.793 01:06:33 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:40:10.793 01:06:33 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:10.793 01:06:33 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:40:10.793 01:06:33 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:40:10.793 01:06:33 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:40:10.793 01:06:33 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:40:10.793 01:06:33 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:40:10.793 01:06:33 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 164657 -w 256 00:40:10.793 01:06:33 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:40:10.793 01:06:33 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 164661 root 20 0 20.1t 151980 31800 S 0.0 1.2 0:00.60 reactor_2' 00:40:10.793 01:06:33 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 164661 root 20 0 20.1t 151980 31800 S 0.0 1.2 0:00.60 reactor_2 00:40:10.793 01:06:33 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:40:10.793 01:06:33 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:40:10.793 01:06:33 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:40:10.793 01:06:33 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:40:10.793 01:06:33 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:40:10.793 01:06:33 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:40:10.793 01:06:33 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:40:10.793 01:06:33 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:40:10.793 01:06:33 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:40:11.051 [2024-07-25 01:06:33.556727] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:40:11.051 [2024-07-25 01:06:33.557070] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:40:11.051 01:06:33 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:40:11.051 01:06:33 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:40:11.051 01:06:33 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:40:11.308 [2024-07-25 01:06:33.745393] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:40:11.308 01:06:33 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 164657 0 00:40:11.308 01:06:33 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 164657 0 idle 00:40:11.308 01:06:33 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=164657 00:40:11.308 01:06:33 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:11.308 01:06:33 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:11.308 01:06:33 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:40:11.308 01:06:33 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:40:11.308 01:06:33 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:40:11.308 01:06:33 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:40:11.308 01:06:33 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:40:11.308 01:06:33 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:40:11.308 01:06:33 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 164657 -w 256 00:40:11.308 01:06:33 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 164657 root 20 0 20.1t 152068 31800 S 0.0 1.2 0:02.02 reactor_0' 00:40:11.308 01:06:33 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 164657 root 20 0 20.1t 152068 31800 S 0.0 1.2 0:02.02 reactor_0 00:40:11.308 01:06:33 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:40:11.308 01:06:33 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:40:11.308 01:06:33 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:40:11.308 01:06:33 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:40:11.308 01:06:33 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:40:11.308 01:06:33 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:40:11.308 01:06:33 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:40:11.308 01:06:33 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:40:11.308 01:06:33 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:40:11.308 01:06:33 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:40:11.308 01:06:33 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:40:11.308 01:06:33 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 164657 00:40:11.308 01:06:33 reactor_set_interrupt -- common/autotest_common.sh@948 -- # '[' -z 164657 ']' 00:40:11.308 01:06:33 reactor_set_interrupt -- common/autotest_common.sh@952 -- # kill -0 164657 00:40:11.308 01:06:33 reactor_set_interrupt -- common/autotest_common.sh@953 -- # uname 00:40:11.308 01:06:33 reactor_set_interrupt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:11.308 01:06:33 reactor_set_interrupt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 164657 00:40:11.565 01:06:33 reactor_set_interrupt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:11.565 01:06:33 reactor_set_interrupt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:11.565 killing process with pid 164657 00:40:11.565 01:06:33 reactor_set_interrupt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 164657' 00:40:11.565 01:06:33 reactor_set_interrupt -- 
common/autotest_common.sh@967 -- # kill 164657 00:40:11.565 01:06:33 reactor_set_interrupt -- common/autotest_common.sh@972 -- # wait 164657 00:40:12.940 01:06:35 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:40:12.940 01:06:35 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:40:12.940 01:06:35 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:40:12.940 01:06:35 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:12.940 01:06:35 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:40:12.940 01:06:35 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=164812 00:40:12.940 01:06:35 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:40:12.940 01:06:35 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:40:12.940 01:06:35 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 164812 /var/tmp/spdk.sock 00:40:12.940 01:06:35 reactor_set_interrupt -- common/autotest_common.sh@829 -- # '[' -z 164812 ']' 00:40:12.940 01:06:35 reactor_set_interrupt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:12.940 01:06:35 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:12.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:12.940 01:06:35 reactor_set_interrupt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:12.940 01:06:35 reactor_set_interrupt -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:12.940 01:06:35 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:12.940 [2024-07-25 01:06:35.545643] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:40:12.940 [2024-07-25 01:06:35.545868] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164812 ] 00:40:13.198 [2024-07-25 01:06:35.733040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:13.457 [2024-07-25 01:06:35.919498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:13.457 [2024-07-25 01:06:35.919685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:13.457 [2024-07-25 01:06:35.919689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:40:13.715 [2024-07-25 01:06:36.198641] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
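The reactor_get_thread_ids steps that follow query the freshly started target over JSON-RPC and filter the per-thread stats by reactor cpumask with jq. A hedged sketch of that pipeline, using the same jq filter shown in the trace (the rpc.py path and the decimal cpumask value are taken from the log; treating cpumask as a decimal string in the thread_get_stats output is an assumption based on the comparison seen above):

# Sketch: list the spdk_thread ids pinned to a given reactor cpumask.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
reactor_cpumask=1   # decimal form of 0x1, i.e. reactor 0

"$rpc" thread_get_stats \
    | jq --arg reactor_cpumask "$reactor_cpumask" \
         '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'

For cpumask 1 this prints the app_thread id (1 in the run below); for a reactor with no named threads, such as cpumask 4 here, it prints nothing, which is why the corresponding echo lines in the trace are empty.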
00:40:13.973 01:06:36 reactor_set_interrupt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:13.973 01:06:36 reactor_set_interrupt -- common/autotest_common.sh@862 -- # return 0 00:40:13.973 01:06:36 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:40:13.973 01:06:36 reactor_set_interrupt -- interrupt/common.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:14.232 Malloc0 00:40:14.232 Malloc1 00:40:14.232 Malloc2 00:40:14.232 01:06:36 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:40:14.232 01:06:36 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:40:14.232 01:06:36 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:40:14.232 01:06:36 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:40:14.232 5000+0 records in 00:40:14.232 5000+0 records out 00:40:14.232 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0347555 s, 295 MB/s 00:40:14.232 01:06:36 reactor_set_interrupt -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:40:14.491 AIO0 00:40:14.491 01:06:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 164812 00:40:14.491 01:06:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 164812 00:40:14.491 01:06:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=164812 00:40:14.491 01:06:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:40:14.491 01:06:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:40:14.491 01:06:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:40:14.491 01:06:37 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:40:14.491 01:06:37 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:40:14.491 01:06:37 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:40:14.491 01:06:37 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:40:14.491 01:06:37 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:40:14.491 01:06:37 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:40:14.749 01:06:37 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:40:14.749 01:06:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:40:14.749 01:06:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:40:14.749 01:06:37 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:40:14.749 01:06:37 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:40:14.749 01:06:37 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:40:14.750 01:06:37 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:40:14.750 01:06:37 reactor_set_interrupt -- interrupt/common.sh@62 -- # 
jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:40:14.750 01:06:37 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:40:15.008 01:06:37 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:40:15.008 01:06:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:40:15.008 01:06:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:40:15.008 spdk_thread ids are 1 on reactor0. 00:40:15.008 01:06:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:40:15.008 01:06:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 164812 0 00:40:15.008 01:06:37 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 164812 0 idle 00:40:15.008 01:06:37 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=164812 00:40:15.008 01:06:37 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:15.008 01:06:37 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:15.008 01:06:37 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:40:15.008 01:06:37 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:40:15.008 01:06:37 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:40:15.008 01:06:37 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:40:15.008 01:06:37 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:40:15.008 01:06:37 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 164812 -w 256 00:40:15.008 01:06:37 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 164812 root 20 0 20.1t 151912 31904 S 0.0 1.2 0:00.75 reactor_0' 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 164812 root 20 0 20.1t 151912 31904 S 0.0 1.2 0:00.75 reactor_0 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 164812 1 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 164812 1 idle 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=164812 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle 
!= \b\u\s\y ]] 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 164812 -w 256 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 164816 root 20 0 20.1t 151912 31904 S 0.0 1.2 0:00.00 reactor_1' 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 164816 root 20 0 20.1t 151912 31904 S 0.0 1.2 0:00.00 reactor_1 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 164812 2 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 164812 2 idle 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=164812 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 164812 -w 256 00:40:15.267 01:06:37 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:40:15.526 01:06:38 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 164817 root 20 0 20.1t 151912 31904 S 0.0 1.2 0:00.00 reactor_2' 00:40:15.526 01:06:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 164817 root 20 0 20.1t 151912 31904 S 0.0 1.2 0:00.00 reactor_2 00:40:15.526 01:06:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:40:15.526 01:06:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:40:15.526 01:06:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:40:15.526 01:06:38 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:40:15.526 01:06:38 reactor_set_interrupt -- 
interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:40:15.526 01:06:38 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:40:15.526 01:06:38 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:40:15.526 01:06:38 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:40:15.526 01:06:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:40:15.526 01:06:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:40:15.784 [2024-07-25 01:06:38.321026] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:40:15.784 [2024-07-25 01:06:38.321322] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:40:15.784 [2024-07-25 01:06:38.321641] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:40:15.784 01:06:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:40:16.043 [2024-07-25 01:06:38.568790] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:40:16.043 [2024-07-25 01:06:38.569367] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:40:16.043 01:06:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:40:16.043 01:06:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 164812 0 00:40:16.043 01:06:38 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 164812 0 busy 00:40:16.043 01:06:38 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=164812 00:40:16.043 01:06:38 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:16.043 01:06:38 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:16.043 01:06:38 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:40:16.043 01:06:38 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:40:16.043 01:06:38 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:40:16.043 01:06:38 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:40:16.043 01:06:38 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 164812 -w 256 00:40:16.043 01:06:38 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 164812 root 20 0 20.1t 152024 31904 R 99.9 1.2 0:01.19 reactor_0' 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 164812 root 20 0 20.1t 152024 31904 R 99.9 1.2 0:01.19 reactor_0 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:40:16.302 
01:06:38 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 164812 2 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 164812 2 busy 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=164812 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 164812 -w 256 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 164817 root 20 0 20.1t 152024 31904 R 93.8 1.2 0:00.36 reactor_2' 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 164817 root 20 0 20.1t 152024 31904 R 93.8 1.2 0:00.36 reactor_2 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=93.8 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=93 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 93 -lt 70 ]] 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:40:16.302 01:06:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:40:16.561 [2024-07-25 01:06:39.160973] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
00:40:16.561 [2024-07-25 01:06:39.161409] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:40:16.561 01:06:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:40:16.561 01:06:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 164812 2 00:40:16.561 01:06:39 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 164812 2 idle 00:40:16.561 01:06:39 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=164812 00:40:16.561 01:06:39 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:40:16.561 01:06:39 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:16.561 01:06:39 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:40:16.561 01:06:39 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:40:16.561 01:06:39 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:40:16.561 01:06:39 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:40:16.561 01:06:39 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:40:16.561 01:06:39 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 164812 -w 256 00:40:16.561 01:06:39 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:40:16.820 01:06:39 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 164817 root 20 0 20.1t 152072 31904 S 0.0 1.2 0:00.59 reactor_2' 00:40:16.820 01:06:39 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 164817 root 20 0 20.1t 152072 31904 S 0.0 1.2 0:00.59 reactor_2 00:40:16.820 01:06:39 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:40:16.820 01:06:39 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:40:16.820 01:06:39 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:40:16.820 01:06:39 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:40:16.820 01:06:39 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:40:16.820 01:06:39 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:40:16.820 01:06:39 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:40:16.820 01:06:39 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:40:16.820 01:06:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:40:17.079 [2024-07-25 01:06:39.597019] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:40:17.079 [2024-07-25 01:06:39.597637] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 
00:40:17.079 [2024-07-25 01:06:39.597782] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:40:17.079 01:06:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:40:17.079 01:06:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 164812 0 00:40:17.079 01:06:39 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 164812 0 idle 00:40:17.079 01:06:39 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=164812 00:40:17.079 01:06:39 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:17.079 01:06:39 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:17.079 01:06:39 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:40:17.079 01:06:39 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:40:17.079 01:06:39 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:40:17.079 01:06:39 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:40:17.079 01:06:39 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:40:17.079 01:06:39 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 164812 -w 256 00:40:17.079 01:06:39 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:40:17.339 01:06:39 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 164812 root 20 0 20.1t 152100 31904 S 6.7 1.2 0:02.04 reactor_0' 00:40:17.339 01:06:39 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 164812 root 20 0 20.1t 152100 31904 S 6.7 1.2 0:02.04 reactor_0 00:40:17.339 01:06:39 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:40:17.339 01:06:39 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:40:17.339 01:06:39 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=6.7 00:40:17.339 01:06:39 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=6 00:40:17.339 01:06:39 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:40:17.339 01:06:39 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:40:17.339 01:06:39 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 6 -gt 30 ]] 00:40:17.339 01:06:39 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:40:17.339 01:06:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:40:17.339 01:06:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:40:17.339 01:06:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:40:17.339 01:06:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 164812 00:40:17.339 01:06:39 reactor_set_interrupt -- common/autotest_common.sh@948 -- # '[' -z 164812 ']' 00:40:17.339 01:06:39 reactor_set_interrupt -- common/autotest_common.sh@952 -- # kill -0 164812 00:40:17.339 01:06:39 reactor_set_interrupt -- common/autotest_common.sh@953 -- # uname 00:40:17.339 01:06:39 reactor_set_interrupt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:17.339 01:06:39 reactor_set_interrupt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 164812 00:40:17.339 01:06:39 reactor_set_interrupt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:17.339 01:06:39 reactor_set_interrupt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
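The killprocess trace around this point reduces to a guard-then-kill pattern: confirm the pid is still alive, check its process name so a sudo wrapper is never killed directly, then kill and reap it. A simplified sketch (pid hardcoded for illustration; the real helper handles the sudo case differently rather than bailing out):

# Sketch of the killprocess pattern traced here.
pid=164812                                  # the interrupt_tgt pid in this run
kill -0 "$pid"                              # non-zero exit if the process is already gone
[[ $(uname) == Linux ]] && process_name=$(ps --no-headers -o comm= "$pid")
[[ $process_name == sudo ]] && exit 1       # simplified: never kill a sudo parent directly
echo "killing process with pid $pid"
kill "$pid"
wait "$pid"                                 # reap the child so the test exits cleanly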
00:40:17.339 01:06:39 reactor_set_interrupt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 164812' 00:40:17.339 killing process with pid 164812 00:40:17.339 01:06:39 reactor_set_interrupt -- common/autotest_common.sh@967 -- # kill 164812 00:40:17.339 01:06:39 reactor_set_interrupt -- common/autotest_common.sh@972 -- # wait 164812 00:40:18.766 01:06:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:40:18.766 01:06:41 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:40:18.766 ************************************ 00:40:18.766 END TEST reactor_set_interrupt 00:40:18.766 ************************************ 00:40:18.766 00:40:18.766 real 0m12.385s 00:40:18.766 user 0m12.714s 00:40:18.766 sys 0m1.870s 00:40:18.766 01:06:41 reactor_set_interrupt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:18.766 01:06:41 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:18.766 01:06:41 -- spdk/autotest.sh@194 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:40:18.766 01:06:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:18.766 01:06:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:18.766 01:06:41 -- common/autotest_common.sh@10 -- # set +x 00:40:19.028 ************************************ 00:40:19.028 START TEST reap_unregistered_poller 00:40:19.028 ************************************ 00:40:19.028 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:40:19.028 * Looking for test storage... 00:40:19.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:40:19.028 01:06:41 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:40:19.028 01:06:41 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:40:19.028 01:06:41 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:40:19.028 01:06:41 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:40:19.028 01:06:41 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
00:40:19.028 01:06:41 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:40:19.028 01:06:41 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:40:19.028 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:40:19.028 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:40:19.028 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:40:19.028 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:40:19.028 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:40:19.028 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:40:19.028 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:40:19.028 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_CET=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 
00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:40:19.028 01:06:41 
reap_unregistered_poller -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_FC=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_RAID5F=y 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:40:19.028 01:06:41 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:40:19.029 01:06:41 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:40:19.029 01:06:41 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_URING=n 00:40:19.029 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:40:19.029 01:06:41 reap_unregistered_poller -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:40:19.029 01:06:41 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:40:19.029 01:06:41 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:40:19.029 01:06:41 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:40:19.029 01:06:41 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:40:19.029 01:06:41 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:40:19.029 01:06:41 reap_unregistered_poller -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:40:19.029 01:06:41 
reap_unregistered_poller -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:40:19.029 01:06:41 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:40:19.029 01:06:41 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:40:19.029 01:06:41 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:40:19.029 01:06:41 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:40:19.029 01:06:41 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:40:19.029 01:06:41 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:40:19.029 01:06:41 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:40:19.029 #define SPDK_CONFIG_H 00:40:19.029 #define SPDK_CONFIG_APPS 1 00:40:19.029 #define SPDK_CONFIG_ARCH native 00:40:19.029 #define SPDK_CONFIG_ASAN 1 00:40:19.029 #undef SPDK_CONFIG_AVAHI 00:40:19.029 #undef SPDK_CONFIG_CET 00:40:19.029 #define SPDK_CONFIG_COVERAGE 1 00:40:19.029 #define SPDK_CONFIG_CROSS_PREFIX 00:40:19.029 #undef SPDK_CONFIG_CRYPTO 00:40:19.029 #undef SPDK_CONFIG_CRYPTO_MLX5 00:40:19.029 #undef SPDK_CONFIG_CUSTOMOCF 00:40:19.029 #undef SPDK_CONFIG_DAOS 00:40:19.029 #define SPDK_CONFIG_DAOS_DIR 00:40:19.029 #define SPDK_CONFIG_DEBUG 1 00:40:19.029 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:40:19.029 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:40:19.029 #define SPDK_CONFIG_DPDK_INC_DIR 00:40:19.029 #define SPDK_CONFIG_DPDK_LIB_DIR 00:40:19.029 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:40:19.029 #undef SPDK_CONFIG_DPDK_UADK 00:40:19.029 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:40:19.029 #define SPDK_CONFIG_EXAMPLES 1 00:40:19.029 #undef SPDK_CONFIG_FC 00:40:19.029 #define SPDK_CONFIG_FC_PATH 00:40:19.029 #define SPDK_CONFIG_FIO_PLUGIN 1 00:40:19.029 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:40:19.029 #undef SPDK_CONFIG_FUSE 00:40:19.029 #undef SPDK_CONFIG_FUZZER 00:40:19.029 #define SPDK_CONFIG_FUZZER_LIB 00:40:19.029 #undef SPDK_CONFIG_GOLANG 00:40:19.029 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:40:19.029 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:40:19.029 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:40:19.029 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:40:19.029 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:40:19.029 #undef SPDK_CONFIG_HAVE_LIBBSD 00:40:19.029 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:40:19.029 #define SPDK_CONFIG_IDXD 1 00:40:19.029 #undef SPDK_CONFIG_IDXD_KERNEL 00:40:19.029 #undef SPDK_CONFIG_IPSEC_MB 00:40:19.029 #define SPDK_CONFIG_IPSEC_MB_DIR 00:40:19.029 #define SPDK_CONFIG_ISAL 1 00:40:19.029 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:40:19.029 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:40:19.029 #define SPDK_CONFIG_LIBDIR 00:40:19.029 #undef SPDK_CONFIG_LTO 00:40:19.029 #define SPDK_CONFIG_MAX_LCORES 128 00:40:19.029 #define SPDK_CONFIG_NVME_CUSE 1 00:40:19.029 #undef SPDK_CONFIG_OCF 00:40:19.029 #define SPDK_CONFIG_OCF_PATH 00:40:19.029 #define SPDK_CONFIG_OPENSSL_PATH 00:40:19.029 #undef SPDK_CONFIG_PGO_CAPTURE 00:40:19.029 #define SPDK_CONFIG_PGO_DIR 00:40:19.029 #undef SPDK_CONFIG_PGO_USE 00:40:19.029 #define SPDK_CONFIG_PREFIX /usr/local 00:40:19.029 #define SPDK_CONFIG_RAID5F 1 00:40:19.029 #undef SPDK_CONFIG_RBD 00:40:19.029 #define SPDK_CONFIG_RDMA 1 00:40:19.029 #define 
SPDK_CONFIG_RDMA_PROV verbs 00:40:19.029 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:40:19.029 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:40:19.029 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:40:19.029 #undef SPDK_CONFIG_SHARED 00:40:19.029 #undef SPDK_CONFIG_SMA 00:40:19.029 #define SPDK_CONFIG_TESTS 1 00:40:19.029 #undef SPDK_CONFIG_TSAN 00:40:19.029 #undef SPDK_CONFIG_UBLK 00:40:19.029 #define SPDK_CONFIG_UBSAN 1 00:40:19.029 #define SPDK_CONFIG_UNIT_TESTS 1 00:40:19.029 #undef SPDK_CONFIG_URING 00:40:19.029 #define SPDK_CONFIG_URING_PATH 00:40:19.029 #undef SPDK_CONFIG_URING_ZNS 00:40:19.029 #undef SPDK_CONFIG_USDT 00:40:19.029 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:40:19.029 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:40:19.029 #undef SPDK_CONFIG_VFIO_USER 00:40:19.029 #define SPDK_CONFIG_VFIO_USER_DIR 00:40:19.029 #define SPDK_CONFIG_VHOST 1 00:40:19.029 #define SPDK_CONFIG_VIRTIO 1 00:40:19.029 #undef SPDK_CONFIG_VTUNE 00:40:19.029 #define SPDK_CONFIG_VTUNE_DIR 00:40:19.029 #define SPDK_CONFIG_WERROR 1 00:40:19.029 #define SPDK_CONFIG_WPDK_DIR 00:40:19.029 #undef SPDK_CONFIG_XNVME 00:40:19.029 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:40:19.029 01:06:41 reap_unregistered_poller -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:40:19.029 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:19.029 01:06:41 reap_unregistered_poller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:19.029 01:06:41 reap_unregistered_poller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:19.029 01:06:41 reap_unregistered_poller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:19.029 01:06:41 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:19.029 01:06:41 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:19.029 01:06:41 reap_unregistered_poller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:19.029 01:06:41 reap_unregistered_poller -- paths/export.sh@5 -- # export PATH 00:40:19.029 01:06:41 reap_unregistered_poller -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:19.029 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:40:19.029 01:06:41 reap_unregistered_poller -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:40:19.029 01:06:41 reap_unregistered_poller -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:40:19.029 01:06:41 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:40:19.029 01:06:41 reap_unregistered_poller -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:40:19.029 01:06:41 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:40:19.029 01:06:41 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:40:19.029 01:06:41 reap_unregistered_poller -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:40:19.029 01:06:41 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:40:19.029 01:06:41 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:40:19.029 01:06:41 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:40:19.029 01:06:41 reap_unregistered_poller -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:40:19.029 01:06:41 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:40:19.029 01:06:41 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:40:19.029 01:06:41 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:40:19.029 01:06:41 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:40:19.029 01:06:41 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:40:19.029 01:06:41 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]= 00:40:19.029 01:06:41 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:40:19.029 01:06:41 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:40:19.029 01:06:41 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:40:19.029 01:06:41 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 00:40:19.029 01:06:41 reap_unregistered_poller -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:40:19.029 01:06:41 reap_unregistered_poller -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:40:19.029 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@58 -- # : 0 00:40:19.029 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:40:19.029 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@62 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@64 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@66 -- # : 1 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@68 -- # : 1 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@70 -- # : 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@72 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@74 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@76 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@78 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@80 -- # : 1 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@82 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@84 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@86 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@88 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@90 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@92 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@94 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- 
common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@96 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@98 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@100 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@102 -- # : rdma 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@104 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@106 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@108 -- # : 1 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@110 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@112 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@114 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@116 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@118 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@120 -- # : 1 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@122 -- # : 1 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@124 -- # : 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@126 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@128 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@130 -- # : 0 00:40:19.030 
01:06:41 reap_unregistered_poller -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@132 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@134 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@136 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@138 -- # : 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@140 -- # : true 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@142 -- # : 1 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@144 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@146 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@148 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@150 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@152 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@154 -- # : 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@156 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@158 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@160 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@162 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@164 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@167 -- # : 
00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@169 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@171 -- # : 0 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:40:19.030 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@189 -- # 
PYTHONDONTWRITEBYTECODE=1 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@200 -- # cat 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@253 -- # export QEMU_BIN= 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@253 -- # QEMU_BIN= 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@254 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:40:19.031 01:06:41 reap_unregistered_poller -- 
common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@263 -- # export valgrind= 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@263 -- # valgrind= 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@269 -- # uname -s 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@279 -- # MAKE=make 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@299 -- # TEST_MODE= 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@318 -- # [[ -z 164991 ]] 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@318 -- # kill -0 164991 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@331 -- # local mount target_dir 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.w1xZME 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:40:19.031 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.w1xZME/tests/interrupt /tmp/spdk.w1xZME 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@358 -- # 
requested_size=2214592512 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@327 -- # df -T 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=1248956416 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253683200 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4726784 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda1 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=9917530112 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=20616794112 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=10682486784 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=6263693312 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=6268403712 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=5242880 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=5242880 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda15 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=103061504 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=109395968 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@363 -- 
# uses["$mount"]=6334464 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=1253675008 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253679104 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_3/ubuntu2204-libvirt/output 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:40:19.291 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=95700537344 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4002242560 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:40:19.292 * Looking for test storage... 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@368 -- # local target_space new_size 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@372 -- # mount=/ 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@374 -- # target_space=9917530112 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ ext4 == tmpfs ]] 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ ext4 == ramfs ]] 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@381 -- # new_size=12897079296 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 
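For readability, the free-space check that the trace above just walked through reduces to the arithmetic below. This is a minimal sketch, not the script's own code: the byte values are copied from this run's df scan, the variable names are illustrative, and the 95% threshold and the accept/reject structure are inferred from the visible test expressions.

```bash
# Space check from this run, reduced to its arithmetic (byte values copied
# from the df scan above; variable names are illustrative, not the script's).
requested_size=2214592512      # requested 2 GiB of test data plus 64 MiB
avail=9917530112               # free space on / (ext4, /dev/vda1)
used=10682486784               # space already in use on /
size=20616794112               # total filesystem size

if (( avail >= requested_size )); then
    # Projected usage if the test writes its full data set.
    new_size=$(( used + requested_size ))        # 12897079296 in this run
    # Accept the candidate directory only if that stays at or below 95% full.
    if (( new_size * 100 / size <= 95 )); then   # ~62% here, so it passes
        export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt
    fi
fi
```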
00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:40:19.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@389 -- # return 0 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # set -o errtrace 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@1685 -- # true 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@1687 -- # xtrace_fd 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:40:19.292 01:06:41 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:40:19.292 01:06:41 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:19.292 01:06:41 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:40:19.292 01:06:41 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:40:19.292 01:06:41 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:40:19.292 01:06:41 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:40:19.292 01:06:41 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:40:19.292 01:06:41 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:40:19.292 01:06:41 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:40:19.292 01:06:41 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:40:19.292 01:06:41 reap_unregistered_poller -- 
interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:19.292 01:06:41 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:40:19.292 01:06:41 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=165040 00:40:19.292 01:06:41 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:40:19.292 01:06:41 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:40:19.292 01:06:41 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 165040 /var/tmp/spdk.sock 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@829 -- # '[' -z 165040 ']' 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:19.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:19.292 01:06:41 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:40:19.292 [2024-07-25 01:06:41.780265] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:40:19.292 [2024-07-25 01:06:41.780679] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165040 ] 00:40:19.552 [2024-07-25 01:06:41.976065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:19.552 [2024-07-25 01:06:42.178368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:19.552 [2024-07-25 01:06:42.178513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:19.552 [2024-07-25 01:06:42.178513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:40:19.811 [2024-07-25 01:06:42.460810] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
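With interrupt_tgt now up in interrupt mode and listening on /var/tmp/spdk.sock, the trace that follows asks it for its registered pollers over JSON-RPC and pulls the names out with jq. A minimal standalone sketch of that query pattern is shown here, assuming the socket and rpc.py paths from this run; the test itself goes through the rpc_cmd wrapper from common.sh rather than invoking rpc.py directly.

```bash
#!/usr/bin/env bash
# Query the running target's pollers, mirroring the thread_get_pollers / jq
# pipeline in the trace below. Assumes the interrupt_tgt started above is
# still listening on /var/tmp/spdk.sock.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock

# First thread ("app_thread") as JSON, the same shape as the capture below.
app_thread=$("$rpc" -s "$sock" thread_get_pollers | jq -r '.threads[0]')

# Collect active and timed poller names, as native_pollers is built below.
pollers=$(jq -r '.active_pollers[].name' <<< "$app_thread")
pollers+=" $(jq -r '.timed_pollers[].name' <<< "$app_thread")"

echo "registered pollers:${pollers}"
```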
00:40:20.070 01:06:42 reap_unregistered_poller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:20.070 01:06:42 reap_unregistered_poller -- common/autotest_common.sh@862 -- # return 0 00:40:20.070 01:06:42 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:40:20.070 01:06:42 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:20.070 01:06:42 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:40:20.070 01:06:42 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:40:20.070 01:06:42 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:20.329 01:06:42 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:40:20.329 "name": "app_thread", 00:40:20.329 "id": 1, 00:40:20.329 "active_pollers": [], 00:40:20.329 "timed_pollers": [ 00:40:20.329 { 00:40:20.329 "name": "rpc_subsystem_poll_servers", 00:40:20.329 "id": 1, 00:40:20.329 "state": "waiting", 00:40:20.329 "run_count": 0, 00:40:20.329 "busy_count": 0, 00:40:20.329 "period_ticks": 8400000 00:40:20.329 } 00:40:20.329 ], 00:40:20.329 "paused_pollers": [] 00:40:20.329 }' 00:40:20.329 01:06:42 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:40:20.329 01:06:42 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:40:20.329 01:06:42 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:40:20.329 01:06:42 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:40:20.329 01:06:42 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:40:20.329 01:06:42 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:40:20.329 01:06:42 reap_unregistered_poller -- interrupt/common.sh@75 -- # uname -s 00:40:20.329 01:06:42 reap_unregistered_poller -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:40:20.329 01:06:42 reap_unregistered_poller -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:40:20.329 5000+0 records in 00:40:20.329 5000+0 records out 00:40:20.329 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0349421 s, 293 MB/s 00:40:20.329 01:06:42 reap_unregistered_poller -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:40:20.587 AIO0 00:40:20.587 01:06:43 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:20.849 01:06:43 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:40:20.849 01:06:43 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:40:20.849 01:06:43 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:40:20.849 01:06:43 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:20.849 01:06:43 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:40:21.109 01:06:43 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:21.109 01:06:43 reap_unregistered_poller -- 
interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:40:21.109 "name": "app_thread", 00:40:21.109 "id": 1, 00:40:21.109 "active_pollers": [], 00:40:21.109 "timed_pollers": [ 00:40:21.109 { 00:40:21.109 "name": "rpc_subsystem_poll_servers", 00:40:21.109 "id": 1, 00:40:21.109 "state": "waiting", 00:40:21.109 "run_count": 0, 00:40:21.109 "busy_count": 0, 00:40:21.109 "period_ticks": 8400000 00:40:21.109 } 00:40:21.109 ], 00:40:21.109 "paused_pollers": [] 00:40:21.109 }' 00:40:21.109 01:06:43 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:40:21.109 01:06:43 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:40:21.109 01:06:43 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:40:21.109 01:06:43 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:40:21.109 01:06:43 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:40:21.109 01:06:43 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:40:21.109 01:06:43 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:40:21.109 01:06:43 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 165040 00:40:21.109 01:06:43 reap_unregistered_poller -- common/autotest_common.sh@948 -- # '[' -z 165040 ']' 00:40:21.109 01:06:43 reap_unregistered_poller -- common/autotest_common.sh@952 -- # kill -0 165040 00:40:21.109 01:06:43 reap_unregistered_poller -- common/autotest_common.sh@953 -- # uname 00:40:21.109 01:06:43 reap_unregistered_poller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:21.109 01:06:43 reap_unregistered_poller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 165040 00:40:21.109 01:06:43 reap_unregistered_poller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:21.109 01:06:43 reap_unregistered_poller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:21.109 01:06:43 reap_unregistered_poller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 165040' 00:40:21.109 killing process with pid 165040 00:40:21.109 01:06:43 reap_unregistered_poller -- common/autotest_common.sh@967 -- # kill 165040 00:40:21.109 01:06:43 reap_unregistered_poller -- common/autotest_common.sh@972 -- # wait 165040 00:40:22.487 01:06:44 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:40:22.487 01:06:45 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:40:22.487 ************************************ 00:40:22.487 END TEST reap_unregistered_poller 00:40:22.487 ************************************ 00:40:22.487 00:40:22.487 real 0m3.593s 00:40:22.487 user 0m2.927s 00:40:22.487 sys 0m0.666s 00:40:22.488 01:06:45 reap_unregistered_poller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:22.488 01:06:45 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:40:22.488 01:06:45 -- spdk/autotest.sh@198 -- # uname -s 00:40:22.488 01:06:45 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:40:22.488 01:06:45 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:40:22.488 01:06:45 -- spdk/autotest.sh@205 -- # 
[[ 0 -eq 0 ]] 00:40:22.488 01:06:45 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:40:22.488 01:06:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:22.488 01:06:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:22.488 01:06:45 -- common/autotest_common.sh@10 -- # set +x 00:40:22.488 ************************************ 00:40:22.488 START TEST spdk_dd 00:40:22.488 ************************************ 00:40:22.488 01:06:45 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:40:22.746 * Looking for test storage... 00:40:22.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:40:22.747 01:06:45 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:22.747 01:06:45 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:22.747 01:06:45 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:22.747 01:06:45 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:22.747 01:06:45 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:22.747 01:06:45 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:22.747 01:06:45 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:22.747 01:06:45 spdk_dd -- paths/export.sh@5 -- # export PATH 00:40:22.747 01:06:45 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:22.747 01:06:45 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:40:23.005 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:40:23.005 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:40:23.941 01:06:46 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:40:23.942 01:06:46 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:40:23.942 01:06:46 spdk_dd -- 
scripts/common.sh@312 -- # [[ -n '' ]] 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@230 -- # local class 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@232 -- # local progif 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@233 -- # class=01 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@15 -- # local i 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@24 -- # return 0 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@325 -- # (( 1 )) 00:40:23.942 01:06:46 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 00:40:23.942 01:06:46 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:40:23.942 01:06:46 spdk_dd -- dd/common.sh@139 -- # local lib 00:40:23.942 01:06:46 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:40:23.942 01:06:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:23.942 01:06:46 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:23.942 01:06:46 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:40:24.202 01:06:46 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.6 == liburing.so.* ]] 00:40:24.202 01:06:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:24.202 01:06:46 spdk_dd -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:40:24.202 01:06:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:24.202 01:06:46 spdk_dd -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:40:24.202 
01:06:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:24.202 01:06:46 spdk_dd -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:40:24.202 01:06:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:24.202 01:06:46 spdk_dd -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:40:24.202 01:06:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:24.202 01:06:46 spdk_dd -- dd/common.sh@143 -- # [[ libssl.so.3 == liburing.so.* ]] 00:40:24.202 01:06:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:24.202 01:06:46 spdk_dd -- dd/common.sh@143 -- # [[ libcrypto.so.3 == liburing.so.* ]] 00:40:24.202 01:06:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:24.202 01:06:46 spdk_dd -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:40:24.202 01:06:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:24.202 01:06:46 spdk_dd -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:40:24.202 01:06:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:24.202 01:06:46 spdk_dd -- dd/common.sh@143 -- # [[ libkeyutils.so.1 == liburing.so.* ]] 00:40:24.202 01:06:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:24.202 01:06:46 spdk_dd -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:40:24.202 01:06:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:24.202 01:06:46 spdk_dd -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:40:24.202 01:06:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:24.202 01:06:46 spdk_dd -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:40:24.202 01:06:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:24.202 01:06:46 spdk_dd -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:40:24.202 01:06:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:24.202 01:06:46 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:40:24.202 01:06:46 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:40:24.202 01:06:46 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:40:24.202 01:06:46 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:24.202 01:06:46 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:40:24.202 ************************************ 00:40:24.202 START TEST spdk_dd_basic_rw 00:40:24.202 ************************************ 00:40:24.202 01:06:46 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:40:24.202 * Looking for test storage... 
00:40:24.202 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:40:24.202 01:06:46 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:24.202 01:06:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:24.202 01:06:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:24.202 01:06:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:24.202 01:06:46 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:24.202 01:06:46 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:24.202 01:06:46 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:24.202 01:06:46 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:40:24.202 01:06:46 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:24.202 01:06:46 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:40:24.202 01:06:46 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:40:24.202 01:06:46 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:40:24.202 01:06:46 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:40:24.202 01:06:46 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:40:24.202 01:06:46 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' 
['traddr']='0000:00:10.0' ['trtype']='pcie') 00:40:24.202 01:06:46 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:40:24.202 01:06:46 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:24.202 01:06:46 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:24.202 01:06:46 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:40:24.202 01:06:46 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:40:24.202 01:06:46 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:40:24.202 01:06:46 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:40:24.771 01:06:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported 
Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational 
Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 107 Data Units Written: 7 Host Read Commands: 2306 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:40:24.771 01:06:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:40:24.772 01:06:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware 
Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset 
Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 107 Data Units Written: 7 Host Read Commands: 2306 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in 
LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:40:24.772 01:06:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:40:24.772 01:06:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:40:24.772 01:06:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:40:24.772 01:06:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:40:24.772 01:06:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:40:24.772 01:06:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:40:24.772 01:06:47 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:40:24.772 01:06:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:24.772 01:06:47 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:24.772 01:06:47 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:40:24.772 01:06:47 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:40:24.772 ************************************ 00:40:24.772 START TEST dd_bs_lt_native_bs 00:40:24.772 ************************************ 00:40:24.772 01:06:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:40:24.772 01:06:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:40:24.772 01:06:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 
--bs=2048 --json /dev/fd/61 00:40:24.772 01:06:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:24.772 01:06:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:24.772 01:06:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:24.772 01:06:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:24.772 01:06:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:24.772 01:06:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:24.772 01:06:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:24.772 01:06:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:40:24.772 01:06:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:40:24.772 { 00:40:24.772 "subsystems": [ 00:40:24.772 { 00:40:24.772 "subsystem": "bdev", 00:40:24.772 "config": [ 00:40:24.772 { 00:40:24.772 "params": { 00:40:24.772 "trtype": "pcie", 00:40:24.772 "traddr": "0000:00:10.0", 00:40:24.772 "name": "Nvme0" 00:40:24.772 }, 00:40:24.772 "method": "bdev_nvme_attach_controller" 00:40:24.772 }, 00:40:24.772 { 00:40:24.772 "method": "bdev_wait_for_examine" 00:40:24.772 } 00:40:24.772 ] 00:40:24.772 } 00:40:24.772 ] 00:40:24.772 } 00:40:24.772 [2024-07-25 01:06:47.277541] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
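The identify dump and the two regex checks above are how the suite derives the drive's native block size before the first copy: dd/common.sh captures spdk_nvme_identify output for the controller at 0000:00:10.0, pulls the active LBA format index (#04) with one pattern and that format's data size (4096) with a second, and basic_rw.sh then runs spdk_dd with --bs=2048 under the NOT wrapper, expecting the copy to be rejected because 2048 is below the 4096-byte native size. A condensed sketch of that flow, assuming the binaries and device from this run (the JSON mirrors the config printed above; it is not the suite's gen_conf helper):

  pci=0000:00:10.0
  dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  id_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify

  id_out=$("$id_bin" -r "trtype:pcie traddr:$pci")

  # Same two patterns as dd/common.sh@129-131: active LBA format index, then its data size.
  re_lbaf='Current LBA Format: *LBA Format #([0-9]+)'
  [[ $id_out =~ $re_lbaf ]] && lbaf=${BASH_REMATCH[1]}
  re_bs="LBA Format #$lbaf: Data Size: *([0-9]+)"
  [[ $id_out =~ $re_bs ]] && native_bs=${BASH_REMATCH[1]}
  echo "native block size: $native_bs"          # 4096 for this QEMU namespace

  conf='{"subsystems":[{"subsystem":"bdev","config":[
        {"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},
         "method":"bdev_nvme_attach_controller"},
        {"method":"bdev_wait_for_examine"}]}]}'

  # dd_bs_lt_native_bs: a --bs below the native size must make spdk_dd fail.
  if "$dd_bin" --if=/dev/zero --ob=Nvme0n1 --bs=2048 --json <(printf '%s' "$conf"); then
      echo "unexpected success: spdk_dd accepted bs < native bs" >&2
  fi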
00:40:24.772 [2024-07-25 01:06:47.277744] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165359 ] 00:40:25.031 [2024-07-25 01:06:47.460819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:25.289 [2024-07-25 01:06:47.725980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:25.548 [2024-07-25 01:06:48.087896] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:40:25.548 [2024-07-25 01:06:48.088001] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:26.496 [2024-07-25 01:06:48.919083] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:40:26.754 01:06:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:40:26.754 01:06:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:26.754 01:06:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:40:26.754 01:06:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:40:26.754 01:06:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:40:26.754 01:06:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:26.754 00:40:26.754 real 0m2.196s 00:40:26.754 user 0m1.885s 00:40:26.754 sys 0m0.268s 00:40:26.754 01:06:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:26.754 01:06:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:40:26.754 ************************************ 00:40:26.754 END TEST dd_bs_lt_native_bs 00:40:26.754 ************************************ 00:40:27.013 01:06:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:40:27.013 01:06:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:40:27.013 01:06:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:27.013 01:06:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:40:27.013 ************************************ 00:40:27.013 START TEST dd_rw 00:40:27.013 ************************************ 00:40:27.013 01:06:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:40:27.013 01:06:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:40:27.013 01:06:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:40:27.013 01:06:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:40:27.013 01:06:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:40:27.013 01:06:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:40:27.013 01:06:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:40:27.013 01:06:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:40:27.013 01:06:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:40:27.013 01:06:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in 
{0..2} 00:40:27.013 01:06:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:40:27.013 01:06:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:40:27.013 01:06:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:40:27.013 01:06:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:40:27.013 01:06:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:40:27.013 01:06:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:40:27.013 01:06:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:40:27.013 01:06:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:40:27.013 01:06:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:27.579 01:06:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:40:27.579 01:06:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:40:27.579 01:06:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:27.579 01:06:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:27.579 { 00:40:27.579 "subsystems": [ 00:40:27.579 { 00:40:27.579 "subsystem": "bdev", 00:40:27.579 "config": [ 00:40:27.579 { 00:40:27.579 "params": { 00:40:27.579 "trtype": "pcie", 00:40:27.579 "traddr": "0000:00:10.0", 00:40:27.579 "name": "Nvme0" 00:40:27.579 }, 00:40:27.579 "method": "bdev_nvme_attach_controller" 00:40:27.579 }, 00:40:27.579 { 00:40:27.579 "method": "bdev_wait_for_examine" 00:40:27.579 } 00:40:27.579 ] 00:40:27.579 } 00:40:27.579 ] 00:40:27.579 } 00:40:27.579 [2024-07-25 01:06:50.055384] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
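From here dd_rw sweeps a small matrix: the bss array is the native block size shifted left by 0, 1 and 2 (4096, 8192 and 16384 bytes), each paired with queue depths 1 and 64, and every pass copies count blocks, so the 4096-byte pass just started moves 15 * 4096 = 61440 bytes (the 60 kB progress lines that follow). A short sketch of that bookkeeping using only values visible in this trace (the rule basic_rw.sh uses to pick each count is not shown here, so the counts are taken verbatim from the log):

  native_bs=4096
  bss=()
  for bs_shift in {0..2}; do
      bss+=( $((native_bs << bs_shift)) )               # 4096 8192 16384
  done
  qds=(1 64)
  declare -A counts=( [4096]=15 [8192]=7 [16384]=3 )    # counts as they appear later in this run
  for bs in "${bss[@]}"; do
      for qd in "${qds[@]}"; do
          size=$(( ${counts[$bs]} * bs ))               # 61440, 57344, 49152 bytes
          echo "bs=$bs qd=$qd count=${counts[$bs]} size=$size"
      done
  done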
00:40:27.579 [2024-07-25 01:06:50.055607] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165410 ] 00:40:27.838 [2024-07-25 01:06:50.232692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:27.838 [2024-07-25 01:06:50.423225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:29.773  Copying: 60/60 [kB] (average 19 MBps) 00:40:29.773 00:40:29.773 01:06:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:40:29.773 01:06:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:40:29.773 01:06:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:29.773 01:06:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:29.773 { 00:40:29.773 "subsystems": [ 00:40:29.773 { 00:40:29.773 "subsystem": "bdev", 00:40:29.773 "config": [ 00:40:29.773 { 00:40:29.773 "params": { 00:40:29.773 "trtype": "pcie", 00:40:29.773 "traddr": "0000:00:10.0", 00:40:29.773 "name": "Nvme0" 00:40:29.773 }, 00:40:29.773 "method": "bdev_nvme_attach_controller" 00:40:29.773 }, 00:40:29.773 { 00:40:29.773 "method": "bdev_wait_for_examine" 00:40:29.773 } 00:40:29.773 ] 00:40:29.773 } 00:40:29.773 ] 00:40:29.773 } 00:40:29.773 [2024-07-25 01:06:52.098921] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:40:29.773 [2024-07-25 01:06:52.099140] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165448 ] 00:40:29.773 [2024-07-25 01:06:52.279769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:30.030 [2024-07-25 01:06:52.469660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:31.662  Copying: 60/60 [kB] (average 19 MBps) 00:40:31.662 00:40:31.662 01:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:31.662 01:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:40:31.662 01:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:40:31.662 01:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:40:31.662 01:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:40:31.662 01:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:40:31.662 01:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:40:31.662 01:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:40:31.662 01:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:40:31.662 01:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:31.662 01:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:31.662 { 00:40:31.662 "subsystems": [ 
00:40:31.662 { 00:40:31.662 "subsystem": "bdev", 00:40:31.662 "config": [ 00:40:31.662 { 00:40:31.662 "params": { 00:40:31.662 "trtype": "pcie", 00:40:31.662 "traddr": "0000:00:10.0", 00:40:31.662 "name": "Nvme0" 00:40:31.662 }, 00:40:31.662 "method": "bdev_nvme_attach_controller" 00:40:31.662 }, 00:40:31.662 { 00:40:31.662 "method": "bdev_wait_for_examine" 00:40:31.662 } 00:40:31.662 ] 00:40:31.662 } 00:40:31.662 ] 00:40:31.662 } 00:40:31.662 [2024-07-25 01:06:54.236262] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:40:31.662 [2024-07-25 01:06:54.237035] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165476 ] 00:40:31.929 [2024-07-25 01:06:54.415497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:32.192 [2024-07-25 01:06:54.604567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:33.822  Copying: 1024/1024 [kB] (average 1000 MBps) 00:40:33.822 00:40:33.822 01:06:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:40:33.822 01:06:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:40:33.822 01:06:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:40:33.822 01:06:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:40:33.822 01:06:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:40:33.822 01:06:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:40:33.822 01:06:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:34.080 01:06:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:40:34.080 01:06:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:40:34.080 01:06:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:34.080 01:06:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:34.349 { 00:40:34.349 "subsystems": [ 00:40:34.349 { 00:40:34.349 "subsystem": "bdev", 00:40:34.349 "config": [ 00:40:34.349 { 00:40:34.349 "params": { 00:40:34.349 "trtype": "pcie", 00:40:34.349 "traddr": "0000:00:10.0", 00:40:34.349 "name": "Nvme0" 00:40:34.349 }, 00:40:34.349 "method": "bdev_nvme_attach_controller" 00:40:34.349 }, 00:40:34.349 { 00:40:34.349 "method": "bdev_wait_for_examine" 00:40:34.349 } 00:40:34.349 ] 00:40:34.349 } 00:40:34.349 ] 00:40:34.349 } 00:40:34.349 [2024-07-25 01:06:56.796032] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
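Each (bs, qd) pass repeats the cycle that just finished above for bs=4096, qd=1: write the generated dump file to the Nvme0n1 bdev, read the same number of blocks back into a second file, compare the two with diff -q, then clear the namespace by writing a single 1 MiB block of zeroes before the next pass. A condensed sketch of one pass, reusing the paths from the trace ($conf is the same attach-controller JSON as in the earlier sketch, and the head call stands in for the suite's gen_bytes helper):

  dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  bs=4096 qd=1 count=15

  head -c $((count * bs)) /dev/urandom > "$dump0"       # stand-in for gen_bytes 61440

  # Write dump0 to the bdev, read the same 15 blocks back, and verify the round trip.
  "$dd_bin" --if="$dump0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(printf '%s' "$conf")
  "$dd_bin" --ib=Nvme0n1 --of="$dump1" --bs="$bs" --qd="$qd" --count="$count" --json <(printf '%s' "$conf")
  diff -q "$dump0" "$dump1"

  # clear_nvme: overwrite the start of the namespace with one 1 MiB block of zeroes.
  "$dd_bin" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(printf '%s' "$conf")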
00:40:34.349 [2024-07-25 01:06:56.796248] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165515 ] 00:40:34.349 [2024-07-25 01:06:56.970063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:34.621 [2024-07-25 01:06:57.164229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:36.560  Copying: 60/60 [kB] (average 58 MBps) 00:40:36.560 00:40:36.560 01:06:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:40:36.560 01:06:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:40:36.560 01:06:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:36.560 01:06:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:36.560 { 00:40:36.560 "subsystems": [ 00:40:36.560 { 00:40:36.560 "subsystem": "bdev", 00:40:36.560 "config": [ 00:40:36.560 { 00:40:36.560 "params": { 00:40:36.560 "trtype": "pcie", 00:40:36.560 "traddr": "0000:00:10.0", 00:40:36.560 "name": "Nvme0" 00:40:36.560 }, 00:40:36.560 "method": "bdev_nvme_attach_controller" 00:40:36.560 }, 00:40:36.560 { 00:40:36.560 "method": "bdev_wait_for_examine" 00:40:36.560 } 00:40:36.560 ] 00:40:36.560 } 00:40:36.560 ] 00:40:36.560 } 00:40:36.560 [2024-07-25 01:06:58.920099] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:40:36.560 [2024-07-25 01:06:58.920316] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165547 ] 00:40:36.560 [2024-07-25 01:06:59.099999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:36.818 [2024-07-25 01:06:59.285616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:38.446  Copying: 60/60 [kB] (average 58 MBps) 00:40:38.446 00:40:38.446 01:07:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:38.446 01:07:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:40:38.446 01:07:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:40:38.446 01:07:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:40:38.446 01:07:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:40:38.446 01:07:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:40:38.446 01:07:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:40:38.446 01:07:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:40:38.446 01:07:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:40:38.446 01:07:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:38.446 01:07:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:38.446 { 00:40:38.446 "subsystems": [ 
00:40:38.446 { 00:40:38.446 "subsystem": "bdev", 00:40:38.446 "config": [ 00:40:38.446 { 00:40:38.446 "params": { 00:40:38.446 "trtype": "pcie", 00:40:38.446 "traddr": "0000:00:10.0", 00:40:38.446 "name": "Nvme0" 00:40:38.446 }, 00:40:38.446 "method": "bdev_nvme_attach_controller" 00:40:38.446 }, 00:40:38.446 { 00:40:38.446 "method": "bdev_wait_for_examine" 00:40:38.446 } 00:40:38.446 ] 00:40:38.446 } 00:40:38.446 ] 00:40:38.446 } 00:40:38.446 [2024-07-25 01:07:00.964394] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:40:38.446 [2024-07-25 01:07:00.964614] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165575 ] 00:40:38.703 [2024-07-25 01:07:01.136769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:38.703 [2024-07-25 01:07:01.334161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:40.642  Copying: 1024/1024 [kB] (average 1000 MBps) 00:40:40.642 00:40:40.642 01:07:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:40:40.642 01:07:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:40:40.642 01:07:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:40:40.642 01:07:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:40:40.642 01:07:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:40:40.642 01:07:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:40:40.642 01:07:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:40:40.642 01:07:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:41.208 01:07:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:40:41.208 01:07:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:40:41.208 01:07:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:41.208 01:07:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:41.208 { 00:40:41.208 "subsystems": [ 00:40:41.208 { 00:40:41.208 "subsystem": "bdev", 00:40:41.208 "config": [ 00:40:41.208 { 00:40:41.208 "params": { 00:40:41.208 "trtype": "pcie", 00:40:41.208 "traddr": "0000:00:10.0", 00:40:41.208 "name": "Nvme0" 00:40:41.208 }, 00:40:41.208 "method": "bdev_nvme_attach_controller" 00:40:41.208 }, 00:40:41.208 { 00:40:41.208 "method": "bdev_wait_for_examine" 00:40:41.208 } 00:40:41.208 ] 00:40:41.208 } 00:40:41.208 ] 00:40:41.208 } 00:40:41.208 [2024-07-25 01:07:03.647913] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:40:41.208 [2024-07-25 01:07:03.648141] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165613 ] 00:40:41.208 [2024-07-25 01:07:03.824991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:41.466 [2024-07-25 01:07:04.014141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:42.973  Copying: 56/56 [kB] (average 54 MBps) 00:40:42.973 00:40:42.973 01:07:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:40:42.973 01:07:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:40:42.973 01:07:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:42.973 01:07:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:43.242 { 00:40:43.242 "subsystems": [ 00:40:43.242 { 00:40:43.242 "subsystem": "bdev", 00:40:43.242 "config": [ 00:40:43.242 { 00:40:43.242 "params": { 00:40:43.242 "trtype": "pcie", 00:40:43.242 "traddr": "0000:00:10.0", 00:40:43.242 "name": "Nvme0" 00:40:43.242 }, 00:40:43.242 "method": "bdev_nvme_attach_controller" 00:40:43.242 }, 00:40:43.242 { 00:40:43.242 "method": "bdev_wait_for_examine" 00:40:43.242 } 00:40:43.242 ] 00:40:43.242 } 00:40:43.242 ] 00:40:43.242 } 00:40:43.242 [2024-07-25 01:07:05.692094] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:40:43.242 [2024-07-25 01:07:05.692310] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165641 ] 00:40:43.242 [2024-07-25 01:07:05.872304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:43.500 [2024-07-25 01:07:06.061909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:45.444  Copying: 56/56 [kB] (average 27 MBps) 00:40:45.444 00:40:45.444 01:07:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:45.444 01:07:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:40:45.444 01:07:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:40:45.444 01:07:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:40:45.444 01:07:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:40:45.444 01:07:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:40:45.444 01:07:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:40:45.444 01:07:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:40:45.444 01:07:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:40:45.444 01:07:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:45.444 01:07:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:45.444 [2024-07-25 01:07:07.818820] 
Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:40:45.444 [2024-07-25 01:07:07.818967] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165676 ] 00:40:45.444 { 00:40:45.444 "subsystems": [ 00:40:45.444 { 00:40:45.444 "subsystem": "bdev", 00:40:45.444 "config": [ 00:40:45.444 { 00:40:45.444 "params": { 00:40:45.444 "trtype": "pcie", 00:40:45.444 "traddr": "0000:00:10.0", 00:40:45.444 "name": "Nvme0" 00:40:45.444 }, 00:40:45.444 "method": "bdev_nvme_attach_controller" 00:40:45.444 }, 00:40:45.444 { 00:40:45.444 "method": "bdev_wait_for_examine" 00:40:45.444 } 00:40:45.444 ] 00:40:45.444 } 00:40:45.444 ] 00:40:45.444 } 00:40:45.444 [2024-07-25 01:07:07.979029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:45.703 [2024-07-25 01:07:08.166553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:47.339  Copying: 1024/1024 [kB] (average 1000 MBps) 00:40:47.339 00:40:47.339 01:07:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:40:47.339 01:07:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:40:47.339 01:07:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:40:47.339 01:07:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:40:47.339 01:07:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:40:47.339 01:07:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:40:47.339 01:07:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:47.598 01:07:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:40:47.598 01:07:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:40:47.598 01:07:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:47.598 01:07:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:47.857 { 00:40:47.857 "subsystems": [ 00:40:47.857 { 00:40:47.857 "subsystem": "bdev", 00:40:47.857 "config": [ 00:40:47.857 { 00:40:47.857 "params": { 00:40:47.857 "trtype": "pcie", 00:40:47.857 "traddr": "0000:00:10.0", 00:40:47.857 "name": "Nvme0" 00:40:47.857 }, 00:40:47.857 "method": "bdev_nvme_attach_controller" 00:40:47.857 }, 00:40:47.857 { 00:40:47.857 "method": "bdev_wait_for_examine" 00:40:47.857 } 00:40:47.857 ] 00:40:47.857 } 00:40:47.857 ] 00:40:47.857 } 00:40:47.857 [2024-07-25 01:07:10.312479] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:40:47.857 [2024-07-25 01:07:10.312710] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165708 ] 00:40:47.857 [2024-07-25 01:07:10.491703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:48.115 [2024-07-25 01:07:10.687654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:50.059  Copying: 56/56 [kB] (average 54 MBps) 00:40:50.059 00:40:50.059 01:07:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:40:50.059 01:07:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:40:50.059 01:07:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:50.059 01:07:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:50.059 { 00:40:50.059 "subsystems": [ 00:40:50.059 { 00:40:50.059 "subsystem": "bdev", 00:40:50.059 "config": [ 00:40:50.059 { 00:40:50.059 "params": { 00:40:50.059 "trtype": "pcie", 00:40:50.059 "traddr": "0000:00:10.0", 00:40:50.059 "name": "Nvme0" 00:40:50.059 }, 00:40:50.059 "method": "bdev_nvme_attach_controller" 00:40:50.059 }, 00:40:50.059 { 00:40:50.059 "method": "bdev_wait_for_examine" 00:40:50.059 } 00:40:50.059 ] 00:40:50.059 } 00:40:50.059 ] 00:40:50.059 } 00:40:50.059 [2024-07-25 01:07:12.478653] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:40:50.059 [2024-07-25 01:07:12.478874] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165742 ] 00:40:50.059 [2024-07-25 01:07:12.659105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:50.318 [2024-07-25 01:07:12.844233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:51.971  Copying: 56/56 [kB] (average 54 MBps) 00:40:51.971 00:40:51.971 01:07:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:51.971 01:07:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:40:51.971 01:07:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:40:51.971 01:07:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:40:51.971 01:07:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:40:51.971 01:07:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:40:51.971 01:07:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:40:51.971 01:07:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:40:51.971 01:07:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:40:51.971 01:07:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:51.971 01:07:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:51.971 { 00:40:51.971 "subsystems": [ 
00:40:51.971 { 00:40:51.971 "subsystem": "bdev", 00:40:51.971 "config": [ 00:40:51.971 { 00:40:51.971 "params": { 00:40:51.971 "trtype": "pcie", 00:40:51.971 "traddr": "0000:00:10.0", 00:40:51.971 "name": "Nvme0" 00:40:51.971 }, 00:40:51.971 "method": "bdev_nvme_attach_controller" 00:40:51.971 }, 00:40:51.971 { 00:40:51.971 "method": "bdev_wait_for_examine" 00:40:51.971 } 00:40:51.971 ] 00:40:51.971 } 00:40:51.971 ] 00:40:51.971 } 00:40:51.971 [2024-07-25 01:07:14.502141] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:40:51.972 [2024-07-25 01:07:14.503084] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165770 ] 00:40:52.230 [2024-07-25 01:07:14.675799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:52.230 [2024-07-25 01:07:14.868521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:54.172  Copying: 1024/1024 [kB] (average 1000 MBps) 00:40:54.172 00:40:54.172 01:07:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:40:54.172 01:07:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:40:54.172 01:07:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:40:54.172 01:07:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:40:54.172 01:07:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:40:54.172 01:07:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:40:54.172 01:07:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:40:54.172 01:07:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:54.431 01:07:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:40:54.431 01:07:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:40:54.431 01:07:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:54.431 01:07:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:54.431 { 00:40:54.431 "subsystems": [ 00:40:54.431 { 00:40:54.431 "subsystem": "bdev", 00:40:54.431 "config": [ 00:40:54.431 { 00:40:54.431 "params": { 00:40:54.431 "trtype": "pcie", 00:40:54.431 "traddr": "0000:00:10.0", 00:40:54.431 "name": "Nvme0" 00:40:54.431 }, 00:40:54.431 "method": "bdev_nvme_attach_controller" 00:40:54.431 }, 00:40:54.431 { 00:40:54.431 "method": "bdev_wait_for_examine" 00:40:54.431 } 00:40:54.431 ] 00:40:54.431 } 00:40:54.431 ] 00:40:54.431 } 00:40:54.431 [2024-07-25 01:07:17.071134] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:40:54.431 [2024-07-25 01:07:17.071349] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165809 ] 00:40:54.690 [2024-07-25 01:07:17.250922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:54.949 [2024-07-25 01:07:17.431295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:56.585  Copying: 48/48 [kB] (average 46 MBps) 00:40:56.585 00:40:56.585 01:07:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:40:56.585 01:07:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:40:56.585 01:07:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:56.585 01:07:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:56.585 { 00:40:56.585 "subsystems": [ 00:40:56.585 { 00:40:56.585 "subsystem": "bdev", 00:40:56.585 "config": [ 00:40:56.585 { 00:40:56.585 "params": { 00:40:56.585 "trtype": "pcie", 00:40:56.585 "traddr": "0000:00:10.0", 00:40:56.585 "name": "Nvme0" 00:40:56.585 }, 00:40:56.585 "method": "bdev_nvme_attach_controller" 00:40:56.585 }, 00:40:56.585 { 00:40:56.585 "method": "bdev_wait_for_examine" 00:40:56.585 } 00:40:56.585 ] 00:40:56.585 } 00:40:56.585 ] 00:40:56.585 } 00:40:56.585 [2024-07-25 01:07:19.075433] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:40:56.585 [2024-07-25 01:07:19.075633] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165840 ] 00:40:56.844 [2024-07-25 01:07:19.255259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:56.844 [2024-07-25 01:07:19.432385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:58.787  Copying: 48/48 [kB] (average 46 MBps) 00:40:58.787 00:40:58.787 01:07:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:58.787 01:07:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:40:58.787 01:07:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:40:58.787 01:07:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:40:58.787 01:07:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:40:58.787 01:07:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:40:58.787 01:07:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:40:58.787 01:07:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:40:58.787 01:07:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:40:58.787 01:07:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:58.787 01:07:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:58.787 { 00:40:58.787 "subsystems": [ 
00:40:58.787 { 00:40:58.787 "subsystem": "bdev", 00:40:58.787 "config": [ 00:40:58.787 { 00:40:58.787 "params": { 00:40:58.787 "trtype": "pcie", 00:40:58.787 "traddr": "0000:00:10.0", 00:40:58.787 "name": "Nvme0" 00:40:58.787 }, 00:40:58.787 "method": "bdev_nvme_attach_controller" 00:40:58.787 }, 00:40:58.787 { 00:40:58.787 "method": "bdev_wait_for_examine" 00:40:58.787 } 00:40:58.787 ] 00:40:58.787 } 00:40:58.787 ] 00:40:58.787 } 00:40:58.787 [2024-07-25 01:07:21.187212] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:40:58.787 [2024-07-25 01:07:21.187412] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165869 ] 00:40:58.787 [2024-07-25 01:07:21.367017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:59.046 [2024-07-25 01:07:21.560137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:00.681  Copying: 1024/1024 [kB] (average 1000 MBps) 00:41:00.681 00:41:00.681 01:07:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:41:00.681 01:07:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:41:00.681 01:07:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:41:00.681 01:07:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:41:00.681 01:07:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:41:00.681 01:07:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:41:00.681 01:07:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:41:00.938 01:07:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:41:00.938 01:07:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:41:00.938 01:07:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:41:00.938 01:07:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:41:01.196 { 00:41:01.196 "subsystems": [ 00:41:01.196 { 00:41:01.196 "subsystem": "bdev", 00:41:01.196 "config": [ 00:41:01.196 { 00:41:01.196 "params": { 00:41:01.196 "trtype": "pcie", 00:41:01.196 "traddr": "0000:00:10.0", 00:41:01.196 "name": "Nvme0" 00:41:01.196 }, 00:41:01.196 "method": "bdev_nvme_attach_controller" 00:41:01.196 }, 00:41:01.196 { 00:41:01.196 "method": "bdev_wait_for_examine" 00:41:01.196 } 00:41:01.196 ] 00:41:01.196 } 00:41:01.196 ] 00:41:01.196 } 00:41:01.196 [2024-07-25 01:07:23.623363] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:41:01.196 [2024-07-25 01:07:23.623583] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165903 ] 00:41:01.196 [2024-07-25 01:07:23.802848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:01.454 [2024-07-25 01:07:23.993331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:03.087  Copying: 48/48 [kB] (average 46 MBps) 00:41:03.087 00:41:03.087 01:07:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:41:03.087 01:07:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:41:03.087 01:07:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:41:03.087 01:07:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:41:03.087 { 00:41:03.087 "subsystems": [ 00:41:03.087 { 00:41:03.087 "subsystem": "bdev", 00:41:03.087 "config": [ 00:41:03.087 { 00:41:03.087 "params": { 00:41:03.087 "trtype": "pcie", 00:41:03.087 "traddr": "0000:00:10.0", 00:41:03.088 "name": "Nvme0" 00:41:03.088 }, 00:41:03.088 "method": "bdev_nvme_attach_controller" 00:41:03.088 }, 00:41:03.088 { 00:41:03.088 "method": "bdev_wait_for_examine" 00:41:03.088 } 00:41:03.088 ] 00:41:03.088 } 00:41:03.088 ] 00:41:03.088 } 00:41:03.346 [2024-07-25 01:07:25.747568] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:41:03.346 [2024-07-25 01:07:25.747788] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165930 ] 00:41:03.346 [2024-07-25 01:07:25.926874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:03.604 [2024-07-25 01:07:26.108805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:05.240  Copying: 48/48 [kB] (average 46 MBps) 00:41:05.240 00:41:05.240 01:07:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:05.240 01:07:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:41:05.240 01:07:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:41:05.240 01:07:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:41:05.240 01:07:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:41:05.240 01:07:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:41:05.240 01:07:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:41:05.240 01:07:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:41:05.240 01:07:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:41:05.240 01:07:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:41:05.240 01:07:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:41:05.240 [2024-07-25 01:07:27.749376] 
Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:41:05.240 [2024-07-25 01:07:27.749523] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165963 ] 00:41:05.240 { 00:41:05.240 "subsystems": [ 00:41:05.240 { 00:41:05.240 "subsystem": "bdev", 00:41:05.240 "config": [ 00:41:05.240 { 00:41:05.240 "params": { 00:41:05.240 "trtype": "pcie", 00:41:05.240 "traddr": "0000:00:10.0", 00:41:05.240 "name": "Nvme0" 00:41:05.240 }, 00:41:05.240 "method": "bdev_nvme_attach_controller" 00:41:05.240 }, 00:41:05.240 { 00:41:05.240 "method": "bdev_wait_for_examine" 00:41:05.240 } 00:41:05.240 ] 00:41:05.240 } 00:41:05.240 ] 00:41:05.240 } 00:41:05.499 [2024-07-25 01:07:27.908161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:05.499 [2024-07-25 01:07:28.088781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:07.443  Copying: 1024/1024 [kB] (average 1000 MBps) 00:41:07.443 00:41:07.443 00:41:07.443 real 0m40.340s 00:41:07.443 user 0m34.000s 00:41:07.443 sys 0m4.956s 00:41:07.443 01:07:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:07.443 01:07:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:41:07.443 ************************************ 00:41:07.443 END TEST dd_rw 00:41:07.443 ************************************ 00:41:07.443 01:07:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:41:07.443 01:07:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:07.443 01:07:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:07.443 01:07:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:41:07.443 ************************************ 00:41:07.443 START TEST dd_rw_offset 00:41:07.443 ************************************ 00:41:07.443 01:07:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:41:07.443 01:07:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:41:07.443 01:07:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:41:07.443 01:07:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:41:07.443 01:07:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:41:07.443 01:07:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:41:07.444 01:07:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=ime8k59k3ry6l3q6zt48fmd7sacvzkf6xflp0b2e45o06ps8rfpnj7462f57e3pp14wx2wu6go0i45gmc2dylucra37m08p96bgc99qryq4t8qbx7qed7olrvs56gbhwp8l7r6jjczxxcob0si7y7qeb7fpr9vij3nyzmifh66nxtuhr93m69l7wod19h08zlifawn0tb5izxr48ezqdd159f9mesht9r0jama7ok7j8pwp06nnv39r77xrq611zpixjnkd05io1ftaat7x8401dev0gdchsde3rtm4onbo6ehaa0l5cjhuruwumwj0hbo1jjiq074mtp3cm8qcjmbqpf4365ifs2lcldt4tj5j7xivdkjpckppq3nhwd2na0g1mml2l4q1tkkfdgyt7os6iwq6aw3t9c96df3pz2s0qhx0l2vq30oq2dyo6sgk280zwrvbvud1r8qzgmuhnvmtb0emkx4u0d6uadc1lgvzdul2a5q37bcftsq782gaw596lah45wvkze70x0w6zsyls52nd5fzpa382zsglx3qxx3hsd6g6efthem7tjqsy1n5xdl0kw2sosemypjh1fhmncw99qy6hkv3ywd8z3jv9kbzq9wifvdyqp1ycoyt14dtetgioo2xez3safkvh5blt77a9nsj2tr3n4hiuqvovc3uh82ffvuhr1ehuuyynotjrrfm1uuqoqqn58jcdtifxymdcmm6g3tbldvgedgmqi7ckugnowvjptyf89onef1dwbvugqg37b43uvec6im0liftpygju6mardotutfbbtk65f74v0vx76cwyk0iwaebte00q2ufm350hf7bvzygwdn39uwifqqlxdwgcqq9bumlh1tehmmomev1p8b5ts62ie9cj2edow990ks8gmre48c45c1rx56borzazpdz3t7i9x51n6dx29sowx2x8rg86za5x44lvb07njcd1s8cyg5kh2hdvrq8ql0hzv10w3dmek7grteb0rdfgihcm0vjhtudm87l2wf8yxjxcbgn5uq0bablwb2ad5y0y8kwg0yhsy8k1fchajyojeufvags2pdxbn8ess8xvgn4xak1nq126ghsote5kfql0b48ixr30ryftvria6dmuj2r42ozv6op4ftlbae6dgp66vahhsqjgr017bm7bpm09avx22swqc0e2oytt7c029gh8400fxxfpu070hobp69tudr7s20cr0x0m68qi7ppp2qihkz4e6p1qtvkdzgkynshn35zdp3wm000edlrazfz5hblktd87hy6se123lpbbgp2ka30wjxaxf3b1due0qzvhee6uh7w1izcg8rfegnvfj5g61y08iowlscer59567oshcpx88qfmzx093dy2bno44n4n4xpz8q8zkjhc1p5zhzyvi9f6tjwyqt1m95umthcq207dc9lab64i7yexzwsk28ueb2lquzcepyrzmpbi6y1tizr62oayjn2669wvglah17z5kqy55hmrd978w6um88699pdr67s71ljvpwby5cp4dlnp4fykk7ma2xtmbhp2ajug99qpgtchmitvc5s4tevuyp2nwv42lewie3qt398i36u0lfxsuhypqig634k0y2yc6116uuiberfyzi33bq6tjlirkbc722ye3pw7onng8qp0yrapkugm1ooigmzl2sq2r1x7fydt2f2hbkakx7qzlf1kga2rcnre0r95j48bmifpxgnf6az62t9x9544uzftthwdz9gr40bnunf7iy2ljv38ryga7zs7d2o7ze3mk50wwhkjduxbzsqvv1o3gpfxtlxdonfi05rt84j6ajuz2i7499zjnx3uo85k1bo2wei3hi5xss0hwq8cuu837e3prxej5033zat9euzzos3fd9kl40owskmrpk3hljrzxmj0g0ihoohjy4p17fiz62n016msaejl617zahf5na2qxkgkgomsv5adgk87tzrqcpdz795s0cb2xmbwufvimrjzn02mqflj22epo8u6lw75etr0cxtba7pl5dce87rvirrpkjvtdhd7xw6cua87gl9mxd4pdjwjh207ja9h8gv1p14fedtz7kmblom2if3qhons99xi8u5o7swa66ln2ijgocsucrl9hkzrs84kjqnannqi7877xq285ck3tuh3awx7766trnshqjdipboawyuhz3bnkfo23y8qo33wtjdlldsemfxrqpjplvej1einysb1phsddioiyc7xlm5fuisrneln9wzxw3ifw5w4j4ballhg83sa8dy8zxykq09svdfmyrxom8ul441p9wm0aidbmqja4pn0pgoc5tni2xsgvzgh6k8pz730egnwjlazlkrmakuy8a8sopb5gfepeqdsssuecf4ejmjsrxhc47l3qdqz4b3eh7xdmkosmqhuf4q21vyxenioz17o2bfwdduddi1z4lptrse4sfzbqxxkxgu5slpqarqgiqtv031uisgswdfqpf7yfsp6suucu5x0gfky65o3ja0iab4c7fgwtpwgz5x3f2vn60xhhlmea628oca070e7bkxufntaxmpd6z0gumxvkgy7xmlcxb4a1m0pc4bjdq35nwdiufcf9s85kr4hwj1fw9vr33rrgsgg0xrqcudb43zeehmtr9qdxh2otlfcaitg103yqvp9kyd1snyrourskp3qxcr73as59fivufa0pjwr2cskply6hazv6n6gxlipxlmq0i9cuqtvkmytpf37ejhhkt72o1jbmjpnzthsimukyhwu79nw5nvnqaktvwd2hufcolkm415rj3xpnr24eswhk0ifhpq0wecfohjykoofl49enlejtslvggzf7skd5i1pzi6mpqz9lk0iodnamlf1gxfg48e4bpslj0op9n6cf472e0m3iniv4q6f76h5ne0oymz1xh8cwkpuzy1ww01lr5hqa5xzx1j58ersyye631a19p7bwiu9wkye4ikziij99os2by6zocnbk0xz7fsc6d1315941fyuw9d4lzw7ajowuwb36y7sjp1k6v2mik1v11zosmmi4tafs613n1zm52xnfz5lm0x2z8o8sit9ynn5k9vwl0ebsmwrtf4jwg96wxg6ko3b0scohtendzzf97qfuo6447yx55k2ioqj9r62pcp4y3kmxpt2tipihk9kuzua91s30qhahkx32d7xa4r0eqahsura2pthfm6xvb920b9nx7848h32jns0zc7sdq23y5tl8mjy3zheldr8kgrfrizff8vymmpinamk04w53ed7d3jop6sjbupcnsa6sk8syg0ccx0tezf1bfoqawysa7p3gimhnnilt9oj8fomzxdza59olflbsq0d4bea8gpdfcc984if2126n38ly1xw05g22kft21y3lk0oigtwc6mc6ou57ln7527gavsawiocvbxu76adh5ntz0vsrwoi06ole1oly0qywdvwwy84tmj6ql84x0exva309xb01g4e7pn4iq
bcufhptqsx592d468g3z22svcr9zpsfzmw7lgamd68iujebw8kyl5p4a94n6aysx4wha06606ip7tbncw7c7yc7cph073c7t46i0gv367sg7gh9wtrbuvs2ewnzsvhpjiynhfdf9mie3md5b0gyxf4phb5ky9yw09fenkprcuv9708niwbpn9qowt579d9wwgzbnsuaoxamsdvn5e2ro36vhcw2805xuosx3zm8b2kwtlygn1kq1danhyygmkch3geil8n7yplu1zqqxu5yozdnv9psvq0jbj6jldk1gznke7lnp0dpzkngv3la648uxd0q73umqvtccgcv1l7tnum936k6rgx9wnmcq7t5p0rp81vfz2zktgngzp1u8dq0hq7v146sx5kqnu2hjwv5h7g4jykhnkk2ogm826kbwq1k0x5z280g0b1j58jm7jqajc06l839xmc12d0f93h0hyk1nxsqnd8h9kqpcnqeg0f7y2q9fk0q9h8ye7his4yzkga83z2ee5z9so9h1vkq1p54ajnp70274lp 00:41:07.444 01:07:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:41:07.444 01:07:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:41:07.444 01:07:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:41:07.444 01:07:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:41:07.444 { 00:41:07.444 "subsystems": [ 00:41:07.444 { 00:41:07.444 "subsystem": "bdev", 00:41:07.444 "config": [ 00:41:07.444 { 00:41:07.444 "params": { 00:41:07.444 "trtype": "pcie", 00:41:07.444 "traddr": "0000:00:10.0", 00:41:07.444 "name": "Nvme0" 00:41:07.444 }, 00:41:07.444 "method": "bdev_nvme_attach_controller" 00:41:07.444 }, 00:41:07.444 { 00:41:07.444 "method": "bdev_wait_for_examine" 00:41:07.444 } 00:41:07.444 ] 00:41:07.444 } 00:41:07.444 ] 00:41:07.444 } 00:41:07.444 [2024-07-25 01:07:29.983261] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:41:07.444 [2024-07-25 01:07:29.983468] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166016 ] 00:41:07.765 [2024-07-25 01:07:30.161949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:07.765 [2024-07-25 01:07:30.352576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:09.267  Copying: 4096/4096 [B] (average 4000 kBps) 00:41:09.267 00:41:09.526 01:07:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:41:09.526 01:07:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:41:09.526 01:07:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:41:09.526 01:07:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:41:09.526 { 00:41:09.526 "subsystems": [ 00:41:09.526 { 00:41:09.526 "subsystem": "bdev", 00:41:09.526 "config": [ 00:41:09.526 { 00:41:09.526 "params": { 00:41:09.526 "trtype": "pcie", 00:41:09.526 "traddr": "0000:00:10.0", 00:41:09.526 "name": "Nvme0" 00:41:09.526 }, 00:41:09.526 "method": "bdev_nvme_attach_controller" 00:41:09.526 }, 00:41:09.526 { 00:41:09.526 "method": "bdev_wait_for_examine" 00:41:09.526 } 00:41:09.526 ] 00:41:09.526 } 00:41:09.526 ] 00:41:09.526 } 00:41:09.526 [2024-07-25 01:07:32.012478] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:41:09.526 [2024-07-25 01:07:32.012692] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166049 ] 00:41:09.784 [2024-07-25 01:07:32.192324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:09.784 [2024-07-25 01:07:32.376988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:11.728  Copying: 4096/4096 [B] (average 4000 kBps) 00:41:11.728 00:41:11.728 01:07:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:41:11.729 01:07:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ ime8k59k3ry6l3q6zt48fmd7sacvzkf6xflp0b2e45o06ps8rfpnj7462f57e3pp14wx2wu6go0i45gmc2dylucra37m08p96bgc99qryq4t8qbx7qed7olrvs56gbhwp8l7r6jjczxxcob0si7y7qeb7fpr9vij3nyzmifh66nxtuhr93m69l7wod19h08zlifawn0tb5izxr48ezqdd159f9mesht9r0jama7ok7j8pwp06nnv39r77xrq611zpixjnkd05io1ftaat7x8401dev0gdchsde3rtm4onbo6ehaa0l5cjhuruwumwj0hbo1jjiq074mtp3cm8qcjmbqpf4365ifs2lcldt4tj5j7xivdkjpckppq3nhwd2na0g1mml2l4q1tkkfdgyt7os6iwq6aw3t9c96df3pz2s0qhx0l2vq30oq2dyo6sgk280zwrvbvud1r8qzgmuhnvmtb0emkx4u0d6uadc1lgvzdul2a5q37bcftsq782gaw596lah45wvkze70x0w6zsyls52nd5fzpa382zsglx3qxx3hsd6g6efthem7tjqsy1n5xdl0kw2sosemypjh1fhmncw99qy6hkv3ywd8z3jv9kbzq9wifvdyqp1ycoyt14dtetgioo2xez3safkvh5blt77a9nsj2tr3n4hiuqvovc3uh82ffvuhr1ehuuyynotjrrfm1uuqoqqn58jcdtifxymdcmm6g3tbldvgedgmqi7ckugnowvjptyf89onef1dwbvugqg37b43uvec6im0liftpygju6mardotutfbbtk65f74v0vx76cwyk0iwaebte00q2ufm350hf7bvzygwdn39uwifqqlxdwgcqq9bumlh1tehmmomev1p8b5ts62ie9cj2edow990ks8gmre48c45c1rx56borzazpdz3t7i9x51n6dx29sowx2x8rg86za5x44lvb07njcd1s8cyg5kh2hdvrq8ql0hzv10w3dmek7grteb0rdfgihcm0vjhtudm87l2wf8yxjxcbgn5uq0bablwb2ad5y0y8kwg0yhsy8k1fchajyojeufvags2pdxbn8ess8xvgn4xak1nq126ghsote5kfql0b48ixr30ryftvria6dmuj2r42ozv6op4ftlbae6dgp66vahhsqjgr017bm7bpm09avx22swqc0e2oytt7c029gh8400fxxfpu070hobp69tudr7s20cr0x0m68qi7ppp2qihkz4e6p1qtvkdzgkynshn35zdp3wm000edlrazfz5hblktd87hy6se123lpbbgp2ka30wjxaxf3b1due0qzvhee6uh7w1izcg8rfegnvfj5g61y08iowlscer59567oshcpx88qfmzx093dy2bno44n4n4xpz8q8zkjhc1p5zhzyvi9f6tjwyqt1m95umthcq207dc9lab64i7yexzwsk28ueb2lquzcepyrzmpbi6y1tizr62oayjn2669wvglah17z5kqy55hmrd978w6um88699pdr67s71ljvpwby5cp4dlnp4fykk7ma2xtmbhp2ajug99qpgtchmitvc5s4tevuyp2nwv42lewie3qt398i36u0lfxsuhypqig634k0y2yc6116uuiberfyzi33bq6tjlirkbc722ye3pw7onng8qp0yrapkugm1ooigmzl2sq2r1x7fydt2f2hbkakx7qzlf1kga2rcnre0r95j48bmifpxgnf6az62t9x9544uzftthwdz9gr40bnunf7iy2ljv38ryga7zs7d2o7ze3mk50wwhkjduxbzsqvv1o3gpfxtlxdonfi05rt84j6ajuz2i7499zjnx3uo85k1bo2wei3hi5xss0hwq8cuu837e3prxej5033zat9euzzos3fd9kl40owskmrpk3hljrzxmj0g0ihoohjy4p17fiz62n016msaejl617zahf5na2qxkgkgomsv5adgk87tzrqcpdz795s0cb2xmbwufvimrjzn02mqflj22epo8u6lw75etr0cxtba7pl5dce87rvirrpkjvtdhd7xw6cua87gl9mxd4pdjwjh207ja9h8gv1p14fedtz7kmblom2if3qhons99xi8u5o7swa66ln2ijgocsucrl9hkzrs84kjqnannqi7877xq285ck3tuh3awx7766trnshqjdipboawyuhz3bnkfo23y8qo33wtjdlldsemfxrqpjplvej1einysb1phsddioiyc7xlm5fuisrneln9wzxw3ifw5w4j4ballhg83sa8dy8zxykq09svdfmyrxom8ul441p9wm0aidbmqja4pn0pgoc5tni2xsgvzgh6k8pz730egnwjlazlkrmakuy8a8sopb5gfepeqdsssuecf4ejmjsrxhc47l3qdqz4b3eh7xdmkosmqhuf4q21vyxenioz17o2bfwdduddi1z4lptrse4sfzbqxxkxgu5slpqarqgiqtv031uisgswdfqpf7yfsp6suucu5x0gfky65o3ja0iab4c7fgwtpwgz5x3f2vn60xhhlmea628oca070e7bkxufntaxmpd6z0gumxvkgy7xmlcxb4a1m0pc4bjdq35nwdiufcf9s85kr4hwj1fw9vr33rrgsgg0xrqcudb43zeehmtr9qdxh2otlfcaitg103yqvp9kyd1snyrourskp3qxcr73as59fivufa0pjwr2cskply6hazv6n6g
xlipxlmq0i9cuqtvkmytpf37ejhhkt72o1jbmjpnzthsimukyhwu79nw5nvnqaktvwd2hufcolkm415rj3xpnr24eswhk0ifhpq0wecfohjykoofl49enlejtslvggzf7skd5i1pzi6mpqz9lk0iodnamlf1gxfg48e4bpslj0op9n6cf472e0m3iniv4q6f76h5ne0oymz1xh8cwkpuzy1ww01lr5hqa5xzx1j58ersyye631a19p7bwiu9wkye4ikziij99os2by6zocnbk0xz7fsc6d1315941fyuw9d4lzw7ajowuwb36y7sjp1k6v2mik1v11zosmmi4tafs613n1zm52xnfz5lm0x2z8o8sit9ynn5k9vwl0ebsmwrtf4jwg96wxg6ko3b0scohtendzzf97qfuo6447yx55k2ioqj9r62pcp4y3kmxpt2tipihk9kuzua91s30qhahkx32d7xa4r0eqahsura2pthfm6xvb920b9nx7848h32jns0zc7sdq23y5tl8mjy3zheldr8kgrfrizff8vymmpinamk04w53ed7d3jop6sjbupcnsa6sk8syg0ccx0tezf1bfoqawysa7p3gimhnnilt9oj8fomzxdza59olflbsq0d4bea8gpdfcc984if2126n38ly1xw05g22kft21y3lk0oigtwc6mc6ou57ln7527gavsawiocvbxu76adh5ntz0vsrwoi06ole1oly0qywdvwwy84tmj6ql84x0exva309xb01g4e7pn4iqbcufhptqsx592d468g3z22svcr9zpsfzmw7lgamd68iujebw8kyl5p4a94n6aysx4wha06606ip7tbncw7c7yc7cph073c7t46i0gv367sg7gh9wtrbuvs2ewnzsvhpjiynhfdf9mie3md5b0gyxf4phb5ky9yw09fenkprcuv9708niwbpn9qowt579d9wwgzbnsuaoxamsdvn5e2ro36vhcw2805xuosx3zm8b2kwtlygn1kq1danhyygmkch3geil8n7yplu1zqqxu5yozdnv9psvq0jbj6jldk1gznke7lnp0dpzkngv3la648uxd0q73umqvtccgcv1l7tnum936k6rgx9wnmcq7t5p0rp81vfz2zktgngzp1u8dq0hq7v146sx5kqnu2hjwv5h7g4jykhnkk2ogm826kbwq1k0x5z280g0b1j58jm7jqajc06l839xmc12d0f93h0hyk1nxsqnd8h9kqpcnqeg0f7y2q9fk0q9h8ye7his4yzkga83z2ee5z9so9h1vkq1p54ajnp70274lp == \i\m\e\8\k\5\9\k\3\r\y\6\l\3\q\6\z\t\4\8\f\m\d\7\s\a\c\v\z\k\f\6\x\f\l\p\0\b\2\e\4\5\o\0\6\p\s\8\r\f\p\n\j\7\4\6\2\f\5\7\e\3\p\p\1\4\w\x\2\w\u\6\g\o\0\i\4\5\g\m\c\2\d\y\l\u\c\r\a\3\7\m\0\8\p\9\6\b\g\c\9\9\q\r\y\q\4\t\8\q\b\x\7\q\e\d\7\o\l\r\v\s\5\6\g\b\h\w\p\8\l\7\r\6\j\j\c\z\x\x\c\o\b\0\s\i\7\y\7\q\e\b\7\f\p\r\9\v\i\j\3\n\y\z\m\i\f\h\6\6\n\x\t\u\h\r\9\3\m\6\9\l\7\w\o\d\1\9\h\0\8\z\l\i\f\a\w\n\0\t\b\5\i\z\x\r\4\8\e\z\q\d\d\1\5\9\f\9\m\e\s\h\t\9\r\0\j\a\m\a\7\o\k\7\j\8\p\w\p\0\6\n\n\v\3\9\r\7\7\x\r\q\6\1\1\z\p\i\x\j\n\k\d\0\5\i\o\1\f\t\a\a\t\7\x\8\4\0\1\d\e\v\0\g\d\c\h\s\d\e\3\r\t\m\4\o\n\b\o\6\e\h\a\a\0\l\5\c\j\h\u\r\u\w\u\m\w\j\0\h\b\o\1\j\j\i\q\0\7\4\m\t\p\3\c\m\8\q\c\j\m\b\q\p\f\4\3\6\5\i\f\s\2\l\c\l\d\t\4\t\j\5\j\7\x\i\v\d\k\j\p\c\k\p\p\q\3\n\h\w\d\2\n\a\0\g\1\m\m\l\2\l\4\q\1\t\k\k\f\d\g\y\t\7\o\s\6\i\w\q\6\a\w\3\t\9\c\9\6\d\f\3\p\z\2\s\0\q\h\x\0\l\2\v\q\3\0\o\q\2\d\y\o\6\s\g\k\2\8\0\z\w\r\v\b\v\u\d\1\r\8\q\z\g\m\u\h\n\v\m\t\b\0\e\m\k\x\4\u\0\d\6\u\a\d\c\1\l\g\v\z\d\u\l\2\a\5\q\3\7\b\c\f\t\s\q\7\8\2\g\a\w\5\9\6\l\a\h\4\5\w\v\k\z\e\7\0\x\0\w\6\z\s\y\l\s\5\2\n\d\5\f\z\p\a\3\8\2\z\s\g\l\x\3\q\x\x\3\h\s\d\6\g\6\e\f\t\h\e\m\7\t\j\q\s\y\1\n\5\x\d\l\0\k\w\2\s\o\s\e\m\y\p\j\h\1\f\h\m\n\c\w\9\9\q\y\6\h\k\v\3\y\w\d\8\z\3\j\v\9\k\b\z\q\9\w\i\f\v\d\y\q\p\1\y\c\o\y\t\1\4\d\t\e\t\g\i\o\o\2\x\e\z\3\s\a\f\k\v\h\5\b\l\t\7\7\a\9\n\s\j\2\t\r\3\n\4\h\i\u\q\v\o\v\c\3\u\h\8\2\f\f\v\u\h\r\1\e\h\u\u\y\y\n\o\t\j\r\r\f\m\1\u\u\q\o\q\q\n\5\8\j\c\d\t\i\f\x\y\m\d\c\m\m\6\g\3\t\b\l\d\v\g\e\d\g\m\q\i\7\c\k\u\g\n\o\w\v\j\p\t\y\f\8\9\o\n\e\f\1\d\w\b\v\u\g\q\g\3\7\b\4\3\u\v\e\c\6\i\m\0\l\i\f\t\p\y\g\j\u\6\m\a\r\d\o\t\u\t\f\b\b\t\k\6\5\f\7\4\v\0\v\x\7\6\c\w\y\k\0\i\w\a\e\b\t\e\0\0\q\2\u\f\m\3\5\0\h\f\7\b\v\z\y\g\w\d\n\3\9\u\w\i\f\q\q\l\x\d\w\g\c\q\q\9\b\u\m\l\h\1\t\e\h\m\m\o\m\e\v\1\p\8\b\5\t\s\6\2\i\e\9\c\j\2\e\d\o\w\9\9\0\k\s\8\g\m\r\e\4\8\c\4\5\c\1\r\x\5\6\b\o\r\z\a\z\p\d\z\3\t\7\i\9\x\5\1\n\6\d\x\2\9\s\o\w\x\2\x\8\r\g\8\6\z\a\5\x\4\4\l\v\b\0\7\n\j\c\d\1\s\8\c\y\g\5\k\h\2\h\d\v\r\q\8\q\l\0\h\z\v\1\0\w\3\d\m\e\k\7\g\r\t\e\b\0\r\d\f\g\i\h\c\m\0\v\j\h\t\u\d\m\8\7\l\2\w\f\8\y\x\j\x\c\b\g\n\5\u\q\0\b\a\b\l\w\b\2\a\d\5\y\0\y\8\k\w\g\0\y\h\s\y\8\k\1\f\c\h\a\j\y\o\j\e\u\f\v\a\g\s\2\p\d\x\b\n\8\e\s\s\8\x\v\g\n\4\x\a\
k\1\n\q\1\2\6\g\h\s\o\t\e\5\k\f\q\l\0\b\4\8\i\x\r\3\0\r\y\f\t\v\r\i\a\6\d\m\u\j\2\r\4\2\o\z\v\6\o\p\4\f\t\l\b\a\e\6\d\g\p\6\6\v\a\h\h\s\q\j\g\r\0\1\7\b\m\7\b\p\m\0\9\a\v\x\2\2\s\w\q\c\0\e\2\o\y\t\t\7\c\0\2\9\g\h\8\4\0\0\f\x\x\f\p\u\0\7\0\h\o\b\p\6\9\t\u\d\r\7\s\2\0\c\r\0\x\0\m\6\8\q\i\7\p\p\p\2\q\i\h\k\z\4\e\6\p\1\q\t\v\k\d\z\g\k\y\n\s\h\n\3\5\z\d\p\3\w\m\0\0\0\e\d\l\r\a\z\f\z\5\h\b\l\k\t\d\8\7\h\y\6\s\e\1\2\3\l\p\b\b\g\p\2\k\a\3\0\w\j\x\a\x\f\3\b\1\d\u\e\0\q\z\v\h\e\e\6\u\h\7\w\1\i\z\c\g\8\r\f\e\g\n\v\f\j\5\g\6\1\y\0\8\i\o\w\l\s\c\e\r\5\9\5\6\7\o\s\h\c\p\x\8\8\q\f\m\z\x\0\9\3\d\y\2\b\n\o\4\4\n\4\n\4\x\p\z\8\q\8\z\k\j\h\c\1\p\5\z\h\z\y\v\i\9\f\6\t\j\w\y\q\t\1\m\9\5\u\m\t\h\c\q\2\0\7\d\c\9\l\a\b\6\4\i\7\y\e\x\z\w\s\k\2\8\u\e\b\2\l\q\u\z\c\e\p\y\r\z\m\p\b\i\6\y\1\t\i\z\r\6\2\o\a\y\j\n\2\6\6\9\w\v\g\l\a\h\1\7\z\5\k\q\y\5\5\h\m\r\d\9\7\8\w\6\u\m\8\8\6\9\9\p\d\r\6\7\s\7\1\l\j\v\p\w\b\y\5\c\p\4\d\l\n\p\4\f\y\k\k\7\m\a\2\x\t\m\b\h\p\2\a\j\u\g\9\9\q\p\g\t\c\h\m\i\t\v\c\5\s\4\t\e\v\u\y\p\2\n\w\v\4\2\l\e\w\i\e\3\q\t\3\9\8\i\3\6\u\0\l\f\x\s\u\h\y\p\q\i\g\6\3\4\k\0\y\2\y\c\6\1\1\6\u\u\i\b\e\r\f\y\z\i\3\3\b\q\6\t\j\l\i\r\k\b\c\7\2\2\y\e\3\p\w\7\o\n\n\g\8\q\p\0\y\r\a\p\k\u\g\m\1\o\o\i\g\m\z\l\2\s\q\2\r\1\x\7\f\y\d\t\2\f\2\h\b\k\a\k\x\7\q\z\l\f\1\k\g\a\2\r\c\n\r\e\0\r\9\5\j\4\8\b\m\i\f\p\x\g\n\f\6\a\z\6\2\t\9\x\9\5\4\4\u\z\f\t\t\h\w\d\z\9\g\r\4\0\b\n\u\n\f\7\i\y\2\l\j\v\3\8\r\y\g\a\7\z\s\7\d\2\o\7\z\e\3\m\k\5\0\w\w\h\k\j\d\u\x\b\z\s\q\v\v\1\o\3\g\p\f\x\t\l\x\d\o\n\f\i\0\5\r\t\8\4\j\6\a\j\u\z\2\i\7\4\9\9\z\j\n\x\3\u\o\8\5\k\1\b\o\2\w\e\i\3\h\i\5\x\s\s\0\h\w\q\8\c\u\u\8\3\7\e\3\p\r\x\e\j\5\0\3\3\z\a\t\9\e\u\z\z\o\s\3\f\d\9\k\l\4\0\o\w\s\k\m\r\p\k\3\h\l\j\r\z\x\m\j\0\g\0\i\h\o\o\h\j\y\4\p\1\7\f\i\z\6\2\n\0\1\6\m\s\a\e\j\l\6\1\7\z\a\h\f\5\n\a\2\q\x\k\g\k\g\o\m\s\v\5\a\d\g\k\8\7\t\z\r\q\c\p\d\z\7\9\5\s\0\c\b\2\x\m\b\w\u\f\v\i\m\r\j\z\n\0\2\m\q\f\l\j\2\2\e\p\o\8\u\6\l\w\7\5\e\t\r\0\c\x\t\b\a\7\p\l\5\d\c\e\8\7\r\v\i\r\r\p\k\j\v\t\d\h\d\7\x\w\6\c\u\a\8\7\g\l\9\m\x\d\4\p\d\j\w\j\h\2\0\7\j\a\9\h\8\g\v\1\p\1\4\f\e\d\t\z\7\k\m\b\l\o\m\2\i\f\3\q\h\o\n\s\9\9\x\i\8\u\5\o\7\s\w\a\6\6\l\n\2\i\j\g\o\c\s\u\c\r\l\9\h\k\z\r\s\8\4\k\j\q\n\a\n\n\q\i\7\8\7\7\x\q\2\8\5\c\k\3\t\u\h\3\a\w\x\7\7\6\6\t\r\n\s\h\q\j\d\i\p\b\o\a\w\y\u\h\z\3\b\n\k\f\o\2\3\y\8\q\o\3\3\w\t\j\d\l\l\d\s\e\m\f\x\r\q\p\j\p\l\v\e\j\1\e\i\n\y\s\b\1\p\h\s\d\d\i\o\i\y\c\7\x\l\m\5\f\u\i\s\r\n\e\l\n\9\w\z\x\w\3\i\f\w\5\w\4\j\4\b\a\l\l\h\g\8\3\s\a\8\d\y\8\z\x\y\k\q\0\9\s\v\d\f\m\y\r\x\o\m\8\u\l\4\4\1\p\9\w\m\0\a\i\d\b\m\q\j\a\4\p\n\0\p\g\o\c\5\t\n\i\2\x\s\g\v\z\g\h\6\k\8\p\z\7\3\0\e\g\n\w\j\l\a\z\l\k\r\m\a\k\u\y\8\a\8\s\o\p\b\5\g\f\e\p\e\q\d\s\s\s\u\e\c\f\4\e\j\m\j\s\r\x\h\c\4\7\l\3\q\d\q\z\4\b\3\e\h\7\x\d\m\k\o\s\m\q\h\u\f\4\q\2\1\v\y\x\e\n\i\o\z\1\7\o\2\b\f\w\d\d\u\d\d\i\1\z\4\l\p\t\r\s\e\4\s\f\z\b\q\x\x\k\x\g\u\5\s\l\p\q\a\r\q\g\i\q\t\v\0\3\1\u\i\s\g\s\w\d\f\q\p\f\7\y\f\s\p\6\s\u\u\c\u\5\x\0\g\f\k\y\6\5\o\3\j\a\0\i\a\b\4\c\7\f\g\w\t\p\w\g\z\5\x\3\f\2\v\n\6\0\x\h\h\l\m\e\a\6\2\8\o\c\a\0\7\0\e\7\b\k\x\u\f\n\t\a\x\m\p\d\6\z\0\g\u\m\x\v\k\g\y\7\x\m\l\c\x\b\4\a\1\m\0\p\c\4\b\j\d\q\3\5\n\w\d\i\u\f\c\f\9\s\8\5\k\r\4\h\w\j\1\f\w\9\v\r\3\3\r\r\g\s\g\g\0\x\r\q\c\u\d\b\4\3\z\e\e\h\m\t\r\9\q\d\x\h\2\o\t\l\f\c\a\i\t\g\1\0\3\y\q\v\p\9\k\y\d\1\s\n\y\r\o\u\r\s\k\p\3\q\x\c\r\7\3\a\s\5\9\f\i\v\u\f\a\0\p\j\w\r\2\c\s\k\p\l\y\6\h\a\z\v\6\n\6\g\x\l\i\p\x\l\m\q\0\i\9\c\u\q\t\v\k\m\y\t\p\f\3\7\e\j\h\h\k\t\7\2\o\1\j\b\m\j\p\n\z\t\h\s\i\m\u\k\y\h\w\u\7\9\n\w\5\n\v\n\q\a\k\t\v\w\d\2\h\u\f\c\o\l\k\m\4\1\5\r\j\3\x\p\n\r\2\4\e\s\w\h\k\0\i\f\h\p\q\0\w\e\c\f\o\h\j\y\k\o\o\f\l\4\9\e\n\l\e\j\t\s\l
\v\g\g\z\f\7\s\k\d\5\i\1\p\z\i\6\m\p\q\z\9\l\k\0\i\o\d\n\a\m\l\f\1\g\x\f\g\4\8\e\4\b\p\s\l\j\0\o\p\9\n\6\c\f\4\7\2\e\0\m\3\i\n\i\v\4\q\6\f\7\6\h\5\n\e\0\o\y\m\z\1\x\h\8\c\w\k\p\u\z\y\1\w\w\0\1\l\r\5\h\q\a\5\x\z\x\1\j\5\8\e\r\s\y\y\e\6\3\1\a\1\9\p\7\b\w\i\u\9\w\k\y\e\4\i\k\z\i\i\j\9\9\o\s\2\b\y\6\z\o\c\n\b\k\0\x\z\7\f\s\c\6\d\1\3\1\5\9\4\1\f\y\u\w\9\d\4\l\z\w\7\a\j\o\w\u\w\b\3\6\y\7\s\j\p\1\k\6\v\2\m\i\k\1\v\1\1\z\o\s\m\m\i\4\t\a\f\s\6\1\3\n\1\z\m\5\2\x\n\f\z\5\l\m\0\x\2\z\8\o\8\s\i\t\9\y\n\n\5\k\9\v\w\l\0\e\b\s\m\w\r\t\f\4\j\w\g\9\6\w\x\g\6\k\o\3\b\0\s\c\o\h\t\e\n\d\z\z\f\9\7\q\f\u\o\6\4\4\7\y\x\5\5\k\2\i\o\q\j\9\r\6\2\p\c\p\4\y\3\k\m\x\p\t\2\t\i\p\i\h\k\9\k\u\z\u\a\9\1\s\3\0\q\h\a\h\k\x\3\2\d\7\x\a\4\r\0\e\q\a\h\s\u\r\a\2\p\t\h\f\m\6\x\v\b\9\2\0\b\9\n\x\7\8\4\8\h\3\2\j\n\s\0\z\c\7\s\d\q\2\3\y\5\t\l\8\m\j\y\3\z\h\e\l\d\r\8\k\g\r\f\r\i\z\f\f\8\v\y\m\m\p\i\n\a\m\k\0\4\w\5\3\e\d\7\d\3\j\o\p\6\s\j\b\u\p\c\n\s\a\6\s\k\8\s\y\g\0\c\c\x\0\t\e\z\f\1\b\f\o\q\a\w\y\s\a\7\p\3\g\i\m\h\n\n\i\l\t\9\o\j\8\f\o\m\z\x\d\z\a\5\9\o\l\f\l\b\s\q\0\d\4\b\e\a\8\g\p\d\f\c\c\9\8\4\i\f\2\1\2\6\n\3\8\l\y\1\x\w\0\5\g\2\2\k\f\t\2\1\y\3\l\k\0\o\i\g\t\w\c\6\m\c\6\o\u\5\7\l\n\7\5\2\7\g\a\v\s\a\w\i\o\c\v\b\x\u\7\6\a\d\h\5\n\t\z\0\v\s\r\w\o\i\0\6\o\l\e\1\o\l\y\0\q\y\w\d\v\w\w\y\8\4\t\m\j\6\q\l\8\4\x\0\e\x\v\a\3\0\9\x\b\0\1\g\4\e\7\p\n\4\i\q\b\c\u\f\h\p\t\q\s\x\5\9\2\d\4\6\8\g\3\z\2\2\s\v\c\r\9\z\p\s\f\z\m\w\7\l\g\a\m\d\6\8\i\u\j\e\b\w\8\k\y\l\5\p\4\a\9\4\n\6\a\y\s\x\4\w\h\a\0\6\6\0\6\i\p\7\t\b\n\c\w\7\c\7\y\c\7\c\p\h\0\7\3\c\7\t\4\6\i\0\g\v\3\6\7\s\g\7\g\h\9\w\t\r\b\u\v\s\2\e\w\n\z\s\v\h\p\j\i\y\n\h\f\d\f\9\m\i\e\3\m\d\5\b\0\g\y\x\f\4\p\h\b\5\k\y\9\y\w\0\9\f\e\n\k\p\r\c\u\v\9\7\0\8\n\i\w\b\p\n\9\q\o\w\t\5\7\9\d\9\w\w\g\z\b\n\s\u\a\o\x\a\m\s\d\v\n\5\e\2\r\o\3\6\v\h\c\w\2\8\0\5\x\u\o\s\x\3\z\m\8\b\2\k\w\t\l\y\g\n\1\k\q\1\d\a\n\h\y\y\g\m\k\c\h\3\g\e\i\l\8\n\7\y\p\l\u\1\z\q\q\x\u\5\y\o\z\d\n\v\9\p\s\v\q\0\j\b\j\6\j\l\d\k\1\g\z\n\k\e\7\l\n\p\0\d\p\z\k\n\g\v\3\l\a\6\4\8\u\x\d\0\q\7\3\u\m\q\v\t\c\c\g\c\v\1\l\7\t\n\u\m\9\3\6\k\6\r\g\x\9\w\n\m\c\q\7\t\5\p\0\r\p\8\1\v\f\z\2\z\k\t\g\n\g\z\p\1\u\8\d\q\0\h\q\7\v\1\4\6\s\x\5\k\q\n\u\2\h\j\w\v\5\h\7\g\4\j\y\k\h\n\k\k\2\o\g\m\8\2\6\k\b\w\q\1\k\0\x\5\z\2\8\0\g\0\b\1\j\5\8\j\m\7\j\q\a\j\c\0\6\l\8\3\9\x\m\c\1\2\d\0\f\9\3\h\0\h\y\k\1\n\x\s\q\n\d\8\h\9\k\q\p\c\n\q\e\g\0\f\7\y\2\q\9\f\k\0\q\9\h\8\y\e\7\h\i\s\4\y\z\k\g\a\8\3\z\2\e\e\5\z\9\s\o\9\h\1\v\k\q\1\p\5\4\a\j\n\p\7\0\2\7\4\l\p ]] 00:41:11.729 ************************************ 00:41:11.729 END TEST dd_rw_offset 00:41:11.729 ************************************ 00:41:11.729 00:41:11.729 real 0m4.225s 00:41:11.729 user 0m3.598s 00:41:11.729 sys 0m0.479s 00:41:11.729 01:07:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:11.729 01:07:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:41:11.729 01:07:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:41:11.729 01:07:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:41:11.729 01:07:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:41:11.729 01:07:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:41:11.729 01:07:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:41:11.729 01:07:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:41:11.729 01:07:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:41:11.729 01:07:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # 
gen_conf 00:41:11.729 01:07:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:41:11.729 01:07:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:41:11.729 01:07:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:41:11.729 { 00:41:11.729 "subsystems": [ 00:41:11.729 { 00:41:11.729 "subsystem": "bdev", 00:41:11.729 "config": [ 00:41:11.729 { 00:41:11.729 "params": { 00:41:11.729 "trtype": "pcie", 00:41:11.729 "traddr": "0000:00:10.0", 00:41:11.729 "name": "Nvme0" 00:41:11.729 }, 00:41:11.729 "method": "bdev_nvme_attach_controller" 00:41:11.729 }, 00:41:11.729 { 00:41:11.729 "method": "bdev_wait_for_examine" 00:41:11.729 } 00:41:11.729 ] 00:41:11.729 } 00:41:11.729 ] 00:41:11.729 } 00:41:11.729 [2024-07-25 01:07:34.202710] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:41:11.729 [2024-07-25 01:07:34.202926] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166095 ] 00:41:11.988 [2024-07-25 01:07:34.383935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:11.988 [2024-07-25 01:07:34.577895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:13.934  Copying: 1024/1024 [kB] (average 1000 MBps) 00:41:13.934 00:41:13.934 01:07:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:13.934 ************************************ 00:41:13.934 END TEST spdk_dd_basic_rw 00:41:13.934 ************************************ 00:41:13.934 00:41:13.934 real 0m49.468s 00:41:13.934 user 0m41.469s 00:41:13.934 sys 0m6.283s 00:41:13.934 01:07:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:13.934 01:07:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:41:13.934 01:07:36 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:41:13.934 01:07:36 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:13.934 01:07:36 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:13.934 01:07:36 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:41:13.934 ************************************ 00:41:13.934 START TEST spdk_dd_posix 00:41:13.934 ************************************ 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:41:13.934 * Looking for test storage... 
00:41:13.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:41:13.934 * First test run, using AIO 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:41:13.934 ************************************ 00:41:13.934 START TEST dd_flag_append 00:41:13.934 ************************************ 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=urnd7la7lmclgfytk9kmxzi1dndry765 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=72bbw0uswing9ltsnjdzn1hqhor8mr47 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s urnd7la7lmclgfytk9kmxzi1dndry765 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 72bbw0uswing9ltsnjdzn1hqhor8mr47 00:41:13.934 01:07:36 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:41:13.934 [2024-07-25 01:07:36.467672] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:41:13.934 [2024-07-25 01:07:36.468066] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166174 ] 00:41:14.194 [2024-07-25 01:07:36.647939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:14.194 [2024-07-25 01:07:36.845354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:16.143  Copying: 32/32 [B] (average 31 kBps) 00:41:16.143 00:41:16.143 ************************************ 00:41:16.143 END TEST dd_flag_append 00:41:16.143 ************************************ 00:41:16.143 01:07:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 72bbw0uswing9ltsnjdzn1hqhor8mr47urnd7la7lmclgfytk9kmxzi1dndry765 == \7\2\b\b\w\0\u\s\w\i\n\g\9\l\t\s\n\j\d\z\n\1\h\q\h\o\r\8\m\r\4\7\u\r\n\d\7\l\a\7\l\m\c\l\g\f\y\t\k\9\k\m\x\z\i\1\d\n\d\r\y\7\6\5 ]] 00:41:16.143 00:41:16.143 real 0m2.098s 00:41:16.143 user 0m1.718s 00:41:16.143 sys 0m0.245s 00:41:16.143 01:07:38 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:16.143 01:07:38 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:41:16.143 01:07:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:41:16.143 01:07:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:16.143 01:07:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:16.143 01:07:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:41:16.143 ************************************ 00:41:16.143 START TEST dd_flag_directory 00:41:16.143 ************************************ 00:41:16.143 01:07:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:41:16.144 01:07:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:16.144 01:07:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:41:16.144 01:07:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:16.144 01:07:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:16.144 01:07:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:16.144 01:07:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:16.144 01:07:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:16.144 01:07:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:16.144 01:07:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:16.144 01:07:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 
-- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:16.144 01:07:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:16.144 01:07:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:16.144 [2024-07-25 01:07:38.628367] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:41:16.144 [2024-07-25 01:07:38.628799] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166234 ] 00:41:16.411 [2024-07-25 01:07:38.808315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:16.411 [2024-07-25 01:07:39.000247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:16.670 [2024-07-25 01:07:39.272324] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:41:16.670 [2024-07-25 01:07:39.272665] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:41:16.670 [2024-07-25 01:07:39.272732] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:17.607 [2024-07-25 01:07:40.125094] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:41:18.174 01:07:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:41:18.174 01:07:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:18.174 01:07:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:41:18.174 01:07:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:41:18.174 01:07:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:41:18.174 01:07:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:18.174 01:07:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:41:18.174 01:07:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:41:18.174 01:07:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:41:18.174 01:07:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:18.174 01:07:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:18.174 01:07:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:18.174 01:07:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:18.174 01:07:40 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:18.174 01:07:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:18.174 01:07:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:18.174 01:07:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:18.174 01:07:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:41:18.174 [2024-07-25 01:07:40.660488] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:41:18.174 [2024-07-25 01:07:40.661041] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166262 ] 00:41:18.432 [2024-07-25 01:07:40.840455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:18.432 [2024-07-25 01:07:41.031981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:18.691 [2024-07-25 01:07:41.310892] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:41:18.691 [2024-07-25 01:07:41.311198] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:41:18.691 [2024-07-25 01:07:41.311266] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:19.627 [2024-07-25 01:07:42.151332] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:41:20.195 01:07:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:41:20.195 01:07:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:20.195 01:07:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:41:20.195 01:07:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:41:20.195 01:07:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:41:20.195 01:07:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:20.195 00:41:20.195 real 0m4.051s 00:41:20.195 user 0m3.361s 00:41:20.195 sys 0m0.480s 00:41:20.195 01:07:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:20.195 01:07:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:41:20.195 ************************************ 00:41:20.195 END TEST dd_flag_directory 00:41:20.195 ************************************ 00:41:20.195 01:07:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:41:20.195 01:07:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:20.195 01:07:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:20.195 01:07:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:41:20.195 ************************************ 
00:41:20.195 START TEST dd_flag_nofollow 00:41:20.195 ************************************ 00:41:20.195 01:07:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:41:20.195 01:07:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:41:20.195 01:07:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:41:20.195 01:07:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:41:20.195 01:07:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:41:20.195 01:07:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:20.195 01:07:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:41:20.195 01:07:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:20.195 01:07:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:20.195 01:07:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:20.195 01:07:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:20.195 01:07:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:20.195 01:07:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:20.195 01:07:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:20.195 01:07:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:20.195 01:07:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:20.195 01:07:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:20.195 [2024-07-25 01:07:42.740977] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:41:20.195 [2024-07-25 01:07:42.741366] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166314 ] 00:41:20.453 [2024-07-25 01:07:42.920509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:20.453 [2024-07-25 01:07:43.099360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:21.020 [2024-07-25 01:07:43.383221] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:41:21.020 [2024-07-25 01:07:43.383546] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:41:21.020 [2024-07-25 01:07:43.383616] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:21.585 [2024-07-25 01:07:44.224237] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:41:22.152 01:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:41:22.152 01:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:22.152 01:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:41:22.152 01:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:41:22.152 01:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:41:22.152 01:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:22.152 01:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:41:22.152 01:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:41:22.152 01:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:41:22.152 01:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:22.152 01:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:22.152 01:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:22.152 01:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:22.152 01:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:22.152 01:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:22.152 01:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:22.152 01:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:41:22.152 01:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:41:22.152 [2024-07-25 01:07:44.720903] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:41:22.152 [2024-07-25 01:07:44.721298] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166342 ] 00:41:22.411 [2024-07-25 01:07:44.879937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:22.667 [2024-07-25 01:07:45.077072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:22.924 [2024-07-25 01:07:45.377886] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:41:22.924 [2024-07-25 01:07:45.378137] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:41:22.924 [2024-07-25 01:07:45.378234] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:23.913 [2024-07-25 01:07:46.198443] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:41:24.172 01:07:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:41:24.172 01:07:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:24.172 01:07:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:41:24.172 01:07:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:41:24.172 01:07:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:41:24.172 01:07:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:24.172 01:07:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:41:24.172 01:07:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:41:24.172 01:07:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:41:24.172 01:07:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:24.172 [2024-07-25 01:07:46.741154] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:41:24.172 [2024-07-25 01:07:46.741354] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166362 ] 00:41:24.430 [2024-07-25 01:07:46.920063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:24.688 [2024-07-25 01:07:47.102642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:26.319  Copying: 512/512 [B] (average 500 kBps) 00:41:26.319 00:41:26.319 ************************************ 00:41:26.319 END TEST dd_flag_nofollow 00:41:26.319 ************************************ 00:41:26.319 01:07:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ fffh61xcckqu5llke6fahw7xp83gp2iekboeocuf6kk90p9y9svzqewbhpycl1a50mryacvuva1944lv6ijjzrize8lflawn7zwmjd8advp2dmn3h4zwjjhc6kz0szg2njmmp83xe5qwfjudl490h6pvjrwjlkerzu8q4x9kmht2bklyak1rujm2dz7ogv1gprm54n2ozj10jb6d4d3reqweksxtdwgjc3jxn6n4d6l3spcum7eknzko6otqsbfgduucl0h1i1knftq0agkfy4xidyv67e4bqkxgg8d9wocm4fxa3xooa8o68v04zwtl98vil64in1f44rzyn8gr6chttfojwuj3jmhbeetwku3z2cf7cp7crguysg78efr72wmus3e0ip5ciensk6g7p62ei52ynft0uvnncv7kskni7sc9beswrnllxpmkyirawypqhqs1z1gtqdi31yemji913sx761j8loox1o3u9oimgfafrz4pq5tzfet0zj87 == \f\f\f\h\6\1\x\c\c\k\q\u\5\l\l\k\e\6\f\a\h\w\7\x\p\8\3\g\p\2\i\e\k\b\o\e\o\c\u\f\6\k\k\9\0\p\9\y\9\s\v\z\q\e\w\b\h\p\y\c\l\1\a\5\0\m\r\y\a\c\v\u\v\a\1\9\4\4\l\v\6\i\j\j\z\r\i\z\e\8\l\f\l\a\w\n\7\z\w\m\j\d\8\a\d\v\p\2\d\m\n\3\h\4\z\w\j\j\h\c\6\k\z\0\s\z\g\2\n\j\m\m\p\8\3\x\e\5\q\w\f\j\u\d\l\4\9\0\h\6\p\v\j\r\w\j\l\k\e\r\z\u\8\q\4\x\9\k\m\h\t\2\b\k\l\y\a\k\1\r\u\j\m\2\d\z\7\o\g\v\1\g\p\r\m\5\4\n\2\o\z\j\1\0\j\b\6\d\4\d\3\r\e\q\w\e\k\s\x\t\d\w\g\j\c\3\j\x\n\6\n\4\d\6\l\3\s\p\c\u\m\7\e\k\n\z\k\o\6\o\t\q\s\b\f\g\d\u\u\c\l\0\h\1\i\1\k\n\f\t\q\0\a\g\k\f\y\4\x\i\d\y\v\6\7\e\4\b\q\k\x\g\g\8\d\9\w\o\c\m\4\f\x\a\3\x\o\o\a\8\o\6\8\v\0\4\z\w\t\l\9\8\v\i\l\6\4\i\n\1\f\4\4\r\z\y\n\8\g\r\6\c\h\t\t\f\o\j\w\u\j\3\j\m\h\b\e\e\t\w\k\u\3\z\2\c\f\7\c\p\7\c\r\g\u\y\s\g\7\8\e\f\r\7\2\w\m\u\s\3\e\0\i\p\5\c\i\e\n\s\k\6\g\7\p\6\2\e\i\5\2\y\n\f\t\0\u\v\n\n\c\v\7\k\s\k\n\i\7\s\c\9\b\e\s\w\r\n\l\l\x\p\m\k\y\i\r\a\w\y\p\q\h\q\s\1\z\1\g\t\q\d\i\3\1\y\e\m\j\i\9\1\3\s\x\7\6\1\j\8\l\o\o\x\1\o\3\u\9\o\i\m\g\f\a\f\r\z\4\p\q\5\t\z\f\e\t\0\z\j\8\7 ]] 00:41:26.319 00:41:26.319 real 0m6.065s 00:41:26.319 user 0m5.047s 00:41:26.319 sys 0m0.679s 00:41:26.319 01:07:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:26.319 01:07:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:41:26.319 01:07:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:41:26.319 01:07:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:26.319 01:07:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:26.319 01:07:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:41:26.319 ************************************ 00:41:26.319 START TEST dd_flag_noatime 00:41:26.319 ************************************ 00:41:26.319 01:07:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:41:26.319 01:07:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:41:26.319 01:07:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:41:26.320 01:07:48 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:41:26.320 01:07:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:41:26.320 01:07:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:41:26.320 01:07:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:26.320 01:07:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721869667 00:41:26.320 01:07:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:26.320 01:07:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721869668 00:41:26.320 01:07:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:41:27.254 01:07:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:27.254 [2024-07-25 01:07:49.884901] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:41:27.254 [2024-07-25 01:07:49.885118] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166434 ] 00:41:27.512 [2024-07-25 01:07:50.068519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:27.771 [2024-07-25 01:07:50.295010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:29.407  Copying: 512/512 [B] (average 500 kBps) 00:41:29.407 00:41:29.407 01:07:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:29.407 01:07:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721869667 )) 00:41:29.407 01:07:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:29.407 01:07:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721869668 )) 00:41:29.407 01:07:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:29.407 [2024-07-25 01:07:51.991524] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
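The noatime check running above records the access time of dd.dump0 with stat --printf=%X, sleeps one second, and copies the file with --iflag=noatime; the atime must still equal the recorded value (1721869667). A second copy without the flag, started just above, must then move the atime forward. Condensed into a few lines (variable handling is illustrative; the stat, sleep and spdk_dd calls mirror the trace):

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
atime_if=$(stat --printf=%X dd.dump0)                 # access time before any copy
sleep 1
"$DD" --if=dd.dump0 --iflag=noatime --of=dd.dump1     # read must not touch atime
(( $(stat --printf=%X dd.dump0) == atime_if )) || echo "noatime did not preserve atime"
"$DD" --if=dd.dump0 --of=dd.dump1                     # plain read without noatime
(( $(stat --printf=%X dd.dump0) > atime_if )) || echo "atime did not advance without noatime"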
00:41:29.407 [2024-07-25 01:07:51.991730] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166464 ] 00:41:29.772 [2024-07-25 01:07:52.169703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:29.772 [2024-07-25 01:07:52.354136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:31.423  Copying: 512/512 [B] (average 500 kBps) 00:41:31.423 00:41:31.423 01:07:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:31.423 01:07:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721869672 )) 00:41:31.423 00:41:31.423 real 0m5.184s 00:41:31.423 user 0m3.438s 00:41:31.423 sys 0m0.482s 00:41:31.423 01:07:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:31.423 01:07:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:41:31.423 ************************************ 00:41:31.423 END TEST dd_flag_noatime 00:41:31.423 ************************************ 00:41:31.423 01:07:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:41:31.423 01:07:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:31.423 01:07:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:31.423 01:07:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:41:31.423 ************************************ 00:41:31.423 START TEST dd_flags_misc 00:41:31.423 ************************************ 00:41:31.423 01:07:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:41:31.423 01:07:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:41:31.423 01:07:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:41:31.423 01:07:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:41:31.423 01:07:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:41:31.423 01:07:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:41:31.423 01:07:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:41:31.423 01:07:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:41:31.423 01:07:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:31.423 01:07:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:41:31.682 [2024-07-25 01:07:54.098408] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
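dd_flags_misc, which begins above, pushes the same 512-byte payload through every combination of a read flag and a write flag: flags_ro=(direct nonblock) feeds --iflag, and flags_rw adds sync and dsync for --oflag. Each run ends in one of the long [[ ... == ... ]] lines, which simply compare the random bytes written to dd.dump0 with what arrived in dd.dump1. The loop has roughly this shape (a sketch reconstructed from the trace, not the verbatim posix.sh source):

flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
for flag_ro in "${flags_ro[@]}"; do
  for flag_rw in "${flags_rw[@]}"; do
    "$DD" --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
    # compare the 512 generated bytes with what landed in dd.dump1
    [[ $(< dd.dump1) == "$(< dd.dump0)" ]] || echo "payload mismatch for $flag_ro/$flag_rw"
  done
done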
00:41:31.682 [2024-07-25 01:07:54.098543] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166510 ] 00:41:31.682 [2024-07-25 01:07:54.255012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:31.942 [2024-07-25 01:07:54.434233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:33.577  Copying: 512/512 [B] (average 500 kBps) 00:41:33.577 00:41:33.577 01:07:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 0lnliksrtpmkz8d2we73forsubb39w2al3yy3yrhlp9fxgqh93p9fwnzee1al6w5ru0lina1joeg0jdx7d1uvbbrof5xcxjdmuzc99gly0yg6x9unviqv1t2tlmohlu9tr6jpn1d2lrn6xxcnskqddm4vvbw0hz35319jga87g8fix34rfkf8gpy1is1blsyv73gghfo7oyl3ie5fxf0vfs9tg422x5o3pscqa2chwieie1h125gid8wbkv3shzi5m0lsgjk5fu3daiaqixlsunxp6oqfsippy7l9ogca5xruray4kvjtqv63uwkkb6z8fbk0y7lete0sfiuzaks85q11svvivosuuydiz1506ncxrxfbujvz290wx6tf7x8kujmdci5e8gpcuyy5doqsct038lgq96ist81bzmvzh10pgu2t8kmy4201s12tlvgkkw6gdnxe5s6lkdhro50f62s1a4r8wcip8778jbomf6uoq1xefqp31aqk9ygc12k == \0\l\n\l\i\k\s\r\t\p\m\k\z\8\d\2\w\e\7\3\f\o\r\s\u\b\b\3\9\w\2\a\l\3\y\y\3\y\r\h\l\p\9\f\x\g\q\h\9\3\p\9\f\w\n\z\e\e\1\a\l\6\w\5\r\u\0\l\i\n\a\1\j\o\e\g\0\j\d\x\7\d\1\u\v\b\b\r\o\f\5\x\c\x\j\d\m\u\z\c\9\9\g\l\y\0\y\g\6\x\9\u\n\v\i\q\v\1\t\2\t\l\m\o\h\l\u\9\t\r\6\j\p\n\1\d\2\l\r\n\6\x\x\c\n\s\k\q\d\d\m\4\v\v\b\w\0\h\z\3\5\3\1\9\j\g\a\8\7\g\8\f\i\x\3\4\r\f\k\f\8\g\p\y\1\i\s\1\b\l\s\y\v\7\3\g\g\h\f\o\7\o\y\l\3\i\e\5\f\x\f\0\v\f\s\9\t\g\4\2\2\x\5\o\3\p\s\c\q\a\2\c\h\w\i\e\i\e\1\h\1\2\5\g\i\d\8\w\b\k\v\3\s\h\z\i\5\m\0\l\s\g\j\k\5\f\u\3\d\a\i\a\q\i\x\l\s\u\n\x\p\6\o\q\f\s\i\p\p\y\7\l\9\o\g\c\a\5\x\r\u\r\a\y\4\k\v\j\t\q\v\6\3\u\w\k\k\b\6\z\8\f\b\k\0\y\7\l\e\t\e\0\s\f\i\u\z\a\k\s\8\5\q\1\1\s\v\v\i\v\o\s\u\u\y\d\i\z\1\5\0\6\n\c\x\r\x\f\b\u\j\v\z\2\9\0\w\x\6\t\f\7\x\8\k\u\j\m\d\c\i\5\e\8\g\p\c\u\y\y\5\d\o\q\s\c\t\0\3\8\l\g\q\9\6\i\s\t\8\1\b\z\m\v\z\h\1\0\p\g\u\2\t\8\k\m\y\4\2\0\1\s\1\2\t\l\v\g\k\k\w\6\g\d\n\x\e\5\s\6\l\k\d\h\r\o\5\0\f\6\2\s\1\a\4\r\8\w\c\i\p\8\7\7\8\j\b\o\m\f\6\u\o\q\1\x\e\f\q\p\3\1\a\q\k\9\y\g\c\1\2\k ]] 00:41:33.577 01:07:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:33.577 01:07:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:41:33.577 [2024-07-25 01:07:56.102705] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:41:33.577 [2024-07-25 01:07:56.102904] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166542 ] 00:41:33.835 [2024-07-25 01:07:56.275752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:33.835 [2024-07-25 01:07:56.459354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:35.781  Copying: 512/512 [B] (average 500 kBps) 00:41:35.781 00:41:35.781 01:07:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 0lnliksrtpmkz8d2we73forsubb39w2al3yy3yrhlp9fxgqh93p9fwnzee1al6w5ru0lina1joeg0jdx7d1uvbbrof5xcxjdmuzc99gly0yg6x9unviqv1t2tlmohlu9tr6jpn1d2lrn6xxcnskqddm4vvbw0hz35319jga87g8fix34rfkf8gpy1is1blsyv73gghfo7oyl3ie5fxf0vfs9tg422x5o3pscqa2chwieie1h125gid8wbkv3shzi5m0lsgjk5fu3daiaqixlsunxp6oqfsippy7l9ogca5xruray4kvjtqv63uwkkb6z8fbk0y7lete0sfiuzaks85q11svvivosuuydiz1506ncxrxfbujvz290wx6tf7x8kujmdci5e8gpcuyy5doqsct038lgq96ist81bzmvzh10pgu2t8kmy4201s12tlvgkkw6gdnxe5s6lkdhro50f62s1a4r8wcip8778jbomf6uoq1xefqp31aqk9ygc12k == \0\l\n\l\i\k\s\r\t\p\m\k\z\8\d\2\w\e\7\3\f\o\r\s\u\b\b\3\9\w\2\a\l\3\y\y\3\y\r\h\l\p\9\f\x\g\q\h\9\3\p\9\f\w\n\z\e\e\1\a\l\6\w\5\r\u\0\l\i\n\a\1\j\o\e\g\0\j\d\x\7\d\1\u\v\b\b\r\o\f\5\x\c\x\j\d\m\u\z\c\9\9\g\l\y\0\y\g\6\x\9\u\n\v\i\q\v\1\t\2\t\l\m\o\h\l\u\9\t\r\6\j\p\n\1\d\2\l\r\n\6\x\x\c\n\s\k\q\d\d\m\4\v\v\b\w\0\h\z\3\5\3\1\9\j\g\a\8\7\g\8\f\i\x\3\4\r\f\k\f\8\g\p\y\1\i\s\1\b\l\s\y\v\7\3\g\g\h\f\o\7\o\y\l\3\i\e\5\f\x\f\0\v\f\s\9\t\g\4\2\2\x\5\o\3\p\s\c\q\a\2\c\h\w\i\e\i\e\1\h\1\2\5\g\i\d\8\w\b\k\v\3\s\h\z\i\5\m\0\l\s\g\j\k\5\f\u\3\d\a\i\a\q\i\x\l\s\u\n\x\p\6\o\q\f\s\i\p\p\y\7\l\9\o\g\c\a\5\x\r\u\r\a\y\4\k\v\j\t\q\v\6\3\u\w\k\k\b\6\z\8\f\b\k\0\y\7\l\e\t\e\0\s\f\i\u\z\a\k\s\8\5\q\1\1\s\v\v\i\v\o\s\u\u\y\d\i\z\1\5\0\6\n\c\x\r\x\f\b\u\j\v\z\2\9\0\w\x\6\t\f\7\x\8\k\u\j\m\d\c\i\5\e\8\g\p\c\u\y\y\5\d\o\q\s\c\t\0\3\8\l\g\q\9\6\i\s\t\8\1\b\z\m\v\z\h\1\0\p\g\u\2\t\8\k\m\y\4\2\0\1\s\1\2\t\l\v\g\k\k\w\6\g\d\n\x\e\5\s\6\l\k\d\h\r\o\5\0\f\6\2\s\1\a\4\r\8\w\c\i\p\8\7\7\8\j\b\o\m\f\6\u\o\q\1\x\e\f\q\p\3\1\a\q\k\9\y\g\c\1\2\k ]] 00:41:35.781 01:07:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:35.781 01:07:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:41:35.781 [2024-07-25 01:07:58.119531] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:41:35.781 [2024-07-25 01:07:58.119688] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166567 ] 00:41:35.781 [2024-07-25 01:07:58.275411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:36.040 [2024-07-25 01:07:58.465280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:37.677  Copying: 512/512 [B] (average 250 kBps) 00:41:37.677 00:41:37.677 01:08:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 0lnliksrtpmkz8d2we73forsubb39w2al3yy3yrhlp9fxgqh93p9fwnzee1al6w5ru0lina1joeg0jdx7d1uvbbrof5xcxjdmuzc99gly0yg6x9unviqv1t2tlmohlu9tr6jpn1d2lrn6xxcnskqddm4vvbw0hz35319jga87g8fix34rfkf8gpy1is1blsyv73gghfo7oyl3ie5fxf0vfs9tg422x5o3pscqa2chwieie1h125gid8wbkv3shzi5m0lsgjk5fu3daiaqixlsunxp6oqfsippy7l9ogca5xruray4kvjtqv63uwkkb6z8fbk0y7lete0sfiuzaks85q11svvivosuuydiz1506ncxrxfbujvz290wx6tf7x8kujmdci5e8gpcuyy5doqsct038lgq96ist81bzmvzh10pgu2t8kmy4201s12tlvgkkw6gdnxe5s6lkdhro50f62s1a4r8wcip8778jbomf6uoq1xefqp31aqk9ygc12k == \0\l\n\l\i\k\s\r\t\p\m\k\z\8\d\2\w\e\7\3\f\o\r\s\u\b\b\3\9\w\2\a\l\3\y\y\3\y\r\h\l\p\9\f\x\g\q\h\9\3\p\9\f\w\n\z\e\e\1\a\l\6\w\5\r\u\0\l\i\n\a\1\j\o\e\g\0\j\d\x\7\d\1\u\v\b\b\r\o\f\5\x\c\x\j\d\m\u\z\c\9\9\g\l\y\0\y\g\6\x\9\u\n\v\i\q\v\1\t\2\t\l\m\o\h\l\u\9\t\r\6\j\p\n\1\d\2\l\r\n\6\x\x\c\n\s\k\q\d\d\m\4\v\v\b\w\0\h\z\3\5\3\1\9\j\g\a\8\7\g\8\f\i\x\3\4\r\f\k\f\8\g\p\y\1\i\s\1\b\l\s\y\v\7\3\g\g\h\f\o\7\o\y\l\3\i\e\5\f\x\f\0\v\f\s\9\t\g\4\2\2\x\5\o\3\p\s\c\q\a\2\c\h\w\i\e\i\e\1\h\1\2\5\g\i\d\8\w\b\k\v\3\s\h\z\i\5\m\0\l\s\g\j\k\5\f\u\3\d\a\i\a\q\i\x\l\s\u\n\x\p\6\o\q\f\s\i\p\p\y\7\l\9\o\g\c\a\5\x\r\u\r\a\y\4\k\v\j\t\q\v\6\3\u\w\k\k\b\6\z\8\f\b\k\0\y\7\l\e\t\e\0\s\f\i\u\z\a\k\s\8\5\q\1\1\s\v\v\i\v\o\s\u\u\y\d\i\z\1\5\0\6\n\c\x\r\x\f\b\u\j\v\z\2\9\0\w\x\6\t\f\7\x\8\k\u\j\m\d\c\i\5\e\8\g\p\c\u\y\y\5\d\o\q\s\c\t\0\3\8\l\g\q\9\6\i\s\t\8\1\b\z\m\v\z\h\1\0\p\g\u\2\t\8\k\m\y\4\2\0\1\s\1\2\t\l\v\g\k\k\w\6\g\d\n\x\e\5\s\6\l\k\d\h\r\o\5\0\f\6\2\s\1\a\4\r\8\w\c\i\p\8\7\7\8\j\b\o\m\f\6\u\o\q\1\x\e\f\q\p\3\1\a\q\k\9\y\g\c\1\2\k ]] 00:41:37.677 01:08:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:37.677 01:08:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:41:37.677 [2024-07-25 01:08:00.133342] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:41:37.677 [2024-07-25 01:08:00.133559] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166590 ] 00:41:37.677 [2024-07-25 01:08:00.311820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:37.937 [2024-07-25 01:08:00.503265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:39.574  Copying: 512/512 [B] (average 166 kBps) 00:41:39.574 00:41:39.574 01:08:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 0lnliksrtpmkz8d2we73forsubb39w2al3yy3yrhlp9fxgqh93p9fwnzee1al6w5ru0lina1joeg0jdx7d1uvbbrof5xcxjdmuzc99gly0yg6x9unviqv1t2tlmohlu9tr6jpn1d2lrn6xxcnskqddm4vvbw0hz35319jga87g8fix34rfkf8gpy1is1blsyv73gghfo7oyl3ie5fxf0vfs9tg422x5o3pscqa2chwieie1h125gid8wbkv3shzi5m0lsgjk5fu3daiaqixlsunxp6oqfsippy7l9ogca5xruray4kvjtqv63uwkkb6z8fbk0y7lete0sfiuzaks85q11svvivosuuydiz1506ncxrxfbujvz290wx6tf7x8kujmdci5e8gpcuyy5doqsct038lgq96ist81bzmvzh10pgu2t8kmy4201s12tlvgkkw6gdnxe5s6lkdhro50f62s1a4r8wcip8778jbomf6uoq1xefqp31aqk9ygc12k == \0\l\n\l\i\k\s\r\t\p\m\k\z\8\d\2\w\e\7\3\f\o\r\s\u\b\b\3\9\w\2\a\l\3\y\y\3\y\r\h\l\p\9\f\x\g\q\h\9\3\p\9\f\w\n\z\e\e\1\a\l\6\w\5\r\u\0\l\i\n\a\1\j\o\e\g\0\j\d\x\7\d\1\u\v\b\b\r\o\f\5\x\c\x\j\d\m\u\z\c\9\9\g\l\y\0\y\g\6\x\9\u\n\v\i\q\v\1\t\2\t\l\m\o\h\l\u\9\t\r\6\j\p\n\1\d\2\l\r\n\6\x\x\c\n\s\k\q\d\d\m\4\v\v\b\w\0\h\z\3\5\3\1\9\j\g\a\8\7\g\8\f\i\x\3\4\r\f\k\f\8\g\p\y\1\i\s\1\b\l\s\y\v\7\3\g\g\h\f\o\7\o\y\l\3\i\e\5\f\x\f\0\v\f\s\9\t\g\4\2\2\x\5\o\3\p\s\c\q\a\2\c\h\w\i\e\i\e\1\h\1\2\5\g\i\d\8\w\b\k\v\3\s\h\z\i\5\m\0\l\s\g\j\k\5\f\u\3\d\a\i\a\q\i\x\l\s\u\n\x\p\6\o\q\f\s\i\p\p\y\7\l\9\o\g\c\a\5\x\r\u\r\a\y\4\k\v\j\t\q\v\6\3\u\w\k\k\b\6\z\8\f\b\k\0\y\7\l\e\t\e\0\s\f\i\u\z\a\k\s\8\5\q\1\1\s\v\v\i\v\o\s\u\u\y\d\i\z\1\5\0\6\n\c\x\r\x\f\b\u\j\v\z\2\9\0\w\x\6\t\f\7\x\8\k\u\j\m\d\c\i\5\e\8\g\p\c\u\y\y\5\d\o\q\s\c\t\0\3\8\l\g\q\9\6\i\s\t\8\1\b\z\m\v\z\h\1\0\p\g\u\2\t\8\k\m\y\4\2\0\1\s\1\2\t\l\v\g\k\k\w\6\g\d\n\x\e\5\s\6\l\k\d\h\r\o\5\0\f\6\2\s\1\a\4\r\8\w\c\i\p\8\7\7\8\j\b\o\m\f\6\u\o\q\1\x\e\f\q\p\3\1\a\q\k\9\y\g\c\1\2\k ]] 00:41:39.574 01:08:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:41:39.574 01:08:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:41:39.574 01:08:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:41:39.574 01:08:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:41:39.574 01:08:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:39.574 01:08:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:41:39.574 [2024-07-25 01:08:02.218506] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:41:39.574 [2024-07-25 01:08:02.218721] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166620 ] 00:41:39.832 [2024-07-25 01:08:02.396100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:40.091 [2024-07-25 01:08:02.583233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:41.728  Copying: 512/512 [B] (average 500 kBps) 00:41:41.728 00:41:41.729 01:08:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ x7m9hlnr8o0p7eonac3bklw7w3u2icpafehwes8tl15tt9uv40g8bd8sltl0u06sdm026fv018uysz8ywxs2wiqbebynb7nu886xyocjgbxhrbaldk41ozjfvpc9yddzwpr5w5ggu1bllyawi1kuv467zggc6uns6lmhcz27sl5d35yszdfibyp2ri13kbpyz4kyxsl2j8b3t36pje7fvnx5gqaxelgruwk97742d3yngj03bu8k48wd0emxh2co7p7unlgkzambd47k2m901nnieu37h41f047htxe79tpe0l3b7zgsrygompykp0h172hpamkngxgyo1u1vq7elxitg03lq0arv55gay35xpqiupcoqux9gbi3t8nl30rgf0qsd65rg1lh8lgl5klmsv0e7bp9a77j30az2wpslt4r7c5ingzwev555id8v3xkn8c0oak6tfdqakv2s9axauhlr3xadjzxqlau28ynqujjxajj24ggd47l87o3796d == \x\7\m\9\h\l\n\r\8\o\0\p\7\e\o\n\a\c\3\b\k\l\w\7\w\3\u\2\i\c\p\a\f\e\h\w\e\s\8\t\l\1\5\t\t\9\u\v\4\0\g\8\b\d\8\s\l\t\l\0\u\0\6\s\d\m\0\2\6\f\v\0\1\8\u\y\s\z\8\y\w\x\s\2\w\i\q\b\e\b\y\n\b\7\n\u\8\8\6\x\y\o\c\j\g\b\x\h\r\b\a\l\d\k\4\1\o\z\j\f\v\p\c\9\y\d\d\z\w\p\r\5\w\5\g\g\u\1\b\l\l\y\a\w\i\1\k\u\v\4\6\7\z\g\g\c\6\u\n\s\6\l\m\h\c\z\2\7\s\l\5\d\3\5\y\s\z\d\f\i\b\y\p\2\r\i\1\3\k\b\p\y\z\4\k\y\x\s\l\2\j\8\b\3\t\3\6\p\j\e\7\f\v\n\x\5\g\q\a\x\e\l\g\r\u\w\k\9\7\7\4\2\d\3\y\n\g\j\0\3\b\u\8\k\4\8\w\d\0\e\m\x\h\2\c\o\7\p\7\u\n\l\g\k\z\a\m\b\d\4\7\k\2\m\9\0\1\n\n\i\e\u\3\7\h\4\1\f\0\4\7\h\t\x\e\7\9\t\p\e\0\l\3\b\7\z\g\s\r\y\g\o\m\p\y\k\p\0\h\1\7\2\h\p\a\m\k\n\g\x\g\y\o\1\u\1\v\q\7\e\l\x\i\t\g\0\3\l\q\0\a\r\v\5\5\g\a\y\3\5\x\p\q\i\u\p\c\o\q\u\x\9\g\b\i\3\t\8\n\l\3\0\r\g\f\0\q\s\d\6\5\r\g\1\l\h\8\l\g\l\5\k\l\m\s\v\0\e\7\b\p\9\a\7\7\j\3\0\a\z\2\w\p\s\l\t\4\r\7\c\5\i\n\g\z\w\e\v\5\5\5\i\d\8\v\3\x\k\n\8\c\0\o\a\k\6\t\f\d\q\a\k\v\2\s\9\a\x\a\u\h\l\r\3\x\a\d\j\z\x\q\l\a\u\2\8\y\n\q\u\j\j\x\a\j\j\2\4\g\g\d\4\7\l\8\7\o\3\7\9\6\d ]] 00:41:41.729 01:08:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:41.729 01:08:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:41:41.729 [2024-07-25 01:08:04.269926] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:41:41.729 [2024-07-25 01:08:04.270143] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166651 ] 00:41:41.988 [2024-07-25 01:08:04.446261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:41.988 [2024-07-25 01:08:04.628475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:43.932  Copying: 512/512 [B] (average 500 kBps) 00:41:43.932 00:41:43.932 01:08:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ x7m9hlnr8o0p7eonac3bklw7w3u2icpafehwes8tl15tt9uv40g8bd8sltl0u06sdm026fv018uysz8ywxs2wiqbebynb7nu886xyocjgbxhrbaldk41ozjfvpc9yddzwpr5w5ggu1bllyawi1kuv467zggc6uns6lmhcz27sl5d35yszdfibyp2ri13kbpyz4kyxsl2j8b3t36pje7fvnx5gqaxelgruwk97742d3yngj03bu8k48wd0emxh2co7p7unlgkzambd47k2m901nnieu37h41f047htxe79tpe0l3b7zgsrygompykp0h172hpamkngxgyo1u1vq7elxitg03lq0arv55gay35xpqiupcoqux9gbi3t8nl30rgf0qsd65rg1lh8lgl5klmsv0e7bp9a77j30az2wpslt4r7c5ingzwev555id8v3xkn8c0oak6tfdqakv2s9axauhlr3xadjzxqlau28ynqujjxajj24ggd47l87o3796d == \x\7\m\9\h\l\n\r\8\o\0\p\7\e\o\n\a\c\3\b\k\l\w\7\w\3\u\2\i\c\p\a\f\e\h\w\e\s\8\t\l\1\5\t\t\9\u\v\4\0\g\8\b\d\8\s\l\t\l\0\u\0\6\s\d\m\0\2\6\f\v\0\1\8\u\y\s\z\8\y\w\x\s\2\w\i\q\b\e\b\y\n\b\7\n\u\8\8\6\x\y\o\c\j\g\b\x\h\r\b\a\l\d\k\4\1\o\z\j\f\v\p\c\9\y\d\d\z\w\p\r\5\w\5\g\g\u\1\b\l\l\y\a\w\i\1\k\u\v\4\6\7\z\g\g\c\6\u\n\s\6\l\m\h\c\z\2\7\s\l\5\d\3\5\y\s\z\d\f\i\b\y\p\2\r\i\1\3\k\b\p\y\z\4\k\y\x\s\l\2\j\8\b\3\t\3\6\p\j\e\7\f\v\n\x\5\g\q\a\x\e\l\g\r\u\w\k\9\7\7\4\2\d\3\y\n\g\j\0\3\b\u\8\k\4\8\w\d\0\e\m\x\h\2\c\o\7\p\7\u\n\l\g\k\z\a\m\b\d\4\7\k\2\m\9\0\1\n\n\i\e\u\3\7\h\4\1\f\0\4\7\h\t\x\e\7\9\t\p\e\0\l\3\b\7\z\g\s\r\y\g\o\m\p\y\k\p\0\h\1\7\2\h\p\a\m\k\n\g\x\g\y\o\1\u\1\v\q\7\e\l\x\i\t\g\0\3\l\q\0\a\r\v\5\5\g\a\y\3\5\x\p\q\i\u\p\c\o\q\u\x\9\g\b\i\3\t\8\n\l\3\0\r\g\f\0\q\s\d\6\5\r\g\1\l\h\8\l\g\l\5\k\l\m\s\v\0\e\7\b\p\9\a\7\7\j\3\0\a\z\2\w\p\s\l\t\4\r\7\c\5\i\n\g\z\w\e\v\5\5\5\i\d\8\v\3\x\k\n\8\c\0\o\a\k\6\t\f\d\q\a\k\v\2\s\9\a\x\a\u\h\l\r\3\x\a\d\j\z\x\q\l\a\u\2\8\y\n\q\u\j\j\x\a\j\j\2\4\g\g\d\4\7\l\8\7\o\3\7\9\6\d ]] 00:41:43.932 01:08:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:43.932 01:08:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:41:43.932 [2024-07-25 01:08:06.314747] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:41:43.932 [2024-07-25 01:08:06.314911] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166676 ] 00:41:43.932 [2024-07-25 01:08:06.473701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:44.193 [2024-07-25 01:08:06.658293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:45.830  Copying: 512/512 [B] (average 250 kBps) 00:41:45.830 00:41:45.830 01:08:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ x7m9hlnr8o0p7eonac3bklw7w3u2icpafehwes8tl15tt9uv40g8bd8sltl0u06sdm026fv018uysz8ywxs2wiqbebynb7nu886xyocjgbxhrbaldk41ozjfvpc9yddzwpr5w5ggu1bllyawi1kuv467zggc6uns6lmhcz27sl5d35yszdfibyp2ri13kbpyz4kyxsl2j8b3t36pje7fvnx5gqaxelgruwk97742d3yngj03bu8k48wd0emxh2co7p7unlgkzambd47k2m901nnieu37h41f047htxe79tpe0l3b7zgsrygompykp0h172hpamkngxgyo1u1vq7elxitg03lq0arv55gay35xpqiupcoqux9gbi3t8nl30rgf0qsd65rg1lh8lgl5klmsv0e7bp9a77j30az2wpslt4r7c5ingzwev555id8v3xkn8c0oak6tfdqakv2s9axauhlr3xadjzxqlau28ynqujjxajj24ggd47l87o3796d == \x\7\m\9\h\l\n\r\8\o\0\p\7\e\o\n\a\c\3\b\k\l\w\7\w\3\u\2\i\c\p\a\f\e\h\w\e\s\8\t\l\1\5\t\t\9\u\v\4\0\g\8\b\d\8\s\l\t\l\0\u\0\6\s\d\m\0\2\6\f\v\0\1\8\u\y\s\z\8\y\w\x\s\2\w\i\q\b\e\b\y\n\b\7\n\u\8\8\6\x\y\o\c\j\g\b\x\h\r\b\a\l\d\k\4\1\o\z\j\f\v\p\c\9\y\d\d\z\w\p\r\5\w\5\g\g\u\1\b\l\l\y\a\w\i\1\k\u\v\4\6\7\z\g\g\c\6\u\n\s\6\l\m\h\c\z\2\7\s\l\5\d\3\5\y\s\z\d\f\i\b\y\p\2\r\i\1\3\k\b\p\y\z\4\k\y\x\s\l\2\j\8\b\3\t\3\6\p\j\e\7\f\v\n\x\5\g\q\a\x\e\l\g\r\u\w\k\9\7\7\4\2\d\3\y\n\g\j\0\3\b\u\8\k\4\8\w\d\0\e\m\x\h\2\c\o\7\p\7\u\n\l\g\k\z\a\m\b\d\4\7\k\2\m\9\0\1\n\n\i\e\u\3\7\h\4\1\f\0\4\7\h\t\x\e\7\9\t\p\e\0\l\3\b\7\z\g\s\r\y\g\o\m\p\y\k\p\0\h\1\7\2\h\p\a\m\k\n\g\x\g\y\o\1\u\1\v\q\7\e\l\x\i\t\g\0\3\l\q\0\a\r\v\5\5\g\a\y\3\5\x\p\q\i\u\p\c\o\q\u\x\9\g\b\i\3\t\8\n\l\3\0\r\g\f\0\q\s\d\6\5\r\g\1\l\h\8\l\g\l\5\k\l\m\s\v\0\e\7\b\p\9\a\7\7\j\3\0\a\z\2\w\p\s\l\t\4\r\7\c\5\i\n\g\z\w\e\v\5\5\5\i\d\8\v\3\x\k\n\8\c\0\o\a\k\6\t\f\d\q\a\k\v\2\s\9\a\x\a\u\h\l\r\3\x\a\d\j\z\x\q\l\a\u\2\8\y\n\q\u\j\j\x\a\j\j\2\4\g\g\d\4\7\l\8\7\o\3\7\9\6\d ]] 00:41:45.830 01:08:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:45.830 01:08:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:41:45.830 [2024-07-25 01:08:08.359037] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:41:45.830 [2024-07-25 01:08:08.359537] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166702 ] 00:41:46.088 [2024-07-25 01:08:08.555781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:46.347 [2024-07-25 01:08:08.747208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:47.981  Copying: 512/512 [B] (average 250 kBps) 00:41:47.981 00:41:47.981 ************************************ 00:41:47.981 END TEST dd_flags_misc 00:41:47.981 ************************************ 00:41:47.981 01:08:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ x7m9hlnr8o0p7eonac3bklw7w3u2icpafehwes8tl15tt9uv40g8bd8sltl0u06sdm026fv018uysz8ywxs2wiqbebynb7nu886xyocjgbxhrbaldk41ozjfvpc9yddzwpr5w5ggu1bllyawi1kuv467zggc6uns6lmhcz27sl5d35yszdfibyp2ri13kbpyz4kyxsl2j8b3t36pje7fvnx5gqaxelgruwk97742d3yngj03bu8k48wd0emxh2co7p7unlgkzambd47k2m901nnieu37h41f047htxe79tpe0l3b7zgsrygompykp0h172hpamkngxgyo1u1vq7elxitg03lq0arv55gay35xpqiupcoqux9gbi3t8nl30rgf0qsd65rg1lh8lgl5klmsv0e7bp9a77j30az2wpslt4r7c5ingzwev555id8v3xkn8c0oak6tfdqakv2s9axauhlr3xadjzxqlau28ynqujjxajj24ggd47l87o3796d == \x\7\m\9\h\l\n\r\8\o\0\p\7\e\o\n\a\c\3\b\k\l\w\7\w\3\u\2\i\c\p\a\f\e\h\w\e\s\8\t\l\1\5\t\t\9\u\v\4\0\g\8\b\d\8\s\l\t\l\0\u\0\6\s\d\m\0\2\6\f\v\0\1\8\u\y\s\z\8\y\w\x\s\2\w\i\q\b\e\b\y\n\b\7\n\u\8\8\6\x\y\o\c\j\g\b\x\h\r\b\a\l\d\k\4\1\o\z\j\f\v\p\c\9\y\d\d\z\w\p\r\5\w\5\g\g\u\1\b\l\l\y\a\w\i\1\k\u\v\4\6\7\z\g\g\c\6\u\n\s\6\l\m\h\c\z\2\7\s\l\5\d\3\5\y\s\z\d\f\i\b\y\p\2\r\i\1\3\k\b\p\y\z\4\k\y\x\s\l\2\j\8\b\3\t\3\6\p\j\e\7\f\v\n\x\5\g\q\a\x\e\l\g\r\u\w\k\9\7\7\4\2\d\3\y\n\g\j\0\3\b\u\8\k\4\8\w\d\0\e\m\x\h\2\c\o\7\p\7\u\n\l\g\k\z\a\m\b\d\4\7\k\2\m\9\0\1\n\n\i\e\u\3\7\h\4\1\f\0\4\7\h\t\x\e\7\9\t\p\e\0\l\3\b\7\z\g\s\r\y\g\o\m\p\y\k\p\0\h\1\7\2\h\p\a\m\k\n\g\x\g\y\o\1\u\1\v\q\7\e\l\x\i\t\g\0\3\l\q\0\a\r\v\5\5\g\a\y\3\5\x\p\q\i\u\p\c\o\q\u\x\9\g\b\i\3\t\8\n\l\3\0\r\g\f\0\q\s\d\6\5\r\g\1\l\h\8\l\g\l\5\k\l\m\s\v\0\e\7\b\p\9\a\7\7\j\3\0\a\z\2\w\p\s\l\t\4\r\7\c\5\i\n\g\z\w\e\v\5\5\5\i\d\8\v\3\x\k\n\8\c\0\o\a\k\6\t\f\d\q\a\k\v\2\s\9\a\x\a\u\h\l\r\3\x\a\d\j\z\x\q\l\a\u\2\8\y\n\q\u\j\j\x\a\j\j\2\4\g\g\d\4\7\l\8\7\o\3\7\9\6\d ]] 00:41:47.981 00:41:47.981 real 0m16.349s 00:41:47.981 user 0m13.529s 00:41:47.981 sys 0m1.768s 00:41:47.981 01:08:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:47.981 01:08:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:41:47.981 01:08:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:41:47.981 01:08:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:41:47.981 * Second test run, using AIO 00:41:47.981 01:08:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:41:47.981 01:08:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:41:47.981 01:08:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:47.981 01:08:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:47.981 01:08:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:41:47.981 ************************************ 00:41:47.981 START TEST dd_flag_append_forced_aio 00:41:47.981 ************************************ 00:41:47.981 01:08:10 
spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:41:47.981 01:08:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:41:47.981 01:08:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:41:47.981 01:08:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:41:47.981 01:08:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:41:47.981 01:08:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:41:47.982 01:08:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=4to19czyw36i0hzjt8x7uhgv8np8w5yq 00:41:47.982 01:08:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:41:47.982 01:08:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:41:47.982 01:08:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:41:47.982 01:08:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=h6v1uf89ir0pjggm4bzvr6i4k0ctq9w5 00:41:47.982 01:08:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 4to19czyw36i0hzjt8x7uhgv8np8w5yq 00:41:47.982 01:08:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s h6v1uf89ir0pjggm4bzvr6i4k0ctq9w5 00:41:47.982 01:08:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:41:47.982 [2024-07-25 01:08:10.542783] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
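From here on the same posix.sh checks are repeated with --aio (DD_APP+=("--aio") in the trace above), which routes the copies through spdk_dd's AIO path. The append test that has just started writes two fresh 32-byte strings into the dump files, copies dump0 onto dump1 with --oflag=append, and then expects dump1 to hold the old dump1 followed by dump0; that is the [[ h6v1...4to1... == ... ]] comparison a few lines below. Sketched out (the redirections into the dump files are inferred, not shown in the trace):

dump0=4to19czyw36i0hzjt8x7uhgv8np8w5yq      # 32 random bytes from gen_bytes, as printed above
dump1=h6v1uf89ir0pjggm4bzvr6i4k0ctq9w5
printf %s "$dump0" > dd.dump0
printf %s "$dump1" > dd.dump1
"$DD" --aio --if=dd.dump0 --of=dd.dump1 --oflag=append
# append (O_APPEND) places the copied data after dd.dump1's existing contents:
[[ $(< dd.dump1) == "${dump1}${dump0}" ]] || echo "append flag not honoured"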
00:41:47.982 [2024-07-25 01:08:10.543204] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166750 ] 00:41:48.240 [2024-07-25 01:08:10.726920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:48.497 [2024-07-25 01:08:10.913317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:50.129  Copying: 32/32 [B] (average 31 kBps) 00:41:50.129 00:41:50.129 ************************************ 00:41:50.129 END TEST dd_flag_append_forced_aio 00:41:50.129 ************************************ 00:41:50.129 01:08:12 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ h6v1uf89ir0pjggm4bzvr6i4k0ctq9w54to19czyw36i0hzjt8x7uhgv8np8w5yq == \h\6\v\1\u\f\8\9\i\r\0\p\j\g\g\m\4\b\z\v\r\6\i\4\k\0\c\t\q\9\w\5\4\t\o\1\9\c\z\y\w\3\6\i\0\h\z\j\t\8\x\7\u\h\g\v\8\n\p\8\w\5\y\q ]] 00:41:50.129 00:41:50.129 real 0m2.080s 00:41:50.129 user 0m1.695s 00:41:50.129 sys 0m0.250s 00:41:50.129 01:08:12 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:50.129 01:08:12 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:41:50.129 01:08:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:41:50.129 01:08:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:50.129 01:08:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:50.129 01:08:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:41:50.129 ************************************ 00:41:50.129 START TEST dd_flag_directory_forced_aio 00:41:50.129 ************************************ 00:41:50.129 01:08:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:41:50.129 01:08:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:50.129 01:08:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:41:50.129 01:08:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:50.130 01:08:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:50.130 01:08:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:50.130 01:08:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:50.130 01:08:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:50.130 01:08:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:50.130 01:08:12 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:50.130 01:08:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:50.130 01:08:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:50.130 01:08:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:50.130 [2024-07-25 01:08:12.679480] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:41:50.130 [2024-07-25 01:08:12.679861] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166804 ] 00:41:50.388 [2024-07-25 01:08:12.859882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:50.647 [2024-07-25 01:08:13.047428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:50.907 [2024-07-25 01:08:13.335229] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:41:50.907 [2024-07-25 01:08:13.335587] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:41:50.907 [2024-07-25 01:08:13.335673] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:51.841 [2024-07-25 01:08:14.183254] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:41:52.099 01:08:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:41:52.099 01:08:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:52.099 01:08:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:41:52.100 01:08:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:41:52.100 01:08:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:41:52.100 01:08:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:52.100 01:08:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:41:52.100 01:08:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:41:52.100 01:08:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:41:52.100 01:08:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:52.100 01:08:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:52.100 01:08:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:52.100 01:08:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:52.100 01:08:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:52.100 01:08:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:52.100 01:08:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:52.100 01:08:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:52.100 01:08:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:41:52.100 [2024-07-25 01:08:14.691612] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:41:52.100 [2024-07-25 01:08:14.691962] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166833 ] 00:41:52.358 [2024-07-25 01:08:14.848866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:52.617 [2024-07-25 01:08:15.034639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:52.876 [2024-07-25 01:08:15.328189] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:41:52.876 [2024-07-25 01:08:15.328460] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:41:52.876 [2024-07-25 01:08:15.328536] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:53.813 [2024-07-25 01:08:16.160615] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:41:54.072 01:08:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:41:54.072 01:08:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:54.072 01:08:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:41:54.072 ************************************ 00:41:54.072 END TEST dd_flag_directory_forced_aio 00:41:54.072 ************************************ 00:41:54.072 01:08:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:41:54.072 01:08:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:41:54.072 01:08:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:54.072 00:41:54.072 real 0m4.009s 00:41:54.072 user 0m3.329s 00:41:54.072 sys 0m0.474s 00:41:54.072 01:08:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 
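The directory test that just finished is another negative check: --iflag=directory and --oflag=directory ask spdk_dd for a directory-style open of the target, so pointing them at the regular dd.dump0 file has to fail with "Not a directory" (ENOTDIR), and the NOT wrapper again turns that failure into a pass. The two runs reduce to (the trailing echo is illustrative):

"$DD" --aio --if=dd.dump0 --iflag=directory --of=dd.dump0 && echo "unexpected: directory flag ignored"
"$DD" --aio --if=dd.dump0 --of=dd.dump0 --oflag=directory && echo "unexpected: directory flag ignored"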
00:41:54.072 01:08:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:41:54.072 01:08:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:41:54.072 01:08:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:54.072 01:08:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:54.072 01:08:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:41:54.072 ************************************ 00:41:54.072 START TEST dd_flag_nofollow_forced_aio 00:41:54.072 ************************************ 00:41:54.072 01:08:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:41:54.072 01:08:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:41:54.072 01:08:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:41:54.072 01:08:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:41:54.072 01:08:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:41:54.072 01:08:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:54.072 01:08:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:41:54.072 01:08:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:54.072 01:08:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:54.072 01:08:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:54.072 01:08:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:54.072 01:08:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:54.072 01:08:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:54.072 01:08:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:54.072 01:08:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:54.072 01:08:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:54.072 01:08:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:54.332 [2024-07-25 01:08:16.751156] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:41:54.332 [2024-07-25 01:08:16.751375] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166884 ] 00:41:54.332 [2024-07-25 01:08:16.930687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:54.591 [2024-07-25 01:08:17.116766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:54.850 [2024-07-25 01:08:17.400558] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:41:54.850 [2024-07-25 01:08:17.400667] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:41:54.850 [2024-07-25 01:08:17.400698] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:55.787 [2024-07-25 01:08:18.234979] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:41:56.046 01:08:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:41:56.046 01:08:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:56.046 01:08:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:41:56.046 01:08:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:41:56.046 01:08:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:41:56.046 01:08:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:56.046 01:08:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:41:56.046 01:08:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:41:56.046 01:08:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:41:56.046 01:08:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:56.046 01:08:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:56.046 01:08:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:56.046 01:08:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:56.046 01:08:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # 
type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:56.046 01:08:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:56.046 01:08:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:56.046 01:08:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:56.046 01:08:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:41:56.305 [2024-07-25 01:08:18.752355] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:41:56.305 [2024-07-25 01:08:18.752515] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166912 ] 00:41:56.305 [2024-07-25 01:08:18.911067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:56.564 [2024-07-25 01:08:19.091147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:56.822 [2024-07-25 01:08:19.371250] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:41:56.822 [2024-07-25 01:08:19.371333] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:41:56.822 [2024-07-25 01:08:19.371365] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:57.759 [2024-07-25 01:08:20.180795] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:41:58.018 01:08:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:41:58.018 01:08:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:58.018 01:08:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:41:58.018 01:08:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:41:58.018 01:08:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:41:58.018 01:08:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:58.018 01:08:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:41:58.018 01:08:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:41:58.018 01:08:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:41:58.018 01:08:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:58.277 [2024-07-25 01:08:20.686777] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
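The es=216 / es=88 / es=1 bookkeeping that brackets every negative run in these traces comes from the NOT helper in autotest_common.sh: it validates the binary, runs the command that is supposed to fail, folds large exit statuses down, and succeeds only if the status ended up non-zero. Reconstructed from the trace tags (@636..@675), so treat it as a rough sketch rather than the verbatim helper:

NOT() {
    local es=0
    valid_exec_arg "$@"                  # the type -t / type -P resolution seen in the trace
    "$@" || es=$?                        # the wrapped command is expected to fail
    (( es > 128 )) && es=$((es - 128))   # e.g. 216 -> 88, 236 -> 108
    case "$es" in
        *) es=1 ;;                       # simplified; the real case maps specific statuses
    esac
    (( !es == 0 ))                       # success only when the wrapped command failed
}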
00:41:58.277 [2024-07-25 01:08:20.686931] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166934 ] 00:41:58.277 [2024-07-25 01:08:20.845437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:58.536 [2024-07-25 01:08:21.035928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:00.181  Copying: 512/512 [B] (average 500 kBps) 00:42:00.181 00:42:00.181 01:08:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ g3z53c6kn6hk8kk3jc8tbccue2rd6t8dqulch5kcg9fgo6omk3yjfk1m24p9e70d8nwhgx10qmmab7l0ehsqlk20kbkzqc6w5hx1xvzjupj6gnur8re6sgfej2xe0lc29gh811g4hsc0m839tbb36f0cywjag4hmjwofbty663af3y7llpw4pu9yr9m5960ccpb0zh49eyhmyd0t82l7k2119jedyo2969mmxrs5xojzb32c2odb4xmarobpg1uxaui5yjrvg4oocf04yo1gv720jtcuzbv4bid5fj2aqrlxrgl3xjz7k0sbfm9z29w96y6iaj7vxjvy1p1y3r5yj8iailrdswcyq833l27y41gaim7bi4ywg9geuui3ajub7c6o716euwyhhmioiul4zhjsc7pnvkh1lu6913iwmzsn89rufk6emxodiyqnd6vsk2t67d38jrs8vslspnv9tfb8t2sg635k5ssi3yz6vfurl3eishu0jmbi51wplb86 == \g\3\z\5\3\c\6\k\n\6\h\k\8\k\k\3\j\c\8\t\b\c\c\u\e\2\r\d\6\t\8\d\q\u\l\c\h\5\k\c\g\9\f\g\o\6\o\m\k\3\y\j\f\k\1\m\2\4\p\9\e\7\0\d\8\n\w\h\g\x\1\0\q\m\m\a\b\7\l\0\e\h\s\q\l\k\2\0\k\b\k\z\q\c\6\w\5\h\x\1\x\v\z\j\u\p\j\6\g\n\u\r\8\r\e\6\s\g\f\e\j\2\x\e\0\l\c\2\9\g\h\8\1\1\g\4\h\s\c\0\m\8\3\9\t\b\b\3\6\f\0\c\y\w\j\a\g\4\h\m\j\w\o\f\b\t\y\6\6\3\a\f\3\y\7\l\l\p\w\4\p\u\9\y\r\9\m\5\9\6\0\c\c\p\b\0\z\h\4\9\e\y\h\m\y\d\0\t\8\2\l\7\k\2\1\1\9\j\e\d\y\o\2\9\6\9\m\m\x\r\s\5\x\o\j\z\b\3\2\c\2\o\d\b\4\x\m\a\r\o\b\p\g\1\u\x\a\u\i\5\y\j\r\v\g\4\o\o\c\f\0\4\y\o\1\g\v\7\2\0\j\t\c\u\z\b\v\4\b\i\d\5\f\j\2\a\q\r\l\x\r\g\l\3\x\j\z\7\k\0\s\b\f\m\9\z\2\9\w\9\6\y\6\i\a\j\7\v\x\j\v\y\1\p\1\y\3\r\5\y\j\8\i\a\i\l\r\d\s\w\c\y\q\8\3\3\l\2\7\y\4\1\g\a\i\m\7\b\i\4\y\w\g\9\g\e\u\u\i\3\a\j\u\b\7\c\6\o\7\1\6\e\u\w\y\h\h\m\i\o\i\u\l\4\z\h\j\s\c\7\p\n\v\k\h\1\l\u\6\9\1\3\i\w\m\z\s\n\8\9\r\u\f\k\6\e\m\x\o\d\i\y\q\n\d\6\v\s\k\2\t\6\7\d\3\8\j\r\s\8\v\s\l\s\p\n\v\9\t\f\b\8\t\2\s\g\6\3\5\k\5\s\s\i\3\y\z\6\v\f\u\r\l\3\e\i\s\h\u\0\j\m\b\i\5\1\w\p\l\b\8\6 ]] 00:42:00.181 00:42:00.181 real 0m5.957s 00:42:00.181 user 0m4.949s 00:42:00.181 sys 0m0.679s 00:42:00.181 01:08:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:00.181 01:08:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:42:00.181 ************************************ 00:42:00.181 END TEST dd_flag_nofollow_forced_aio 00:42:00.181 ************************************ 00:42:00.181 01:08:22 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:42:00.181 01:08:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:00.181 01:08:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:00.181 01:08:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:42:00.181 ************************************ 00:42:00.181 START TEST dd_flag_noatime_forced_aio 00:42:00.181 ************************************ 00:42:00.181 01:08:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:42:00.181 01:08:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:42:00.181 01:08:22 
spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:42:00.181 01:08:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:42:00.181 01:08:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:42:00.181 01:08:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:42:00.181 01:08:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:42:00.181 01:08:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721869701 00:42:00.181 01:08:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:42:00.181 01:08:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721869702 00:42:00.181 01:08:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:42:01.117 01:08:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:42:01.375 [2024-07-25 01:08:23.783750] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:42:01.375 [2024-07-25 01:08:23.783978] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167004 ] 00:42:01.375 [2024-07-25 01:08:23.962950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:01.634 [2024-07-25 01:08:24.137990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:03.270  Copying: 512/512 [B] (average 500 kBps) 00:42:03.270 00:42:03.270 01:08:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:42:03.270 01:08:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721869701 )) 00:42:03.270 01:08:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:42:03.270 01:08:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721869702 )) 00:42:03.270 01:08:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:42:03.270 [2024-07-25 01:08:25.786460] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:42:03.270 [2024-07-25 01:08:25.786610] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167031 ] 00:42:03.529 [2024-07-25 01:08:25.942803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:03.529 [2024-07-25 01:08:26.117447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:05.166  Copying: 512/512 [B] (average 500 kBps) 00:42:05.166 00:42:05.166 01:08:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:42:05.166 01:08:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721869706 )) 00:42:05.166 00:42:05.166 real 0m5.040s 00:42:05.166 user 0m3.335s 00:42:05.166 sys 0m0.446s 00:42:05.166 01:08:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:05.166 01:08:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:42:05.166 ************************************ 00:42:05.166 END TEST dd_flag_noatime_forced_aio 00:42:05.166 ************************************ 00:42:05.166 01:08:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:42:05.166 01:08:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:05.166 01:08:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:05.166 01:08:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:42:05.166 ************************************ 00:42:05.166 START TEST dd_flags_misc_forced_aio 00:42:05.166 ************************************ 00:42:05.166 01:08:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:42:05.166 01:08:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:42:05.166 01:08:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:42:05.166 01:08:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:42:05.166 01:08:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:42:05.166 01:08:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:42:05.166 01:08:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:42:05.166 01:08:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:42:05.166 01:08:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:42:05.167 01:08:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:42:05.425 [2024-07-25 01:08:27.855139] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:42:05.425 [2024-07-25 01:08:27.855348] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167082 ] 00:42:05.425 [2024-07-25 01:08:28.035991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:05.684 [2024-07-25 01:08:28.209858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:07.356  Copying: 512/512 [B] (average 500 kBps) 00:42:07.356 00:42:07.356 01:08:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ dhwboxuap6fxc071eru07msqu90gme9hehzc08yr6c6uajfa19thjric2d061aisp1lmb8yzweron2j33lzqbt5pw4037zdjxplatq1zq2awp580ep0g6p1b86f2k77sysc8s862v3j6hk0n3kls2e17malz9ek932vjk9fkikshjl2faz28kkjoppbnfxjfb6rjiagse5r44ldzxgskppmnikrjovcbokepjvy5kekmffoprb4h1h33h10kclgsigp5alr3ukeszwrnun8mh0csryyq7038ck9dnhrwjmhtglbmh630cxy6rinerg6jncmsuse3wzzppuo4ujl1ys9ulmea6alyw2wasw03xizghwfrnxk636otgazvoe6995ie9etzowbo0x4b75atfkfoc3yrluq8lm6el0pw1330xpublwapl8crq0gycjy5k2av8bag93bf7rktxsbrgayx84qj9g8acfz51oro0zfissr4ejv38ijr6y88houo == \d\h\w\b\o\x\u\a\p\6\f\x\c\0\7\1\e\r\u\0\7\m\s\q\u\9\0\g\m\e\9\h\e\h\z\c\0\8\y\r\6\c\6\u\a\j\f\a\1\9\t\h\j\r\i\c\2\d\0\6\1\a\i\s\p\1\l\m\b\8\y\z\w\e\r\o\n\2\j\3\3\l\z\q\b\t\5\p\w\4\0\3\7\z\d\j\x\p\l\a\t\q\1\z\q\2\a\w\p\5\8\0\e\p\0\g\6\p\1\b\8\6\f\2\k\7\7\s\y\s\c\8\s\8\6\2\v\3\j\6\h\k\0\n\3\k\l\s\2\e\1\7\m\a\l\z\9\e\k\9\3\2\v\j\k\9\f\k\i\k\s\h\j\l\2\f\a\z\2\8\k\k\j\o\p\p\b\n\f\x\j\f\b\6\r\j\i\a\g\s\e\5\r\4\4\l\d\z\x\g\s\k\p\p\m\n\i\k\r\j\o\v\c\b\o\k\e\p\j\v\y\5\k\e\k\m\f\f\o\p\r\b\4\h\1\h\3\3\h\1\0\k\c\l\g\s\i\g\p\5\a\l\r\3\u\k\e\s\z\w\r\n\u\n\8\m\h\0\c\s\r\y\y\q\7\0\3\8\c\k\9\d\n\h\r\w\j\m\h\t\g\l\b\m\h\6\3\0\c\x\y\6\r\i\n\e\r\g\6\j\n\c\m\s\u\s\e\3\w\z\z\p\p\u\o\4\u\j\l\1\y\s\9\u\l\m\e\a\6\a\l\y\w\2\w\a\s\w\0\3\x\i\z\g\h\w\f\r\n\x\k\6\3\6\o\t\g\a\z\v\o\e\6\9\9\5\i\e\9\e\t\z\o\w\b\o\0\x\4\b\7\5\a\t\f\k\f\o\c\3\y\r\l\u\q\8\l\m\6\e\l\0\p\w\1\3\3\0\x\p\u\b\l\w\a\p\l\8\c\r\q\0\g\y\c\j\y\5\k\2\a\v\8\b\a\g\9\3\b\f\7\r\k\t\x\s\b\r\g\a\y\x\8\4\q\j\9\g\8\a\c\f\z\5\1\o\r\o\0\z\f\i\s\s\r\4\e\j\v\3\8\i\j\r\6\y\8\8\h\o\u\o ]] 00:42:07.356 01:08:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:42:07.356 01:08:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:42:07.356 [2024-07-25 01:08:29.870798] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:42:07.356 [2024-07-25 01:08:29.871017] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167101 ] 00:42:07.614 [2024-07-25 01:08:30.046615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:07.614 [2024-07-25 01:08:30.228846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:09.248  Copying: 512/512 [B] (average 500 kBps) 00:42:09.248 00:42:09.249 01:08:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ dhwboxuap6fxc071eru07msqu90gme9hehzc08yr6c6uajfa19thjric2d061aisp1lmb8yzweron2j33lzqbt5pw4037zdjxplatq1zq2awp580ep0g6p1b86f2k77sysc8s862v3j6hk0n3kls2e17malz9ek932vjk9fkikshjl2faz28kkjoppbnfxjfb6rjiagse5r44ldzxgskppmnikrjovcbokepjvy5kekmffoprb4h1h33h10kclgsigp5alr3ukeszwrnun8mh0csryyq7038ck9dnhrwjmhtglbmh630cxy6rinerg6jncmsuse3wzzppuo4ujl1ys9ulmea6alyw2wasw03xizghwfrnxk636otgazvoe6995ie9etzowbo0x4b75atfkfoc3yrluq8lm6el0pw1330xpublwapl8crq0gycjy5k2av8bag93bf7rktxsbrgayx84qj9g8acfz51oro0zfissr4ejv38ijr6y88houo == \d\h\w\b\o\x\u\a\p\6\f\x\c\0\7\1\e\r\u\0\7\m\s\q\u\9\0\g\m\e\9\h\e\h\z\c\0\8\y\r\6\c\6\u\a\j\f\a\1\9\t\h\j\r\i\c\2\d\0\6\1\a\i\s\p\1\l\m\b\8\y\z\w\e\r\o\n\2\j\3\3\l\z\q\b\t\5\p\w\4\0\3\7\z\d\j\x\p\l\a\t\q\1\z\q\2\a\w\p\5\8\0\e\p\0\g\6\p\1\b\8\6\f\2\k\7\7\s\y\s\c\8\s\8\6\2\v\3\j\6\h\k\0\n\3\k\l\s\2\e\1\7\m\a\l\z\9\e\k\9\3\2\v\j\k\9\f\k\i\k\s\h\j\l\2\f\a\z\2\8\k\k\j\o\p\p\b\n\f\x\j\f\b\6\r\j\i\a\g\s\e\5\r\4\4\l\d\z\x\g\s\k\p\p\m\n\i\k\r\j\o\v\c\b\o\k\e\p\j\v\y\5\k\e\k\m\f\f\o\p\r\b\4\h\1\h\3\3\h\1\0\k\c\l\g\s\i\g\p\5\a\l\r\3\u\k\e\s\z\w\r\n\u\n\8\m\h\0\c\s\r\y\y\q\7\0\3\8\c\k\9\d\n\h\r\w\j\m\h\t\g\l\b\m\h\6\3\0\c\x\y\6\r\i\n\e\r\g\6\j\n\c\m\s\u\s\e\3\w\z\z\p\p\u\o\4\u\j\l\1\y\s\9\u\l\m\e\a\6\a\l\y\w\2\w\a\s\w\0\3\x\i\z\g\h\w\f\r\n\x\k\6\3\6\o\t\g\a\z\v\o\e\6\9\9\5\i\e\9\e\t\z\o\w\b\o\0\x\4\b\7\5\a\t\f\k\f\o\c\3\y\r\l\u\q\8\l\m\6\e\l\0\p\w\1\3\3\0\x\p\u\b\l\w\a\p\l\8\c\r\q\0\g\y\c\j\y\5\k\2\a\v\8\b\a\g\9\3\b\f\7\r\k\t\x\s\b\r\g\a\y\x\8\4\q\j\9\g\8\a\c\f\z\5\1\o\r\o\0\z\f\i\s\s\r\4\e\j\v\3\8\i\j\r\6\y\8\8\h\o\u\o ]] 00:42:09.249 01:08:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:42:09.249 01:08:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:42:09.508 [2024-07-25 01:08:31.908146] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:42:09.508 [2024-07-25 01:08:31.908372] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167128 ] 00:42:09.508 [2024-07-25 01:08:32.086774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:09.766 [2024-07-25 01:08:32.268277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:11.397  Copying: 512/512 [B] (average 166 kBps) 00:42:11.397 00:42:11.397 01:08:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ dhwboxuap6fxc071eru07msqu90gme9hehzc08yr6c6uajfa19thjric2d061aisp1lmb8yzweron2j33lzqbt5pw4037zdjxplatq1zq2awp580ep0g6p1b86f2k77sysc8s862v3j6hk0n3kls2e17malz9ek932vjk9fkikshjl2faz28kkjoppbnfxjfb6rjiagse5r44ldzxgskppmnikrjovcbokepjvy5kekmffoprb4h1h33h10kclgsigp5alr3ukeszwrnun8mh0csryyq7038ck9dnhrwjmhtglbmh630cxy6rinerg6jncmsuse3wzzppuo4ujl1ys9ulmea6alyw2wasw03xizghwfrnxk636otgazvoe6995ie9etzowbo0x4b75atfkfoc3yrluq8lm6el0pw1330xpublwapl8crq0gycjy5k2av8bag93bf7rktxsbrgayx84qj9g8acfz51oro0zfissr4ejv38ijr6y88houo == \d\h\w\b\o\x\u\a\p\6\f\x\c\0\7\1\e\r\u\0\7\m\s\q\u\9\0\g\m\e\9\h\e\h\z\c\0\8\y\r\6\c\6\u\a\j\f\a\1\9\t\h\j\r\i\c\2\d\0\6\1\a\i\s\p\1\l\m\b\8\y\z\w\e\r\o\n\2\j\3\3\l\z\q\b\t\5\p\w\4\0\3\7\z\d\j\x\p\l\a\t\q\1\z\q\2\a\w\p\5\8\0\e\p\0\g\6\p\1\b\8\6\f\2\k\7\7\s\y\s\c\8\s\8\6\2\v\3\j\6\h\k\0\n\3\k\l\s\2\e\1\7\m\a\l\z\9\e\k\9\3\2\v\j\k\9\f\k\i\k\s\h\j\l\2\f\a\z\2\8\k\k\j\o\p\p\b\n\f\x\j\f\b\6\r\j\i\a\g\s\e\5\r\4\4\l\d\z\x\g\s\k\p\p\m\n\i\k\r\j\o\v\c\b\o\k\e\p\j\v\y\5\k\e\k\m\f\f\o\p\r\b\4\h\1\h\3\3\h\1\0\k\c\l\g\s\i\g\p\5\a\l\r\3\u\k\e\s\z\w\r\n\u\n\8\m\h\0\c\s\r\y\y\q\7\0\3\8\c\k\9\d\n\h\r\w\j\m\h\t\g\l\b\m\h\6\3\0\c\x\y\6\r\i\n\e\r\g\6\j\n\c\m\s\u\s\e\3\w\z\z\p\p\u\o\4\u\j\l\1\y\s\9\u\l\m\e\a\6\a\l\y\w\2\w\a\s\w\0\3\x\i\z\g\h\w\f\r\n\x\k\6\3\6\o\t\g\a\z\v\o\e\6\9\9\5\i\e\9\e\t\z\o\w\b\o\0\x\4\b\7\5\a\t\f\k\f\o\c\3\y\r\l\u\q\8\l\m\6\e\l\0\p\w\1\3\3\0\x\p\u\b\l\w\a\p\l\8\c\r\q\0\g\y\c\j\y\5\k\2\a\v\8\b\a\g\9\3\b\f\7\r\k\t\x\s\b\r\g\a\y\x\8\4\q\j\9\g\8\a\c\f\z\5\1\o\r\o\0\z\f\i\s\s\r\4\e\j\v\3\8\i\j\r\6\y\8\8\h\o\u\o ]] 00:42:11.397 01:08:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:42:11.397 01:08:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:42:11.397 [2024-07-25 01:08:33.905293] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:42:11.397 [2024-07-25 01:08:33.905800] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167155 ] 00:42:11.655 [2024-07-25 01:08:34.063323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:11.655 [2024-07-25 01:08:34.251347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:13.286  Copying: 512/512 [B] (average 166 kBps) 00:42:13.286 00:42:13.286 01:08:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ dhwboxuap6fxc071eru07msqu90gme9hehzc08yr6c6uajfa19thjric2d061aisp1lmb8yzweron2j33lzqbt5pw4037zdjxplatq1zq2awp580ep0g6p1b86f2k77sysc8s862v3j6hk0n3kls2e17malz9ek932vjk9fkikshjl2faz28kkjoppbnfxjfb6rjiagse5r44ldzxgskppmnikrjovcbokepjvy5kekmffoprb4h1h33h10kclgsigp5alr3ukeszwrnun8mh0csryyq7038ck9dnhrwjmhtglbmh630cxy6rinerg6jncmsuse3wzzppuo4ujl1ys9ulmea6alyw2wasw03xizghwfrnxk636otgazvoe6995ie9etzowbo0x4b75atfkfoc3yrluq8lm6el0pw1330xpublwapl8crq0gycjy5k2av8bag93bf7rktxsbrgayx84qj9g8acfz51oro0zfissr4ejv38ijr6y88houo == \d\h\w\b\o\x\u\a\p\6\f\x\c\0\7\1\e\r\u\0\7\m\s\q\u\9\0\g\m\e\9\h\e\h\z\c\0\8\y\r\6\c\6\u\a\j\f\a\1\9\t\h\j\r\i\c\2\d\0\6\1\a\i\s\p\1\l\m\b\8\y\z\w\e\r\o\n\2\j\3\3\l\z\q\b\t\5\p\w\4\0\3\7\z\d\j\x\p\l\a\t\q\1\z\q\2\a\w\p\5\8\0\e\p\0\g\6\p\1\b\8\6\f\2\k\7\7\s\y\s\c\8\s\8\6\2\v\3\j\6\h\k\0\n\3\k\l\s\2\e\1\7\m\a\l\z\9\e\k\9\3\2\v\j\k\9\f\k\i\k\s\h\j\l\2\f\a\z\2\8\k\k\j\o\p\p\b\n\f\x\j\f\b\6\r\j\i\a\g\s\e\5\r\4\4\l\d\z\x\g\s\k\p\p\m\n\i\k\r\j\o\v\c\b\o\k\e\p\j\v\y\5\k\e\k\m\f\f\o\p\r\b\4\h\1\h\3\3\h\1\0\k\c\l\g\s\i\g\p\5\a\l\r\3\u\k\e\s\z\w\r\n\u\n\8\m\h\0\c\s\r\y\y\q\7\0\3\8\c\k\9\d\n\h\r\w\j\m\h\t\g\l\b\m\h\6\3\0\c\x\y\6\r\i\n\e\r\g\6\j\n\c\m\s\u\s\e\3\w\z\z\p\p\u\o\4\u\j\l\1\y\s\9\u\l\m\e\a\6\a\l\y\w\2\w\a\s\w\0\3\x\i\z\g\h\w\f\r\n\x\k\6\3\6\o\t\g\a\z\v\o\e\6\9\9\5\i\e\9\e\t\z\o\w\b\o\0\x\4\b\7\5\a\t\f\k\f\o\c\3\y\r\l\u\q\8\l\m\6\e\l\0\p\w\1\3\3\0\x\p\u\b\l\w\a\p\l\8\c\r\q\0\g\y\c\j\y\5\k\2\a\v\8\b\a\g\9\3\b\f\7\r\k\t\x\s\b\r\g\a\y\x\8\4\q\j\9\g\8\a\c\f\z\5\1\o\r\o\0\z\f\i\s\s\r\4\e\j\v\3\8\i\j\r\6\y\8\8\h\o\u\o ]] 00:42:13.286 01:08:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:42:13.286 01:08:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:42:13.286 01:08:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:42:13.286 01:08:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:42:13.286 01:08:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:42:13.286 01:08:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:42:13.286 [2024-07-25 01:08:35.935113] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:42:13.286 [2024-07-25 01:08:35.935414] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167183 ] 00:42:13.544 [2024-07-25 01:08:36.118920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:13.803 [2024-07-25 01:08:36.294215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:15.446  Copying: 512/512 [B] (average 500 kBps) 00:42:15.446 00:42:15.447 01:08:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0n9r64cri22vhwsltkysssl7o6gr1ajrnum8bu2t0gx6th2e9kayeuy7irgmk1gl8d9f9ocuhj9o0s7kituoo0z7i4u06cxa31n3kr0ja1a4bebuu2dtldu5gcd6o5viy4r42ef7rzi6z2n6e8ln8n1gs48usqruocvgg49f035rs92rba41plptp0u8lq24f1mjpk5ppgtl6vltpr6ns9l2m7j5x5dbqrkpdr8yna9nga17x3mgjtu9u3lhc9b09mxf4sus0e7qgh1zn8ff6hl6p4doljckydm05mn9sujohn3yr9paf4dq32yueqqnhkcbgv7lrry1idu9zliyohgmt8d8pihy38beza63ovsks6xv3mwn9lja47fc6ujto0gtaktxxfsggzkijountvg6a9yo1c5k2s5v07s21fjj6gw2emwj1kacpa4oxbl28vnljm4l1th7ym344amis3je03m4wd6ymkbfw5y0xtwwauh1s4i23986v04qd5o3 == \0\n\9\r\6\4\c\r\i\2\2\v\h\w\s\l\t\k\y\s\s\s\l\7\o\6\g\r\1\a\j\r\n\u\m\8\b\u\2\t\0\g\x\6\t\h\2\e\9\k\a\y\e\u\y\7\i\r\g\m\k\1\g\l\8\d\9\f\9\o\c\u\h\j\9\o\0\s\7\k\i\t\u\o\o\0\z\7\i\4\u\0\6\c\x\a\3\1\n\3\k\r\0\j\a\1\a\4\b\e\b\u\u\2\d\t\l\d\u\5\g\c\d\6\o\5\v\i\y\4\r\4\2\e\f\7\r\z\i\6\z\2\n\6\e\8\l\n\8\n\1\g\s\4\8\u\s\q\r\u\o\c\v\g\g\4\9\f\0\3\5\r\s\9\2\r\b\a\4\1\p\l\p\t\p\0\u\8\l\q\2\4\f\1\m\j\p\k\5\p\p\g\t\l\6\v\l\t\p\r\6\n\s\9\l\2\m\7\j\5\x\5\d\b\q\r\k\p\d\r\8\y\n\a\9\n\g\a\1\7\x\3\m\g\j\t\u\9\u\3\l\h\c\9\b\0\9\m\x\f\4\s\u\s\0\e\7\q\g\h\1\z\n\8\f\f\6\h\l\6\p\4\d\o\l\j\c\k\y\d\m\0\5\m\n\9\s\u\j\o\h\n\3\y\r\9\p\a\f\4\d\q\3\2\y\u\e\q\q\n\h\k\c\b\g\v\7\l\r\r\y\1\i\d\u\9\z\l\i\y\o\h\g\m\t\8\d\8\p\i\h\y\3\8\b\e\z\a\6\3\o\v\s\k\s\6\x\v\3\m\w\n\9\l\j\a\4\7\f\c\6\u\j\t\o\0\g\t\a\k\t\x\x\f\s\g\g\z\k\i\j\o\u\n\t\v\g\6\a\9\y\o\1\c\5\k\2\s\5\v\0\7\s\2\1\f\j\j\6\g\w\2\e\m\w\j\1\k\a\c\p\a\4\o\x\b\l\2\8\v\n\l\j\m\4\l\1\t\h\7\y\m\3\4\4\a\m\i\s\3\j\e\0\3\m\4\w\d\6\y\m\k\b\f\w\5\y\0\x\t\w\w\a\u\h\1\s\4\i\2\3\9\8\6\v\0\4\q\d\5\o\3 ]] 00:42:15.447 01:08:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:42:15.447 01:08:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:42:15.447 [2024-07-25 01:08:37.949681] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:42:15.447 [2024-07-25 01:08:37.949892] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167208 ] 00:42:15.705 [2024-07-25 01:08:38.118975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:15.705 [2024-07-25 01:08:38.290818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:17.398  Copying: 512/512 [B] (average 500 kBps) 00:42:17.398 00:42:17.399 01:08:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0n9r64cri22vhwsltkysssl7o6gr1ajrnum8bu2t0gx6th2e9kayeuy7irgmk1gl8d9f9ocuhj9o0s7kituoo0z7i4u06cxa31n3kr0ja1a4bebuu2dtldu5gcd6o5viy4r42ef7rzi6z2n6e8ln8n1gs48usqruocvgg49f035rs92rba41plptp0u8lq24f1mjpk5ppgtl6vltpr6ns9l2m7j5x5dbqrkpdr8yna9nga17x3mgjtu9u3lhc9b09mxf4sus0e7qgh1zn8ff6hl6p4doljckydm05mn9sujohn3yr9paf4dq32yueqqnhkcbgv7lrry1idu9zliyohgmt8d8pihy38beza63ovsks6xv3mwn9lja47fc6ujto0gtaktxxfsggzkijountvg6a9yo1c5k2s5v07s21fjj6gw2emwj1kacpa4oxbl28vnljm4l1th7ym344amis3je03m4wd6ymkbfw5y0xtwwauh1s4i23986v04qd5o3 == \0\n\9\r\6\4\c\r\i\2\2\v\h\w\s\l\t\k\y\s\s\s\l\7\o\6\g\r\1\a\j\r\n\u\m\8\b\u\2\t\0\g\x\6\t\h\2\e\9\k\a\y\e\u\y\7\i\r\g\m\k\1\g\l\8\d\9\f\9\o\c\u\h\j\9\o\0\s\7\k\i\t\u\o\o\0\z\7\i\4\u\0\6\c\x\a\3\1\n\3\k\r\0\j\a\1\a\4\b\e\b\u\u\2\d\t\l\d\u\5\g\c\d\6\o\5\v\i\y\4\r\4\2\e\f\7\r\z\i\6\z\2\n\6\e\8\l\n\8\n\1\g\s\4\8\u\s\q\r\u\o\c\v\g\g\4\9\f\0\3\5\r\s\9\2\r\b\a\4\1\p\l\p\t\p\0\u\8\l\q\2\4\f\1\m\j\p\k\5\p\p\g\t\l\6\v\l\t\p\r\6\n\s\9\l\2\m\7\j\5\x\5\d\b\q\r\k\p\d\r\8\y\n\a\9\n\g\a\1\7\x\3\m\g\j\t\u\9\u\3\l\h\c\9\b\0\9\m\x\f\4\s\u\s\0\e\7\q\g\h\1\z\n\8\f\f\6\h\l\6\p\4\d\o\l\j\c\k\y\d\m\0\5\m\n\9\s\u\j\o\h\n\3\y\r\9\p\a\f\4\d\q\3\2\y\u\e\q\q\n\h\k\c\b\g\v\7\l\r\r\y\1\i\d\u\9\z\l\i\y\o\h\g\m\t\8\d\8\p\i\h\y\3\8\b\e\z\a\6\3\o\v\s\k\s\6\x\v\3\m\w\n\9\l\j\a\4\7\f\c\6\u\j\t\o\0\g\t\a\k\t\x\x\f\s\g\g\z\k\i\j\o\u\n\t\v\g\6\a\9\y\o\1\c\5\k\2\s\5\v\0\7\s\2\1\f\j\j\6\g\w\2\e\m\w\j\1\k\a\c\p\a\4\o\x\b\l\2\8\v\n\l\j\m\4\l\1\t\h\7\y\m\3\4\4\a\m\i\s\3\j\e\0\3\m\4\w\d\6\y\m\k\b\f\w\5\y\0\x\t\w\w\a\u\h\1\s\4\i\2\3\9\8\6\v\0\4\q\d\5\o\3 ]] 00:42:17.399 01:08:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:42:17.399 01:08:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:42:17.399 [2024-07-25 01:08:39.961188] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:42:17.399 [2024-07-25 01:08:39.961402] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167236 ] 00:42:17.657 [2024-07-25 01:08:40.134625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:17.916 [2024-07-25 01:08:40.312348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:19.552  Copying: 512/512 [B] (average 250 kBps) 00:42:19.552 00:42:19.552 01:08:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0n9r64cri22vhwsltkysssl7o6gr1ajrnum8bu2t0gx6th2e9kayeuy7irgmk1gl8d9f9ocuhj9o0s7kituoo0z7i4u06cxa31n3kr0ja1a4bebuu2dtldu5gcd6o5viy4r42ef7rzi6z2n6e8ln8n1gs48usqruocvgg49f035rs92rba41plptp0u8lq24f1mjpk5ppgtl6vltpr6ns9l2m7j5x5dbqrkpdr8yna9nga17x3mgjtu9u3lhc9b09mxf4sus0e7qgh1zn8ff6hl6p4doljckydm05mn9sujohn3yr9paf4dq32yueqqnhkcbgv7lrry1idu9zliyohgmt8d8pihy38beza63ovsks6xv3mwn9lja47fc6ujto0gtaktxxfsggzkijountvg6a9yo1c5k2s5v07s21fjj6gw2emwj1kacpa4oxbl28vnljm4l1th7ym344amis3je03m4wd6ymkbfw5y0xtwwauh1s4i23986v04qd5o3 == \0\n\9\r\6\4\c\r\i\2\2\v\h\w\s\l\t\k\y\s\s\s\l\7\o\6\g\r\1\a\j\r\n\u\m\8\b\u\2\t\0\g\x\6\t\h\2\e\9\k\a\y\e\u\y\7\i\r\g\m\k\1\g\l\8\d\9\f\9\o\c\u\h\j\9\o\0\s\7\k\i\t\u\o\o\0\z\7\i\4\u\0\6\c\x\a\3\1\n\3\k\r\0\j\a\1\a\4\b\e\b\u\u\2\d\t\l\d\u\5\g\c\d\6\o\5\v\i\y\4\r\4\2\e\f\7\r\z\i\6\z\2\n\6\e\8\l\n\8\n\1\g\s\4\8\u\s\q\r\u\o\c\v\g\g\4\9\f\0\3\5\r\s\9\2\r\b\a\4\1\p\l\p\t\p\0\u\8\l\q\2\4\f\1\m\j\p\k\5\p\p\g\t\l\6\v\l\t\p\r\6\n\s\9\l\2\m\7\j\5\x\5\d\b\q\r\k\p\d\r\8\y\n\a\9\n\g\a\1\7\x\3\m\g\j\t\u\9\u\3\l\h\c\9\b\0\9\m\x\f\4\s\u\s\0\e\7\q\g\h\1\z\n\8\f\f\6\h\l\6\p\4\d\o\l\j\c\k\y\d\m\0\5\m\n\9\s\u\j\o\h\n\3\y\r\9\p\a\f\4\d\q\3\2\y\u\e\q\q\n\h\k\c\b\g\v\7\l\r\r\y\1\i\d\u\9\z\l\i\y\o\h\g\m\t\8\d\8\p\i\h\y\3\8\b\e\z\a\6\3\o\v\s\k\s\6\x\v\3\m\w\n\9\l\j\a\4\7\f\c\6\u\j\t\o\0\g\t\a\k\t\x\x\f\s\g\g\z\k\i\j\o\u\n\t\v\g\6\a\9\y\o\1\c\5\k\2\s\5\v\0\7\s\2\1\f\j\j\6\g\w\2\e\m\w\j\1\k\a\c\p\a\4\o\x\b\l\2\8\v\n\l\j\m\4\l\1\t\h\7\y\m\3\4\4\a\m\i\s\3\j\e\0\3\m\4\w\d\6\y\m\k\b\f\w\5\y\0\x\t\w\w\a\u\h\1\s\4\i\2\3\9\8\6\v\0\4\q\d\5\o\3 ]] 00:42:19.552 01:08:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:42:19.552 01:08:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:42:19.552 [2024-07-25 01:08:41.973675] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:42:19.552 [2024-07-25 01:08:41.974622] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167261 ] 00:42:19.552 [2024-07-25 01:08:42.159490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:19.811 [2024-07-25 01:08:42.336279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:21.443  Copying: 512/512 [B] (average 166 kBps) 00:42:21.443 00:42:21.443 ************************************ 00:42:21.443 END TEST dd_flags_misc_forced_aio 00:42:21.443 ************************************ 00:42:21.443 01:08:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0n9r64cri22vhwsltkysssl7o6gr1ajrnum8bu2t0gx6th2e9kayeuy7irgmk1gl8d9f9ocuhj9o0s7kituoo0z7i4u06cxa31n3kr0ja1a4bebuu2dtldu5gcd6o5viy4r42ef7rzi6z2n6e8ln8n1gs48usqruocvgg49f035rs92rba41plptp0u8lq24f1mjpk5ppgtl6vltpr6ns9l2m7j5x5dbqrkpdr8yna9nga17x3mgjtu9u3lhc9b09mxf4sus0e7qgh1zn8ff6hl6p4doljckydm05mn9sujohn3yr9paf4dq32yueqqnhkcbgv7lrry1idu9zliyohgmt8d8pihy38beza63ovsks6xv3mwn9lja47fc6ujto0gtaktxxfsggzkijountvg6a9yo1c5k2s5v07s21fjj6gw2emwj1kacpa4oxbl28vnljm4l1th7ym344amis3je03m4wd6ymkbfw5y0xtwwauh1s4i23986v04qd5o3 == \0\n\9\r\6\4\c\r\i\2\2\v\h\w\s\l\t\k\y\s\s\s\l\7\o\6\g\r\1\a\j\r\n\u\m\8\b\u\2\t\0\g\x\6\t\h\2\e\9\k\a\y\e\u\y\7\i\r\g\m\k\1\g\l\8\d\9\f\9\o\c\u\h\j\9\o\0\s\7\k\i\t\u\o\o\0\z\7\i\4\u\0\6\c\x\a\3\1\n\3\k\r\0\j\a\1\a\4\b\e\b\u\u\2\d\t\l\d\u\5\g\c\d\6\o\5\v\i\y\4\r\4\2\e\f\7\r\z\i\6\z\2\n\6\e\8\l\n\8\n\1\g\s\4\8\u\s\q\r\u\o\c\v\g\g\4\9\f\0\3\5\r\s\9\2\r\b\a\4\1\p\l\p\t\p\0\u\8\l\q\2\4\f\1\m\j\p\k\5\p\p\g\t\l\6\v\l\t\p\r\6\n\s\9\l\2\m\7\j\5\x\5\d\b\q\r\k\p\d\r\8\y\n\a\9\n\g\a\1\7\x\3\m\g\j\t\u\9\u\3\l\h\c\9\b\0\9\m\x\f\4\s\u\s\0\e\7\q\g\h\1\z\n\8\f\f\6\h\l\6\p\4\d\o\l\j\c\k\y\d\m\0\5\m\n\9\s\u\j\o\h\n\3\y\r\9\p\a\f\4\d\q\3\2\y\u\e\q\q\n\h\k\c\b\g\v\7\l\r\r\y\1\i\d\u\9\z\l\i\y\o\h\g\m\t\8\d\8\p\i\h\y\3\8\b\e\z\a\6\3\o\v\s\k\s\6\x\v\3\m\w\n\9\l\j\a\4\7\f\c\6\u\j\t\o\0\g\t\a\k\t\x\x\f\s\g\g\z\k\i\j\o\u\n\t\v\g\6\a\9\y\o\1\c\5\k\2\s\5\v\0\7\s\2\1\f\j\j\6\g\w\2\e\m\w\j\1\k\a\c\p\a\4\o\x\b\l\2\8\v\n\l\j\m\4\l\1\t\h\7\y\m\3\4\4\a\m\i\s\3\j\e\0\3\m\4\w\d\6\y\m\k\b\f\w\5\y\0\x\t\w\w\a\u\h\1\s\4\i\2\3\9\8\6\v\0\4\q\d\5\o\3 ]] 00:42:21.443 00:42:21.443 real 0m16.174s 00:42:21.443 user 0m13.271s 00:42:21.443 sys 0m1.844s 00:42:21.443 01:08:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:21.443 01:08:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:42:21.443 01:08:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:42:21.443 01:08:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:42:21.443 01:08:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:42:21.443 ************************************ 00:42:21.443 END TEST spdk_dd_posix 00:42:21.443 ************************************ 00:42:21.443 00:42:21.443 real 1m7.757s 00:42:21.443 user 0m54.071s 00:42:21.443 sys 0m7.700s 00:42:21.443 01:08:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:21.443 01:08:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:42:21.443 
01:08:44 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:42:21.443 01:08:44 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:21.443 01:08:44 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:21.443 01:08:44 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:42:21.443 ************************************ 00:42:21.443 START TEST spdk_dd_malloc 00:42:21.443 ************************************ 00:42:21.443 01:08:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:42:21.701 * Looking for test storage... 00:42:21.701 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:42:21.701 01:08:44 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:21.701 01:08:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:21.701 01:08:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:21.701 01:08:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:21.701 01:08:44 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:21.701 01:08:44 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:21.701 01:08:44 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:21.701 01:08:44 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:42:21.701 01:08:44 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:21.701 01:08:44 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:42:21.701 01:08:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:21.701 01:08:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:21.701 01:08:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:42:21.701 ************************************ 00:42:21.701 START TEST dd_malloc_copy 00:42:21.701 ************************************ 00:42:21.701 01:08:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:42:21.701 01:08:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:42:21.701 01:08:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:42:21.701 01:08:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:42:21.701 01:08:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:42:21.701 01:08:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:42:21.701 01:08:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:42:21.701 01:08:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:42:21.701 01:08:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:42:21.701 01:08:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:42:21.701 01:08:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:42:21.701 { 00:42:21.701 "subsystems": [ 00:42:21.701 { 00:42:21.701 "subsystem": "bdev", 00:42:21.701 "config": [ 00:42:21.701 { 00:42:21.701 "params": { 00:42:21.701 "block_size": 512, 00:42:21.701 "num_blocks": 1048576, 00:42:21.701 "name": "malloc0" 00:42:21.701 }, 00:42:21.701 "method": "bdev_malloc_create" 00:42:21.701 }, 00:42:21.701 { 00:42:21.701 "params": { 00:42:21.701 "block_size": 512, 00:42:21.701 "num_blocks": 1048576, 00:42:21.701 "name": "malloc1" 00:42:21.701 }, 00:42:21.701 "method": "bdev_malloc_create" 00:42:21.701 }, 00:42:21.701 { 00:42:21.701 "method": "bdev_wait_for_examine" 00:42:21.701 } 00:42:21.701 ] 00:42:21.701 } 00:42:21.701 ] 00:42:21.701 } 00:42:21.701 [2024-07-25 01:08:44.238355] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:42:21.701 [2024-07-25 01:08:44.238714] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167371 ] 00:42:21.959 [2024-07-25 01:08:44.418501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:21.959 [2024-07-25 01:08:44.596549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:29.610  Copying: 236/512 [MB] (236 MBps) Copying: 474/512 [MB] (238 MBps) Copying: 512/512 [MB] (average 236 MBps) 00:42:29.610 00:42:29.610 01:08:52 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:42:29.610 01:08:52 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:42:29.610 01:08:52 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:42:29.610 01:08:52 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:42:29.610 [2024-07-25 01:08:52.244699] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:42:29.610 [2024-07-25 01:08:52.245033] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167462 ] 00:42:29.610 { 00:42:29.610 "subsystems": [ 00:42:29.610 { 00:42:29.610 "subsystem": "bdev", 00:42:29.610 "config": [ 00:42:29.610 { 00:42:29.610 "params": { 00:42:29.610 "block_size": 512, 00:42:29.610 "num_blocks": 1048576, 00:42:29.610 "name": "malloc0" 00:42:29.610 }, 00:42:29.610 "method": "bdev_malloc_create" 00:42:29.610 }, 00:42:29.610 { 00:42:29.610 "params": { 00:42:29.610 "block_size": 512, 00:42:29.610 "num_blocks": 1048576, 00:42:29.610 "name": "malloc1" 00:42:29.610 }, 00:42:29.610 "method": "bdev_malloc_create" 00:42:29.610 }, 00:42:29.610 { 00:42:29.610 "method": "bdev_wait_for_examine" 00:42:29.610 } 00:42:29.610 ] 00:42:29.610 } 00:42:29.610 ] 00:42:29.610 } 00:42:29.869 [2024-07-25 01:08:52.406549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:30.128 [2024-07-25 01:08:52.598317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:37.784  Copying: 238/512 [MB] (238 MBps) Copying: 476/512 [MB] (237 MBps) Copying: 512/512 [MB] (average 237 MBps) 00:42:37.784 00:42:37.784 ************************************ 00:42:37.784 END TEST dd_malloc_copy 00:42:37.784 ************************************ 00:42:37.784 00:42:37.784 real 0m16.068s 00:42:37.784 user 0m14.839s 00:42:37.784 sys 0m1.083s 00:42:37.784 01:09:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:37.784 01:09:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:42:37.784 ************************************ 00:42:37.784 END TEST spdk_dd_malloc 00:42:37.784 ************************************ 00:42:37.784 00:42:37.784 real 0m16.220s 00:42:37.784 user 0m14.931s 00:42:37.784 sys 0m1.152s 00:42:37.784 01:09:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:37.784 01:09:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:42:37.784 01:09:00 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev 
/home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:42:37.784 01:09:00 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:42:37.784 01:09:00 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:37.784 01:09:00 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:42:37.784 ************************************ 00:42:37.784 START TEST spdk_dd_bdev_to_bdev 00:42:37.784 ************************************ 00:42:37.784 01:09:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:42:37.784 * Looking for test storage... 00:42:38.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:42:38.042 01:09:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:38.042 01:09:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:38.042 01:09:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:38.042 01:09:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:38.043 01:09:00 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:38.043 01:09:00 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:38.043 01:09:00 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:38.043 01:09:00 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:42:38.043 01:09:00 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:38.043 01:09:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:42:38.043 01:09:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:42:38.043 01:09:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:42:38.043 01:09:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:42:38.043 01:09:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:42:38.043 01:09:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:42:38.043 01:09:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:10.0 00:42:38.043 01:09:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:42:38.043 01:09:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:42:38.043 01:09:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:42:38.043 01:09:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:42:38.043 01:09:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096') 00:42:38.043 01:09:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:42:38.043 01:09:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:42:38.043 [2024-07-25 01:09:00.519909] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:42:38.043 [2024-07-25 01:09:00.520349] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167622 ] 00:42:38.302 [2024-07-25 01:09:00.699950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:38.302 [2024-07-25 01:09:00.883010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:40.248  Copying: 256/256 [MB] (average 1280 MBps) 00:42:40.248 00:42:40.248 01:09:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:42:40.248 01:09:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:42:40.248 01:09:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:42:40.248 01:09:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:42:40.248 01:09:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:42:40.248 01:09:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:42:40.248 01:09:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:40.248 01:09:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:42:40.248 ************************************ 00:42:40.248 START TEST dd_inflate_file 00:42:40.248 ************************************ 00:42:40.248 01:09:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:42:40.248 [2024-07-25 01:09:02.771840] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:42:40.248 [2024-07-25 01:09:02.772057] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167657 ] 00:42:40.506 [2024-07-25 01:09:02.946776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:40.506 [2024-07-25 01:09:03.135982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:42.446  Copying: 64/64 [MB] (average 1254 MBps) 00:42:42.446 00:42:42.446 ************************************ 00:42:42.446 END TEST dd_inflate_file 00:42:42.446 ************************************ 00:42:42.446 00:42:42.446 real 0m2.112s 00:42:42.446 user 0m1.672s 00:42:42.446 sys 0m0.297s 00:42:42.446 01:09:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:42.446 01:09:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:42:42.446 01:09:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:42:42.446 01:09:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:42:42.446 01:09:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:42:42.446 01:09:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:42:42.447 01:09:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:42:42.447 01:09:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:42:42.447 01:09:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:42:42.447 01:09:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:42.447 01:09:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:42:42.447 ************************************ 00:42:42.447 START TEST dd_copy_to_out_bdev 00:42:42.447 ************************************ 00:42:42.447 01:09:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:42:42.447 { 00:42:42.447 "subsystems": [ 00:42:42.447 { 00:42:42.447 "subsystem": "bdev", 00:42:42.447 "config": [ 00:42:42.447 { 00:42:42.447 "params": { 00:42:42.447 "block_size": 4096, 00:42:42.447 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:42:42.447 "name": "aio1" 00:42:42.447 }, 00:42:42.447 "method": "bdev_aio_create" 00:42:42.447 }, 00:42:42.447 { 00:42:42.447 "params": { 00:42:42.447 "trtype": "pcie", 00:42:42.447 "traddr": "0000:00:10.0", 00:42:42.447 "name": "Nvme0" 00:42:42.447 }, 00:42:42.447 "method": "bdev_nvme_attach_controller" 00:42:42.447 }, 00:42:42.447 { 00:42:42.447 "method": "bdev_wait_for_examine" 00:42:42.447 } 00:42:42.447 ] 00:42:42.447 } 00:42:42.447 ] 00:42:42.447 } 00:42:42.447 [2024-07-25 01:09:04.954468] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:42:42.447 [2024-07-25 01:09:04.955136] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167716 ] 00:42:42.706 [2024-07-25 01:09:05.134586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:42.706 [2024-07-25 01:09:05.313637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:46.001  Copying: 55/64 [MB] (55 MBps) Copying: 64/64 [MB] (average 56 MBps) 00:42:46.001 00:42:46.001 ************************************ 00:42:46.001 END TEST dd_copy_to_out_bdev 00:42:46.001 ************************************ 00:42:46.001 00:42:46.001 real 0m3.380s 00:42:46.001 user 0m2.981s 00:42:46.001 sys 0m0.275s 00:42:46.001 01:09:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:46.001 01:09:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:42:46.001 01:09:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:42:46.001 01:09:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:42:46.001 01:09:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:46.001 01:09:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:46.001 01:09:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:42:46.001 ************************************ 00:42:46.001 START TEST dd_offset_magic 00:42:46.001 ************************************ 00:42:46.001 01:09:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:42:46.001 01:09:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:42:46.001 01:09:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:42:46.001 01:09:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:42:46.001 01:09:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:42:46.001 01:09:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:42:46.001 01:09:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:42:46.001 01:09:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:42:46.001 01:09:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:42:46.001 { 00:42:46.001 "subsystems": [ 00:42:46.001 { 00:42:46.001 "subsystem": "bdev", 00:42:46.001 "config": [ 00:42:46.001 { 00:42:46.001 "params": { 00:42:46.001 "block_size": 4096, 00:42:46.001 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:42:46.001 "name": "aio1" 00:42:46.002 }, 00:42:46.002 "method": "bdev_aio_create" 00:42:46.002 }, 00:42:46.002 { 00:42:46.002 "params": { 00:42:46.002 "trtype": "pcie", 00:42:46.002 "traddr": "0000:00:10.0", 00:42:46.002 "name": "Nvme0" 00:42:46.002 }, 00:42:46.002 "method": "bdev_nvme_attach_controller" 00:42:46.002 }, 00:42:46.002 { 00:42:46.002 "method": "bdev_wait_for_examine" 00:42:46.002 } 
00:42:46.002 ] 00:42:46.002 } 00:42:46.002 ] 00:42:46.002 } 00:42:46.002 [2024-07-25 01:09:08.418593] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:42:46.002 [2024-07-25 01:09:08.418803] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167776 ] 00:42:46.002 [2024-07-25 01:09:08.599833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:46.261 [2024-07-25 01:09:08.793217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:48.576  Copying: 65/65 [MB] (average 148 MBps) 00:42:48.576 00:42:48.576 01:09:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:42:48.576 01:09:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:42:48.576 01:09:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:42:48.576 01:09:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:42:48.576 [2024-07-25 01:09:11.117404] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:42:48.576 [2024-07-25 01:09:11.117548] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167821 ] 00:42:48.576 { 00:42:48.576 "subsystems": [ 00:42:48.576 { 00:42:48.576 "subsystem": "bdev", 00:42:48.576 "config": [ 00:42:48.576 { 00:42:48.576 "params": { 00:42:48.576 "block_size": 4096, 00:42:48.576 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:42:48.576 "name": "aio1" 00:42:48.576 }, 00:42:48.576 "method": "bdev_aio_create" 00:42:48.576 }, 00:42:48.576 { 00:42:48.576 "params": { 00:42:48.576 "trtype": "pcie", 00:42:48.576 "traddr": "0000:00:10.0", 00:42:48.576 "name": "Nvme0" 00:42:48.576 }, 00:42:48.576 "method": "bdev_nvme_attach_controller" 00:42:48.576 }, 00:42:48.576 { 00:42:48.576 "method": "bdev_wait_for_examine" 00:42:48.576 } 00:42:48.576 ] 00:42:48.576 } 00:42:48.576 ] 00:42:48.576 } 00:42:48.835 [2024-07-25 01:09:11.278915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:48.835 [2024-07-25 01:09:11.465409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:50.781  Copying: 1024/1024 [kB] (average 500 MBps) 00:42:50.781 00:42:50.781 01:09:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:42:50.781 01:09:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:42:50.781 01:09:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:42:50.781 01:09:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:42:50.781 01:09:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:42:50.781 
01:09:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:42:50.781 01:09:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:42:50.781 { 00:42:50.781 "subsystems": [ 00:42:50.781 { 00:42:50.781 "subsystem": "bdev", 00:42:50.781 "config": [ 00:42:50.781 { 00:42:50.781 "params": { 00:42:50.781 "block_size": 4096, 00:42:50.781 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:42:50.781 "name": "aio1" 00:42:50.781 }, 00:42:50.781 "method": "bdev_aio_create" 00:42:50.781 }, 00:42:50.781 { 00:42:50.781 "params": { 00:42:50.781 "trtype": "pcie", 00:42:50.781 "traddr": "0000:00:10.0", 00:42:50.781 "name": "Nvme0" 00:42:50.781 }, 00:42:50.781 "method": "bdev_nvme_attach_controller" 00:42:50.781 }, 00:42:50.781 { 00:42:50.781 "method": "bdev_wait_for_examine" 00:42:50.781 } 00:42:50.781 ] 00:42:50.781 } 00:42:50.782 ] 00:42:50.782 } 00:42:50.782 [2024-07-25 01:09:13.309675] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:42:50.782 [2024-07-25 01:09:13.309826] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167855 ] 00:42:51.040 [2024-07-25 01:09:13.468651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:51.040 [2024-07-25 01:09:13.649872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:53.352  Copying: 65/65 [MB] (average 168 MBps) 00:42:53.352 00:42:53.352 01:09:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:42:53.352 01:09:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:42:53.352 01:09:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:42:53.352 01:09:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:42:53.352 [2024-07-25 01:09:15.735341] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:42:53.352 [2024-07-25 01:09:15.735648] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167891 ] 00:42:53.352 { 00:42:53.352 "subsystems": [ 00:42:53.352 { 00:42:53.352 "subsystem": "bdev", 00:42:53.352 "config": [ 00:42:53.352 { 00:42:53.352 "params": { 00:42:53.352 "block_size": 4096, 00:42:53.352 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:42:53.352 "name": "aio1" 00:42:53.352 }, 00:42:53.352 "method": "bdev_aio_create" 00:42:53.352 }, 00:42:53.352 { 00:42:53.352 "params": { 00:42:53.352 "trtype": "pcie", 00:42:53.352 "traddr": "0000:00:10.0", 00:42:53.352 "name": "Nvme0" 00:42:53.352 }, 00:42:53.352 "method": "bdev_nvme_attach_controller" 00:42:53.352 }, 00:42:53.352 { 00:42:53.352 "method": "bdev_wait_for_examine" 00:42:53.352 } 00:42:53.352 ] 00:42:53.352 } 00:42:53.352 ] 00:42:53.352 } 00:42:53.352 [2024-07-25 01:09:15.891667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:53.610 [2024-07-25 01:09:16.079569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:55.249  Copying: 1024/1024 [kB] (average 500 MBps) 00:42:55.249 00:42:55.249 01:09:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:42:55.249 01:09:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:42:55.249 00:42:55.249 real 0m9.545s 00:42:55.249 user 0m7.260s 00:42:55.249 sys 0m1.189s 00:42:55.249 01:09:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:55.249 01:09:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:42:55.249 ************************************ 00:42:55.249 END TEST dd_offset_magic 00:42:55.249 ************************************ 00:42:55.508 01:09:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:42:55.508 01:09:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:42:55.508 01:09:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:42:55.508 01:09:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:42:55.508 01:09:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:42:55.508 01:09:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:42:55.508 01:09:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:42:55.508 01:09:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:42:55.508 01:09:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:42:55.508 01:09:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:42:55.508 01:09:17 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:42:55.508 { 00:42:55.508 "subsystems": [ 00:42:55.508 { 00:42:55.508 "subsystem": "bdev", 00:42:55.508 "config": [ 00:42:55.508 { 00:42:55.508 "params": { 00:42:55.508 "block_size": 4096, 00:42:55.508 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:42:55.508 "name": "aio1" 00:42:55.508 }, 00:42:55.508 "method": "bdev_aio_create" 
00:42:55.508 }, 00:42:55.508 { 00:42:55.508 "params": { 00:42:55.508 "trtype": "pcie", 00:42:55.508 "traddr": "0000:00:10.0", 00:42:55.508 "name": "Nvme0" 00:42:55.508 }, 00:42:55.508 "method": "bdev_nvme_attach_controller" 00:42:55.508 }, 00:42:55.508 { 00:42:55.508 "method": "bdev_wait_for_examine" 00:42:55.508 } 00:42:55.508 ] 00:42:55.508 } 00:42:55.508 ] 00:42:55.508 } 00:42:55.508 [2024-07-25 01:09:18.015190] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:42:55.508 [2024-07-25 01:09:18.015398] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167935 ] 00:42:55.768 [2024-07-25 01:09:18.195655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:55.768 [2024-07-25 01:09:18.387262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:57.714  Copying: 5120/5120 [kB] (average 1000 MBps) 00:42:57.714 00:42:57.714 01:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:42:57.714 01:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=aio1 00:42:57.714 01:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:42:57.714 01:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:42:57.714 01:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:42:57.714 01:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:42:57.714 01:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:42:57.714 01:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:42:57.714 01:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:42:57.714 01:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:42:57.714 { 00:42:57.714 "subsystems": [ 00:42:57.714 { 00:42:57.714 "subsystem": "bdev", 00:42:57.714 "config": [ 00:42:57.714 { 00:42:57.714 "params": { 00:42:57.714 "block_size": 4096, 00:42:57.714 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:42:57.714 "name": "aio1" 00:42:57.714 }, 00:42:57.714 "method": "bdev_aio_create" 00:42:57.714 }, 00:42:57.714 { 00:42:57.714 "params": { 00:42:57.714 "trtype": "pcie", 00:42:57.714 "traddr": "0000:00:10.0", 00:42:57.714 "name": "Nvme0" 00:42:57.714 }, 00:42:57.714 "method": "bdev_nvme_attach_controller" 00:42:57.714 }, 00:42:57.714 { 00:42:57.714 "method": "bdev_wait_for_examine" 00:42:57.714 } 00:42:57.714 ] 00:42:57.714 } 00:42:57.714 ] 00:42:57.714 } 00:42:57.714 [2024-07-25 01:09:20.122544] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:42:57.714 [2024-07-25 01:09:20.122752] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167970 ] 00:42:57.714 [2024-07-25 01:09:20.302258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:57.973 [2024-07-25 01:09:20.489922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:59.909  Copying: 5120/5120 [kB] (average 178 MBps) 00:42:59.909 00:42:59.909 01:09:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:42:59.909 00:42:59.909 real 0m21.990s 00:42:59.909 user 0m17.326s 00:42:59.909 sys 0m2.921s 00:42:59.909 01:09:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:59.909 01:09:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:42:59.909 ************************************ 00:42:59.909 END TEST spdk_dd_bdev_to_bdev 00:42:59.909 ************************************ 00:42:59.909 01:09:22 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:42:59.909 01:09:22 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:42:59.909 01:09:22 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:59.909 01:09:22 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:59.909 01:09:22 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:42:59.909 ************************************ 00:42:59.909 START TEST spdk_dd_sparse 00:42:59.909 ************************************ 00:42:59.909 01:09:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:42:59.909 * Looking for test storage... 
00:42:59.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:42:59.909 01:09:22 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:59.909 01:09:22 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:59.909 01:09:22 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:59.909 01:09:22 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:59.909 01:09:22 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- 
# lvol=dd_lvol 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:42:59.910 1+0 records in 00:42:59.910 1+0 records out 00:42:59.910 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00896499 s, 468 MB/s 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:42:59.910 1+0 records in 00:42:59.910 1+0 records out 00:42:59.910 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.01127 s, 372 MB/s 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:42:59.910 1+0 records in 00:42:59.910 1+0 records out 00:42:59.910 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00672906 s, 623 MB/s 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:42:59.910 ************************************ 00:42:59.910 START TEST dd_sparse_file_to_file 00:42:59.910 ************************************ 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # file_to_file 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:42:59.910 01:09:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:43:00.167 { 00:43:00.167 "subsystems": [ 00:43:00.167 { 00:43:00.167 "subsystem": "bdev", 00:43:00.167 "config": [ 00:43:00.167 { 00:43:00.167 "params": { 00:43:00.167 "block_size": 4096, 00:43:00.167 "filename": "dd_sparse_aio_disk", 00:43:00.167 "name": "dd_aio" 00:43:00.167 }, 00:43:00.167 "method": "bdev_aio_create" 00:43:00.167 }, 00:43:00.167 { 00:43:00.167 "params": { 00:43:00.167 "lvs_name": "dd_lvstore", 00:43:00.167 "bdev_name": 
"dd_aio" 00:43:00.167 }, 00:43:00.167 "method": "bdev_lvol_create_lvstore" 00:43:00.167 }, 00:43:00.167 { 00:43:00.167 "method": "bdev_wait_for_examine" 00:43:00.167 } 00:43:00.167 ] 00:43:00.167 } 00:43:00.167 ] 00:43:00.167 } 00:43:00.167 [2024-07-25 01:09:22.630631] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:43:00.167 [2024-07-25 01:09:22.630836] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168067 ] 00:43:00.167 [2024-07-25 01:09:22.810394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:00.424 [2024-07-25 01:09:22.993393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:02.361  Copying: 12/36 [MB] (average 1000 MBps) 00:43:02.361 00:43:02.361 01:09:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:43:02.361 01:09:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:43:02.361 01:09:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:43:02.361 01:09:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:43:02.361 01:09:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:43:02.361 01:09:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:43:02.361 01:09:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:43:02.361 01:09:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:43:02.361 01:09:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:43:02.361 01:09:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:43:02.361 00:43:02.361 real 0m2.282s 00:43:02.361 user 0m1.865s 00:43:02.361 sys 0m0.270s 00:43:02.361 01:09:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:02.362 ************************************ 00:43:02.362 END TEST dd_sparse_file_to_file 00:43:02.362 01:09:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:43:02.362 ************************************ 00:43:02.362 01:09:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:43:02.362 01:09:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:02.362 01:09:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:02.362 01:09:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:43:02.362 ************************************ 00:43:02.362 START TEST dd_sparse_file_to_bdev 00:43:02.362 ************************************ 00:43:02.362 01:09:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:43:02.362 01:09:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:43:02.362 01:09:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 
00:43:02.362 01:09:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:43:02.362 01:09:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:43:02.362 01:09:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:43:02.362 01:09:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:43:02.362 01:09:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:43:02.362 01:09:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:43:02.362 { 00:43:02.362 "subsystems": [ 00:43:02.362 { 00:43:02.362 "subsystem": "bdev", 00:43:02.362 "config": [ 00:43:02.362 { 00:43:02.362 "params": { 00:43:02.362 "block_size": 4096, 00:43:02.362 "filename": "dd_sparse_aio_disk", 00:43:02.362 "name": "dd_aio" 00:43:02.362 }, 00:43:02.362 "method": "bdev_aio_create" 00:43:02.362 }, 00:43:02.362 { 00:43:02.362 "params": { 00:43:02.362 "lvs_name": "dd_lvstore", 00:43:02.362 "lvol_name": "dd_lvol", 00:43:02.362 "size_in_mib": 36, 00:43:02.362 "thin_provision": true 00:43:02.362 }, 00:43:02.362 "method": "bdev_lvol_create" 00:43:02.362 }, 00:43:02.362 { 00:43:02.362 "method": "bdev_wait_for_examine" 00:43:02.362 } 00:43:02.362 ] 00:43:02.362 } 00:43:02.362 ] 00:43:02.362 } 00:43:02.362 [2024-07-25 01:09:24.975967] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:43:02.362 [2024-07-25 01:09:24.976179] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168128 ] 00:43:02.620 [2024-07-25 01:09:25.155979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:02.878 [2024-07-25 01:09:25.348925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:04.513  Copying: 12/36 [MB] (average 480 MBps) 00:43:04.513 00:43:04.513 00:43:04.513 real 0m2.211s 00:43:04.513 user 0m1.846s 00:43:04.513 sys 0m0.253s 00:43:04.513 01:09:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:04.513 01:09:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:43:04.513 ************************************ 00:43:04.513 END TEST dd_sparse_file_to_bdev 00:43:04.513 ************************************ 00:43:04.513 01:09:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:43:04.513 01:09:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:04.513 01:09:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:04.513 01:09:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:43:04.772 ************************************ 00:43:04.772 START TEST dd_sparse_bdev_to_file 00:43:04.772 ************************************ 00:43:04.772 01:09:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:43:04.772 01:09:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- 
dd/sparse.sh@81 -- # local stat2_s stat2_b 00:43:04.772 01:09:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:43:04.772 01:09:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:43:04.772 01:09:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:43:04.772 01:09:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:43:04.772 01:09:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:43:04.772 01:09:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:43:04.772 01:09:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:43:04.772 { 00:43:04.772 "subsystems": [ 00:43:04.772 { 00:43:04.772 "subsystem": "bdev", 00:43:04.772 "config": [ 00:43:04.772 { 00:43:04.772 "params": { 00:43:04.772 "block_size": 4096, 00:43:04.772 "filename": "dd_sparse_aio_disk", 00:43:04.772 "name": "dd_aio" 00:43:04.772 }, 00:43:04.772 "method": "bdev_aio_create" 00:43:04.772 }, 00:43:04.772 { 00:43:04.772 "method": "bdev_wait_for_examine" 00:43:04.772 } 00:43:04.772 ] 00:43:04.772 } 00:43:04.772 ] 00:43:04.772 } 00:43:04.772 [2024-07-25 01:09:27.257679] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:43:04.772 [2024-07-25 01:09:27.257889] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168185 ] 00:43:05.032 [2024-07-25 01:09:27.439042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:05.032 [2024-07-25 01:09:27.615987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:06.976  Copying: 12/36 [MB] (average 1000 MBps) 00:43:06.976 00:43:06.976 01:09:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:43:06.976 01:09:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:43:06.976 01:09:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:43:06.977 01:09:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:43:06.977 01:09:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:43:06.977 01:09:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:43:06.977 01:09:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:43:06.977 01:09:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:43:06.977 01:09:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:43:06.977 01:09:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:43:06.977 00:43:06.977 real 0m2.232s 00:43:06.977 user 0m1.854s 00:43:06.977 sys 0m0.267s 00:43:06.977 01:09:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:43:06.977 01:09:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:43:06.977 ************************************ 00:43:06.977 END TEST dd_sparse_bdev_to_file 00:43:06.977 ************************************ 00:43:06.977 01:09:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:43:06.977 01:09:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:43:06.977 01:09:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:43:06.977 01:09:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:43:06.977 01:09:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:43:06.977 00:43:06.977 real 0m7.100s 00:43:06.977 user 0m5.734s 00:43:06.977 sys 0m1.005s 00:43:06.977 01:09:29 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:06.977 ************************************ 00:43:06.977 01:09:29 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:43:06.977 END TEST spdk_dd_sparse 00:43:06.977 ************************************ 00:43:06.977 01:09:29 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:43:06.977 01:09:29 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:06.977 01:09:29 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:06.977 01:09:29 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:43:06.977 ************************************ 00:43:06.977 START TEST spdk_dd_negative 00:43:06.977 ************************************ 00:43:06.977 01:09:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:43:07.236 * Looking for test storage... 
00:43:07.236 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:43:07.236 01:09:29 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:07.236 01:09:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:07.236 01:09:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:07.236 01:09:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:07.236 01:09:29 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:43:07.236 01:09:29 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:43:07.236 01:09:29 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:43:07.236 01:09:29 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:43:07.236 01:09:29 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:43:07.236 01:09:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:43:07.236 01:09:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:43:07.236 01:09:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:43:07.236 01:09:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:43:07.236 01:09:29 spdk_dd.spdk_dd_negative -- 
dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:43:07.236 01:09:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:07.236 01:09:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:07.236 01:09:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:43:07.236 ************************************ 00:43:07.236 START TEST dd_invalid_arguments 00:43:07.236 ************************************ 00:43:07.236 01:09:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:43:07.236 01:09:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:43:07.236 01:09:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:43:07.236 01:09:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:43:07.236 01:09:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:07.236 01:09:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:07.236 01:09:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:07.236 01:09:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:07.236 01:09:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:07.236 01:09:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:07.236 01:09:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:07.236 01:09:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:43:07.236 01:09:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:43:07.236 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:43:07.237 00:43:07.237 CPU options: 00:43:07.237 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:43:07.237 (like [0,1,10]) 00:43:07.237 --lcores lcore to CPU mapping list. The list is in the format: 00:43:07.237 [<,lcores[@CPUs]>...] 00:43:07.237 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:43:07.237 Within the group, '-' is used for range separator, 00:43:07.237 ',' is used for single number separator. 00:43:07.237 '( )' can be omitted for single element group, 00:43:07.237 '@' can be omitted if cpus and lcores have the same value 00:43:07.237 --disable-cpumask-locks Disable CPU core lock files. 
00:43:07.237 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:43:07.237 pollers in the app support interrupt mode) 00:43:07.237 -p, --main-core main (primary) core for DPDK 00:43:07.237 00:43:07.237 Configuration options: 00:43:07.237 -c, --config, --json JSON config file 00:43:07.237 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:43:07.237 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:43:07.237 --wait-for-rpc wait for RPCs to initialize subsystems 00:43:07.237 --rpcs-allowed comma-separated list of permitted RPCS 00:43:07.237 --json-ignore-init-errors don't exit on invalid config entry 00:43:07.237 00:43:07.237 Memory options: 00:43:07.237 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:43:07.237 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:43:07.237 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:43:07.237 -R, --huge-unlink unlink huge files after initialization 00:43:07.237 -n, --mem-channels number of memory channels used for DPDK 00:43:07.237 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:43:07.237 --msg-mempool-size global message memory pool size in count (default: 262143) 00:43:07.237 --no-huge run without using hugepages 00:43:07.237 -i, --shm-id shared memory ID (optional) 00:43:07.237 -g, --single-file-segments force creating just one hugetlbfs file 00:43:07.237 00:43:07.237 PCI options: 00:43:07.237 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:43:07.237 -B, --pci-blocked pci addr to block (can be used more than once) 00:43:07.237 -u, --no-pci disable PCI access 00:43:07.237 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:43:07.237 00:43:07.237 Log options: 00:43:07.237 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:43:07.237 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:43:07.237 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, 00:43:07.237 bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, 00:43:07.237 blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:43:07.237 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:43:07.237 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:43:07.237 sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, 00:43:07.237 vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, 00:43:07.237 vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:43:07.237 virtio_vfio_user, vmd) 00:43:07.237 --silence-noticelog disable notice level logging to stderr 00:43:07.237 00:43:07.237 Trace options: 00:43:07.237 --num-trace-entries number of trace entries for each core, must be power of 2, 00:43:07.237 setting 0 to disable trace (default 32768) 00:43:07.237 Tracepoints vary in size and can use more than one trace entry. 00:43:07.237 -e, --tpoint-group [:] 00:43:07.237 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:43:07.237 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:43:07.237 [2024-07-25 01:09:29.755350] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:43:07.237 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 
00:43:07.237 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:43:07.237 a tracepoint group. First tpoint inside a group can be enabled by 00:43:07.237 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:43:07.237 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:43:07.237 in /include/spdk_internal/trace_defs.h 00:43:07.237 00:43:07.237 Other options: 00:43:07.237 -h, --help show this usage 00:43:07.237 -v, --version print SPDK version 00:43:07.237 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:43:07.237 --env-context Opaque context for use of the env implementation 00:43:07.237 00:43:07.237 Application specific: 00:43:07.237 [--------- DD Options ---------] 00:43:07.237 --if Input file. Must specify either --if or --ib. 00:43:07.237 --ib Input bdev. Must specifier either --if or --ib 00:43:07.237 --of Output file. Must specify either --of or --ob. 00:43:07.237 --ob Output bdev. Must specify either --of or --ob. 00:43:07.237 --iflag Input file flags. 00:43:07.237 --oflag Output file flags. 00:43:07.237 --bs I/O unit size (default: 4096) 00:43:07.237 --qd Queue depth (default: 2) 00:43:07.237 --count I/O unit count. The number of I/O units to copy. (default: all) 00:43:07.237 --skip Skip this many I/O units at start of input. (default: 0) 00:43:07.237 --seek Skip this many I/O units at start of output. (default: 0) 00:43:07.237 --aio Force usage of AIO. (by default io_uring is used if available) 00:43:07.237 --sparse Enable hole skipping in input target 00:43:07.237 Available iflag and oflag values: 00:43:07.237 append - append mode 00:43:07.237 direct - use direct I/O for data 00:43:07.237 directory - fail unless a directory 00:43:07.237 dsync - use synchronized I/O for data 00:43:07.237 noatime - do not update access time 00:43:07.237 noctty - do not assign controlling terminal from file 00:43:07.237 nofollow - do not follow symlinks 00:43:07.237 nonblock - use non-blocking I/O 00:43:07.237 sync - use synchronized I/O for data and metadata 00:43:07.237 01:09:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:43:07.237 01:09:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:43:07.237 01:09:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:43:07.237 01:09:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:43:07.237 00:43:07.237 real 0m0.134s 00:43:07.237 user 0m0.054s 00:43:07.237 sys 0m0.080s 00:43:07.237 01:09:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:07.237 01:09:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:43:07.237 ************************************ 00:43:07.237 END TEST dd_invalid_arguments 00:43:07.237 ************************************ 00:43:07.237 01:09:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:43:07.237 01:09:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:07.237 01:09:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:07.237 01:09:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:43:07.237 ************************************ 00:43:07.237 START TEST dd_double_input 00:43:07.237 ************************************ 
00:43:07.237 01:09:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # double_input 00:43:07.237 01:09:29 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:43:07.237 01:09:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:43:07.237 01:09:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:43:07.237 01:09:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:07.237 01:09:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:07.237 01:09:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:07.237 01:09:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:07.237 01:09:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:07.237 01:09:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:07.237 01:09:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:07.237 01:09:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:43:07.237 01:09:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:43:07.497 [2024-07-25 01:09:29.949062] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
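The dd_invalid_arguments and dd_double_input cases above use the same negative-test pattern: spdk_dd is invoked with arguments it must reject (an unrecognized --ii= option, or --if combined with --ib), and the NOT wrapper counts a zero exit status as the real failure. A hedged stand-alone sketch of that pattern is shown below; the binary path and input file are placeholders, not taken from the log.

    #!/bin/bash
    # Hedged sketch of the negative-test pattern, not part of the captured output.
    SPDK_DD=./build/bin/spdk_dd   # placeholder path to the spdk_dd binary
    touch dd.dump0                # placeholder input file
    # spdk_dd must refuse --if together with --ib; succeeding here is itself the error.
    if "$SPDK_DD" --if=dd.dump0 --ib= --ob=; then
        echo "expected spdk_dd to reject --if combined with --ib" >&2
        exit 1
    fi
    echo "rejected as expected"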
00:43:07.497 01:09:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:43:07.497 01:09:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:43:07.497 01:09:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:43:07.497 01:09:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:43:07.497 00:43:07.497 real 0m0.129s 00:43:07.497 user 0m0.066s 00:43:07.497 sys 0m0.063s 00:43:07.497 01:09:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:07.497 ************************************ 00:43:07.497 END TEST dd_double_input 00:43:07.497 01:09:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:43:07.497 ************************************ 00:43:07.497 01:09:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:43:07.497 01:09:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:07.497 01:09:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:07.497 01:09:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:43:07.497 ************************************ 00:43:07.497 START TEST dd_double_output 00:43:07.497 ************************************ 00:43:07.497 01:09:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:43:07.497 01:09:30 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:43:07.497 01:09:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:43:07.497 01:09:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:43:07.497 01:09:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:07.497 01:09:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:07.497 01:09:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:07.497 01:09:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:07.497 01:09:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:07.497 01:09:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:07.497 01:09:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:07.497 01:09:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:43:07.497 01:09:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:43:07.497 [2024-07-25 01:09:30.142600] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:43:07.757 01:09:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:43:07.757 01:09:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:43:07.757 01:09:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:43:07.757 01:09:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:43:07.757 00:43:07.757 real 0m0.131s 00:43:07.757 user 0m0.062s 00:43:07.757 sys 0m0.069s 00:43:07.757 01:09:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:07.757 01:09:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:43:07.757 ************************************ 00:43:07.757 END TEST dd_double_output 00:43:07.757 ************************************ 00:43:07.757 01:09:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:43:07.757 01:09:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:07.757 01:09:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:07.757 01:09:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:43:07.757 ************************************ 00:43:07.757 START TEST dd_no_input 00:43:07.757 ************************************ 00:43:07.757 01:09:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:43:07.757 01:09:30 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:43:07.757 01:09:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:43:07.757 01:09:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:43:07.757 01:09:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:07.757 01:09:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:07.757 01:09:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:07.757 01:09:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:07.757 01:09:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:07.757 01:09:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:07.757 01:09:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:07.757 01:09:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:43:07.757 01:09:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:43:07.757 [2024-07-25 01:09:30.339723] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:43:07.757 01:09:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:43:07.757 01:09:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:43:07.757 01:09:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:43:07.757 01:09:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:43:07.757 00:43:07.757 real 0m0.134s 00:43:07.757 user 0m0.075s 00:43:07.757 sys 0m0.059s 00:43:07.758 01:09:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:07.758 ************************************ 00:43:07.758 END TEST dd_no_input 00:43:07.758 ************************************ 00:43:07.758 01:09:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:43:08.017 ************************************ 00:43:08.017 START TEST dd_no_output 00:43:08.017 ************************************ 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:43:08.017 [2024-07-25 01:09:30.538951] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:43:08.017 01:09:30 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:43:08.017 00:43:08.017 real 0m0.135s 00:43:08.017 user 0m0.035s 00:43:08.017 sys 0m0.101s 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:08.017 ************************************ 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:43:08.017 END TEST dd_no_output 00:43:08.017 ************************************ 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:43:08.017 ************************************ 00:43:08.017 START TEST dd_wrong_blocksize 00:43:08.017 ************************************ 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:43:08.017 01:09:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:43:08.281 [2024-07-25 01:09:30.736113] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:43:08.281 01:09:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:43:08.281 01:09:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:43:08.281 01:09:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:43:08.281 01:09:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:43:08.281 00:43:08.281 real 0m0.134s 00:43:08.281 user 0m0.072s 00:43:08.281 sys 0m0.062s 00:43:08.281 01:09:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:08.281 ************************************ 00:43:08.281 END TEST dd_wrong_blocksize 00:43:08.281 ************************************ 00:43:08.281 01:09:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:43:08.281 01:09:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:43:08.281 01:09:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:08.281 01:09:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:08.281 01:09:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:43:08.281 ************************************ 00:43:08.281 START TEST dd_smaller_blocksize 00:43:08.281 ************************************ 00:43:08.281 01:09:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:43:08.281 01:09:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:43:08.281 01:09:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:43:08.281 01:09:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:43:08.281 01:09:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:08.281 01:09:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:08.281 01:09:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:08.281 01:09:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:08.281 01:09:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:08.281 01:09:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:08.281 01:09:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:08.281 
01:09:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:43:08.282 01:09:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:43:08.282 [2024-07-25 01:09:30.931509] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:43:08.547 [2024-07-25 01:09:30.931730] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168464 ] 00:43:08.547 [2024-07-25 01:09:31.114877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:08.806 [2024-07-25 01:09:31.379745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:09.373 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:43:09.374 [2024-07-25 01:09:32.022517] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:43:09.374 [2024-07-25 01:09:32.022599] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:43:10.309 [2024-07-25 01:09:32.852770] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:43:10.907 01:09:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:43:10.907 01:09:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:43:10.907 01:09:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:43:10.907 01:09:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:43:10.907 01:09:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:43:10.907 01:09:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:43:10.907 00:43:10.907 real 0m2.436s 00:43:10.907 user 0m1.842s 00:43:10.907 sys 0m0.492s 00:43:10.907 01:09:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:10.907 ************************************ 00:43:10.907 END TEST dd_smaller_blocksize 00:43:10.907 ************************************ 00:43:10.907 01:09:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:43:10.907 01:09:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:43:10.907 01:09:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:10.907 01:09:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:10.907 01:09:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:43:10.907 ************************************ 00:43:10.907 START TEST dd_invalid_count 00:43:10.907 ************************************ 00:43:10.907 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:43:10.907 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:43:10.907 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:43:10.907 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:43:10.907 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:10.907 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:10.907 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:10.907 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:10.907 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:10.907 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:10.907 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:10.907 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:43:10.907 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:43:10.907 [2024-07-25 01:09:33.441169] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:43:10.907 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:43:10.907 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:43:10.907 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:43:10.907 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:43:10.907 00:43:10.907 real 0m0.139s 00:43:10.907 user 0m0.082s 00:43:10.907 sys 0m0.054s 00:43:10.907 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:10.907 ************************************ 00:43:10.907 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:43:10.907 END TEST dd_invalid_count 00:43:10.907 ************************************ 00:43:11.179 01:09:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:43:11.179 01:09:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:11.179 01:09:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:11.179 01:09:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:43:11.179 ************************************ 00:43:11.179 START TEST dd_invalid_oflag 00:43:11.179 ************************************ 00:43:11.179 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # 
invalid_oflag 00:43:11.179 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:43:11.179 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:43:11.179 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:43:11.179 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:11.179 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:11.179 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:11.179 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:11.179 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:11.179 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:11.179 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:11.179 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:43:11.179 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:43:11.179 [2024-07-25 01:09:33.631160] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:43:11.179 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:43:11.179 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:43:11.179 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:43:11.179 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:43:11.179 00:43:11.179 real 0m0.135s 00:43:11.180 user 0m0.076s 00:43:11.180 sys 0m0.059s 00:43:11.180 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:11.180 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:43:11.180 ************************************ 00:43:11.180 END TEST dd_invalid_oflag 00:43:11.180 ************************************ 00:43:11.180 01:09:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:43:11.180 01:09:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:11.180 01:09:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:11.180 01:09:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:43:11.180 ************************************ 00:43:11.180 START TEST dd_invalid_iflag 00:43:11.180 ************************************ 00:43:11.180 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:43:11.180 01:09:33 
spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:43:11.180 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:43:11.180 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:43:11.180 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:11.180 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:11.180 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:11.180 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:11.180 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:11.180 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:11.180 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:11.180 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:43:11.180 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:43:11.180 [2024-07-25 01:09:33.821384] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:43:11.438 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:43:11.438 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:43:11.438 ************************************ 00:43:11.438 END TEST dd_invalid_iflag 00:43:11.438 ************************************ 00:43:11.438 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:43:11.438 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:43:11.438 00:43:11.438 real 0m0.139s 00:43:11.438 user 0m0.072s 00:43:11.438 sys 0m0.067s 00:43:11.438 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:11.438 01:09:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:43:11.438 01:09:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:43:11.438 01:09:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:11.438 01:09:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:11.438 01:09:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:43:11.438 ************************************ 00:43:11.438 START TEST dd_unknown_flag 00:43:11.438 ************************************ 00:43:11.438 01:09:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:43:11.438 01:09:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- 
dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:43:11.438 01:09:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:43:11.438 01:09:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:43:11.438 01:09:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:11.438 01:09:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:11.438 01:09:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:11.438 01:09:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:11.438 01:09:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:11.438 01:09:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:11.438 01:09:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:11.438 01:09:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:43:11.438 01:09:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:43:11.438 [2024-07-25 01:09:34.028371] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:43:11.438 [2024-07-25 01:09:34.028597] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168599 ] 00:43:11.697 [2024-07-25 01:09:34.207556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:11.955 [2024-07-25 01:09:34.384858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:12.213 [2024-07-25 01:09:34.670109] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:43:12.213 [2024-07-25 01:09:34.670200] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:43:12.213  Copying: 0/0 [B] (average 0 Bps)[2024-07-25 01:09:34.670386] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:43:13.149 [2024-07-25 01:09:35.494993] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:43:13.408 00:43:13.408 00:43:13.408 01:09:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:43:13.408 01:09:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:43:13.408 01:09:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:43:13.408 01:09:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:43:13.408 01:09:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:43:13.408 01:09:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:43:13.408 00:43:13.408 real 0m2.044s 00:43:13.408 user 0m1.650s 00:43:13.408 sys 0m0.245s 00:43:13.408 01:09:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:13.408 01:09:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:43:13.408 ************************************ 00:43:13.408 END TEST dd_unknown_flag 00:43:13.408 ************************************ 00:43:13.408 01:09:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:43:13.408 01:09:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:13.408 01:09:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:13.408 01:09:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:43:13.408 ************************************ 00:43:13.408 START TEST dd_invalid_json 00:43:13.408 ************************************ 00:43:13.408 01:09:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:43:13.408 01:09:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:43:13.408 01:09:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:43:13.408 01:09:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:43:13.408 01:09:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:43:13.408 
01:09:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:13.408 01:09:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:13.408 01:09:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:13.408 01:09:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:13.408 01:09:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:13.408 01:09:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:13.408 01:09:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:13.408 01:09:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:43:13.408 01:09:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:43:13.666 [2024-07-25 01:09:36.130889] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:43:13.666 [2024-07-25 01:09:36.131109] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168649 ] 00:43:13.666 [2024-07-25 01:09:36.311741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:13.925 [2024-07-25 01:09:36.500498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:13.925 [2024-07-25 01:09:36.500600] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:43:13.925 [2024-07-25 01:09:36.500655] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:43:13.925 [2024-07-25 01:09:36.500681] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:43:13.925 [2024-07-25 01:09:36.500737] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:43:14.493 01:09:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:43:14.493 01:09:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:43:14.493 ************************************ 00:43:14.493 END TEST dd_invalid_json 00:43:14.493 ************************************ 00:43:14.493 01:09:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:43:14.493 01:09:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:43:14.493 01:09:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:43:14.493 01:09:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:43:14.493 00:43:14.493 real 0m0.876s 00:43:14.493 user 0m0.620s 00:43:14.493 sys 0m0.157s 00:43:14.493 01:09:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:14.493 01:09:36 
spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:43:14.493 00:43:14.493 real 0m7.432s 00:43:14.493 user 0m5.189s 00:43:14.493 sys 0m1.916s 00:43:14.493 01:09:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:14.493 01:09:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:43:14.493 ************************************ 00:43:14.493 END TEST spdk_dd_negative 00:43:14.493 ************************************ 00:43:14.493 00:43:14.493 real 2m51.942s 00:43:14.493 user 2m19.280s 00:43:14.493 sys 0m22.464s 00:43:14.493 01:09:37 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:14.493 01:09:37 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:43:14.493 ************************************ 00:43:14.493 END TEST spdk_dd 00:43:14.493 ************************************ 00:43:14.493 01:09:37 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:43:14.493 01:09:37 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:43:14.493 01:09:37 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:43:14.493 01:09:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:14.493 01:09:37 -- common/autotest_common.sh@10 -- # set +x 00:43:14.493 ************************************ 00:43:14.493 START TEST blockdev_nvme 00:43:14.493 ************************************ 00:43:14.493 01:09:37 blockdev_nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:43:14.752 * Looking for test storage... 00:43:14.752 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:43:14.752 01:09:37 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:43:14.752 01:09:37 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:43:14.752 01:09:37 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:43:14.752 01:09:37 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:43:14.752 01:09:37 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:43:14.752 01:09:37 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:43:14.752 01:09:37 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:43:14.752 01:09:37 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:43:14.752 01:09:37 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:43:14.752 01:09:37 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:43:14.752 01:09:37 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:43:14.752 01:09:37 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:43:14.752 01:09:37 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:43:14.752 01:09:37 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:43:14.752 01:09:37 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:43:14.752 01:09:37 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:43:14.752 01:09:37 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:43:14.752 01:09:37 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:43:14.752 01:09:37 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:43:14.752 01:09:37 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:43:14.752 01:09:37 blockdev_nvme -- bdev/blockdev.sh@686 -- # 
'[' -n '' ']' 00:43:14.752 01:09:37 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:43:14.752 01:09:37 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:43:14.752 01:09:37 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:43:14.752 01:09:37 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=168745 00:43:14.752 01:09:37 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:43:14.752 01:09:37 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 168745 00:43:14.752 01:09:37 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:43:14.752 01:09:37 blockdev_nvme -- common/autotest_common.sh@829 -- # '[' -z 168745 ']' 00:43:14.752 01:09:37 blockdev_nvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:14.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:14.752 01:09:37 blockdev_nvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:43:14.752 01:09:37 blockdev_nvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:14.752 01:09:37 blockdev_nvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:14.752 01:09:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:43:14.752 [2024-07-25 01:09:37.297740] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:43:14.752 [2024-07-25 01:09:37.297969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168745 ] 00:43:15.012 [2024-07-25 01:09:37.479415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:15.270 [2024-07-25 01:09:37.666947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:15.837 01:09:38 blockdev_nvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:15.837 01:09:38 blockdev_nvme -- common/autotest_common.sh@862 -- # return 0 00:43:15.837 01:09:38 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:43:15.837 01:09:38 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:43:15.837 01:09:38 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:43:15.837 01:09:38 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:43:15.837 01:09:38 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:43:15.837 01:09:38 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:43:15.837 01:09:38 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:15.837 01:09:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:43:16.096 01:09:38 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:16.096 01:09:38 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:43:16.096 01:09:38 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:16.096 01:09:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:43:16.096 01:09:38 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:16.096 01:09:38 
blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:43:16.096 01:09:38 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:43:16.096 01:09:38 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:16.096 01:09:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:43:16.096 01:09:38 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:16.096 01:09:38 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:43:16.097 01:09:38 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:16.097 01:09:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:43:16.097 01:09:38 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:16.097 01:09:38 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:43:16.097 01:09:38 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:16.097 01:09:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:43:16.097 01:09:38 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:16.097 01:09:38 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:43:16.097 01:09:38 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:43:16.097 01:09:38 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:16.097 01:09:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:43:16.097 01:09:38 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:43:16.097 01:09:38 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:16.097 01:09:38 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:43:16.097 01:09:38 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:43:16.097 01:09:38 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "e4cb3de2-4f61-4895-b354-a26be27691a1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e4cb3de2-4f61-4895-b354-a26be27691a1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:43:16.097 01:09:38 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:43:16.097 01:09:38 blockdev_nvme -- 
bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:43:16.097 01:09:38 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:43:16.097 01:09:38 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 168745 00:43:16.097 01:09:38 blockdev_nvme -- common/autotest_common.sh@948 -- # '[' -z 168745 ']' 00:43:16.097 01:09:38 blockdev_nvme -- common/autotest_common.sh@952 -- # kill -0 168745 00:43:16.097 01:09:38 blockdev_nvme -- common/autotest_common.sh@953 -- # uname 00:43:16.097 01:09:38 blockdev_nvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:43:16.097 01:09:38 blockdev_nvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 168745 00:43:16.097 01:09:38 blockdev_nvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:43:16.097 killing process with pid 168745 00:43:16.097 01:09:38 blockdev_nvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:43:16.097 01:09:38 blockdev_nvme -- common/autotest_common.sh@966 -- # echo 'killing process with pid 168745' 00:43:16.097 01:09:38 blockdev_nvme -- common/autotest_common.sh@967 -- # kill 168745 00:43:16.097 01:09:38 blockdev_nvme -- common/autotest_common.sh@972 -- # wait 168745 00:43:18.676 01:09:41 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:43:18.676 01:09:41 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:43:18.676 01:09:41 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:43:18.676 01:09:41 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:18.676 01:09:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:43:18.676 ************************************ 00:43:18.676 START TEST bdev_hello_world 00:43:18.676 ************************************ 00:43:18.676 01:09:41 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:43:18.676 [2024-07-25 01:09:41.184140] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:43:18.676 [2024-07-25 01:09:41.184358] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168836 ] 00:43:18.940 [2024-07-25 01:09:41.363314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:18.941 [2024-07-25 01:09:41.552298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:19.508 [2024-07-25 01:09:42.016841] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:43:19.508 [2024-07-25 01:09:42.016925] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:43:19.508 [2024-07-25 01:09:42.016975] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:43:19.508 [2024-07-25 01:09:42.019805] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:43:19.508 [2024-07-25 01:09:42.020431] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:43:19.508 [2024-07-25 01:09:42.020467] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:43:19.508 [2024-07-25 01:09:42.020698] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:43:19.508 00:43:19.508 [2024-07-25 01:09:42.020748] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:43:20.888 00:43:20.888 real 0m2.015s 00:43:20.888 user 0m1.691s 00:43:20.888 sys 0m0.224s 00:43:20.888 01:09:43 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:20.888 01:09:43 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:43:20.888 ************************************ 00:43:20.888 END TEST bdev_hello_world 00:43:20.888 ************************************ 00:43:20.888 01:09:43 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:43:20.888 01:09:43 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:43:20.888 01:09:43 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:20.888 01:09:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:43:20.889 ************************************ 00:43:20.889 START TEST bdev_bounds 00:43:20.889 ************************************ 00:43:20.889 01:09:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:43:20.889 01:09:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=168881 00:43:20.889 Process bdevio pid: 168881 00:43:20.889 01:09:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:43:20.889 01:09:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 168881' 00:43:20.889 01:09:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 168881 00:43:20.889 01:09:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 168881 ']' 00:43:20.889 01:09:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:20.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:20.889 01:09:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:43:20.889 01:09:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:43:20.889 01:09:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:43:20.889 01:09:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:20.889 01:09:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:43:20.889 [2024-07-25 01:09:43.272178] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:43:20.889 [2024-07-25 01:09:43.272402] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168881 ] 00:43:20.889 [2024-07-25 01:09:43.461032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:21.147 [2024-07-25 01:09:43.658346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:21.147 [2024-07-25 01:09:43.658432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:21.147 [2024-07-25 01:09:43.658439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:43:21.713 01:09:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:21.713 01:09:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:43:21.713 01:09:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:43:21.713 I/O targets: 00:43:21.713 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:43:21.713 00:43:21.713 00:43:21.713 CUnit - A unit testing framework for C - Version 2.1-3 00:43:21.713 http://cunit.sourceforge.net/ 00:43:21.713 00:43:21.713 00:43:21.713 Suite: bdevio tests on: Nvme0n1 00:43:21.713 Test: blockdev write read block ...passed 00:43:21.713 Test: blockdev write zeroes read block ...passed 00:43:21.713 Test: blockdev write zeroes read no split ...passed 00:43:21.713 Test: blockdev write zeroes read split ...passed 00:43:21.713 Test: blockdev write zeroes read split partial ...passed 00:43:21.713 Test: blockdev reset ...[2024-07-25 01:09:44.306636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:43:21.713 [2024-07-25 01:09:44.311700] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:43:21.713 passed 00:43:21.713 Test: blockdev write read 8 blocks ...passed 00:43:21.713 Test: blockdev write read size > 128k ...passed 00:43:21.713 Test: blockdev write read invalid size ...passed 00:43:21.713 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:21.713 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:21.713 Test: blockdev write read max offset ...passed 00:43:21.713 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:21.713 Test: blockdev writev readv 8 blocks ...passed 00:43:21.713 Test: blockdev writev readv 30 x 1block ...passed 00:43:21.713 Test: blockdev writev readv block ...passed 00:43:21.713 Test: blockdev writev readv size > 128k ...passed 00:43:21.713 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:21.713 Test: blockdev comparev and writev ...[2024-07-25 01:09:44.320269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a00d000 len:0x1000 00:43:21.713 [2024-07-25 01:09:44.320414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:43:21.713 passed 00:43:21.713 Test: blockdev nvme passthru rw ...passed 00:43:21.713 Test: blockdev nvme passthru vendor specific ...[2024-07-25 01:09:44.321312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:43:21.713 [2024-07-25 01:09:44.321384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:43:21.713 passed 00:43:21.713 Test: blockdev nvme admin passthru ...passed 00:43:21.713 Test: blockdev copy ...passed 00:43:21.713 00:43:21.713 Run Summary: Type Total Ran Passed Failed Inactive 00:43:21.714 suites 1 1 n/a 0 0 00:43:21.714 tests 23 23 23 0 0 00:43:21.714 asserts 152 152 152 0 n/a 00:43:21.714 00:43:21.714 Elapsed time = 0.227 seconds 00:43:21.714 0 00:43:21.714 01:09:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 168881 00:43:21.714 01:09:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 168881 ']' 00:43:21.714 01:09:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 168881 00:43:21.714 01:09:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:43:21.714 01:09:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:43:21.714 01:09:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 168881 00:43:21.972 01:09:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:43:21.972 01:09:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:43:21.972 killing process with pid 168881 00:43:21.972 01:09:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 168881' 00:43:21.972 01:09:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 168881 00:43:21.972 01:09:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 168881 00:43:23.349 01:09:45 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:43:23.349 00:43:23.349 real 0m2.436s 00:43:23.349 user 0m5.566s 00:43:23.349 sys 0m0.382s 00:43:23.349 01:09:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:23.349 
01:09:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:43:23.349 ************************************ 00:43:23.349 END TEST bdev_bounds 00:43:23.349 ************************************ 00:43:23.349 01:09:45 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:43:23.349 01:09:45 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:43:23.349 01:09:45 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:23.349 01:09:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:43:23.349 ************************************ 00:43:23.349 START TEST bdev_nbd 00:43:23.349 ************************************ 00:43:23.349 01:09:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:43:23.349 01:09:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:43:23.349 01:09:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:43:23.349 01:09:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:23.349 01:09:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:43:23.349 01:09:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1') 00:43:23.349 01:09:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:43:23.349 01:09:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:43:23.349 01:09:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:43:23.349 01:09:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:43:23.349 01:09:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:43:23.349 01:09:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:43:23.349 01:09:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:43:23.349 01:09:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:43:23.350 01:09:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1') 00:43:23.350 01:09:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:43:23.350 01:09:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=168945 00:43:23.350 01:09:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:43:23.350 01:09:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 168945 /var/tmp/spdk-nbd.sock 00:43:23.350 01:09:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 168945 ']' 00:43:23.350 01:09:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:43:23.350 01:09:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:43:23.350 01:09:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:43:23.350 01:09:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:43:23.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:43:23.350 01:09:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:23.350 01:09:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:43:23.350 [2024-07-25 01:09:45.757002] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:43:23.350 [2024-07-25 01:09:45.757164] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:23.350 [2024-07-25 01:09:45.915349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:23.607 [2024-07-25 01:09:46.117826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:24.172 01:09:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:24.172 01:09:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:43:24.172 01:09:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:43:24.172 01:09:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:24.172 01:09:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1') 00:43:24.172 01:09:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:43:24.172 01:09:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:43:24.172 01:09:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:24.172 01:09:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1') 00:43:24.172 01:09:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:43:24.172 01:09:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:43:24.172 01:09:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:43:24.172 01:09:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:43:24.172 01:09:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:43:24.172 01:09:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:43:24.429 01:09:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:43:24.429 01:09:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:43:24.429 01:09:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:43:24.429 01:09:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:43:24.430 01:09:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:43:24.430 01:09:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:43:24.430 01:09:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:43:24.430 01:09:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:43:24.430 01:09:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:43:24.430 01:09:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:43:24.430 
01:09:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:43:24.430 01:09:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:24.430 1+0 records in 00:43:24.430 1+0 records out 00:43:24.430 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000553399 s, 7.4 MB/s 00:43:24.430 01:09:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:24.430 01:09:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:43:24.430 01:09:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:24.430 01:09:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:43:24.430 01:09:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:43:24.430 01:09:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:43:24.430 01:09:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:43:24.430 01:09:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:24.687 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:43:24.687 { 00:43:24.687 "nbd_device": "/dev/nbd0", 00:43:24.687 "bdev_name": "Nvme0n1" 00:43:24.687 } 00:43:24.687 ]' 00:43:24.687 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:43:24.687 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:43:24.687 { 00:43:24.687 "nbd_device": "/dev/nbd0", 00:43:24.687 "bdev_name": "Nvme0n1" 00:43:24.687 } 00:43:24.687 ]' 00:43:24.687 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:43:24.688 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:43:24.688 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:24.688 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:43:24.688 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:24.688 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:43:24.688 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:24.688 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:43:24.688 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:24.688 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:24.688 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:24.688 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:24.688 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:24.688 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1') 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1') 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:43:24.945 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:43:25.203 /dev/nbd0 00:43:25.203 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:43:25.203 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:43:25.203 01:09:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:43:25.203 01:09:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:43:25.203 01:09:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 
00:43:25.203 01:09:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:43:25.203 01:09:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:43:25.203 01:09:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:43:25.203 01:09:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:43:25.203 01:09:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:43:25.203 01:09:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:25.203 1+0 records in 00:43:25.203 1+0 records out 00:43:25.203 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463762 s, 8.8 MB/s 00:43:25.203 01:09:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:25.203 01:09:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:43:25.203 01:09:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:25.203 01:09:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:43:25.203 01:09:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:43:25.203 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:25.204 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:43:25.204 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:43:25.204 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:25.204 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:25.461 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:43:25.461 { 00:43:25.461 "nbd_device": "/dev/nbd0", 00:43:25.461 "bdev_name": "Nvme0n1" 00:43:25.461 } 00:43:25.461 ]' 00:43:25.461 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:43:25.461 01:09:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:43:25.461 { 00:43:25.461 "nbd_device": "/dev/nbd0", 00:43:25.461 "bdev_name": "Nvme0n1" 00:43:25.461 } 00:43:25.461 ]' 00:43:25.461 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:43:25.461 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:43:25.461 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:43:25.461 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:43:25.461 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:43:25.461 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:43:25.461 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:43:25.461 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:43:25.461 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:43:25.461 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:43:25.461 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:43:25.461 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:43:25.461 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:43:25.461 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:43:25.461 256+0 records in 00:43:25.461 256+0 records out 00:43:25.461 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0088799 s, 118 MB/s 00:43:25.461 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:43:25.461 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:43:25.719 256+0 records in 00:43:25.719 256+0 records out 00:43:25.719 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0595023 s, 17.6 MB/s 00:43:25.719 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:43:25.719 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:43:25.719 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:43:25.719 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:43:25.719 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:43:25.719 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:43:25.719 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:43:25.719 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:43:25.719 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:43:25.719 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:43:25.719 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:43:25.719 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:25.719 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:43:25.719 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:25.719 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:43:25.719 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:25.719 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:43:25.977 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:25.977 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:25.977 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:25.977 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:25.977 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:25.977 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:25.977 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:25.977 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:25.977 01:09:48 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:43:25.977 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:25.977 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:26.235 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:43:26.235 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:43:26.235 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:43:26.235 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:43:26.235 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:43:26.235 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:43:26.235 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:43:26.235 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:43:26.235 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:43:26.235 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:43:26.235 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:43:26.235 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:43:26.235 01:09:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:43:26.235 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:26.235 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:43:26.235 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:43:26.235 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:43:26.235 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:43:26.493 malloc_lvol_verify 00:43:26.493 01:09:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:43:26.767 784d3d56-8888-43a5-9575-9e9694663add 00:43:26.767 01:09:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:43:27.048 801dfbc9-2458-4a5c-b5c9-d70b9839bf43 00:43:27.048 01:09:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:43:27.048 /dev/nbd0 00:43:27.048 01:09:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:43:27.048 mke2fs 1.46.5 (30-Dec-2021) 00:43:27.048 00:43:27.048 Filesystem too small for a journal 00:43:27.048 Discarding device blocks: 0/1024 done 00:43:27.048 Creating filesystem with 1024 4k blocks and 1024 inodes 00:43:27.048 00:43:27.048 Allocating group tables: 0/1 done 00:43:27.048 Writing inode tables: 0/1 done 00:43:27.048 Writing superblocks and filesystem accounting information: 0/1 done 00:43:27.048 00:43:27.048 01:09:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:43:27.048 01:09:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
00:43:27.048 01:09:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:27.048 01:09:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:43:27.048 01:09:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:27.048 01:09:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:43:27.048 01:09:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:27.048 01:09:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:43:27.306 01:09:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:27.306 01:09:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:27.306 01:09:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:27.306 01:09:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:27.306 01:09:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:27.306 01:09:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:27.306 01:09:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:27.306 01:09:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:27.306 01:09:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:43:27.306 01:09:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:43:27.306 01:09:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 168945 00:43:27.306 01:09:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 168945 ']' 00:43:27.306 01:09:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 168945 00:43:27.306 01:09:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:43:27.306 01:09:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:43:27.306 01:09:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 168945 00:43:27.306 01:09:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:43:27.306 killing process with pid 168945 00:43:27.306 01:09:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:43:27.306 01:09:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 168945' 00:43:27.306 01:09:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 168945 00:43:27.306 01:09:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 168945 00:43:28.679 01:09:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:43:28.679 00:43:28.679 real 0m5.507s 00:43:28.679 user 0m7.616s 00:43:28.679 sys 0m1.220s 00:43:28.679 ************************************ 00:43:28.679 END TEST bdev_nbd 00:43:28.679 ************************************ 00:43:28.679 01:09:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:28.679 01:09:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:43:28.679 01:09:51 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:43:28.679 01:09:51 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:43:28.679 skipping fio tests on NVMe due to multi-ns failures. 
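The bdev_nbd stage that finishes above exercises the kernel NBD path end to end: a bdev_svc app is started with its RPC socket at /var/tmp/spdk-nbd.sock, the Nvme0n1 bdev is exported as /dev/nbd0, a random pattern is written through the block device and compared back, an lvol-backed export is formatted with mkfs.ext4, and everything is torn down again. For orientation, a hand-run sketch of the same flow using only commands that appear in the trace; the repo paths and the bdev name are taken from this log and the scratch-file path is arbitrary, so all of them would differ on another setup:

    # start a minimal SPDK app that only serves RPCs (run it in the background)
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &

    # export the bdev as a kernel block device
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0

    # write a 1 MiB random pattern through /dev/nbd0 and read it back for comparison
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0

    # detach the NBD device and confirm nothing is exported any more
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks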
00:43:28.679 01:09:51 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:43:28.679 01:09:51 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:43:28.679 01:09:51 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:43:28.679 01:09:51 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:43:28.679 01:09:51 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:28.679 01:09:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:43:28.679 ************************************ 00:43:28.679 START TEST bdev_verify 00:43:28.679 ************************************ 00:43:28.679 01:09:51 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:43:28.679 [2024-07-25 01:09:51.315030] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:43:28.679 [2024-07-25 01:09:51.315206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169139 ] 00:43:28.936 [2024-07-25 01:09:51.477735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:29.193 [2024-07-25 01:09:51.660522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:29.193 [2024-07-25 01:09:51.660526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:29.450 Running I/O for 5 seconds... 
00:43:34.713 00:43:34.713 Latency(us) 00:43:34.713 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:34.713 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:43:34.713 Verification LBA range: start 0x0 length 0xa0000 00:43:34.713 Nvme0n1 : 5.01 11840.05 46.25 0.00 0.00 10758.48 990.84 17226.61 00:43:34.713 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:43:34.713 Verification LBA range: start 0xa0000 length 0xa0000 00:43:34.713 Nvme0n1 : 5.01 11477.79 44.84 0.00 0.00 11098.40 787.99 21595.67 00:43:34.713 =================================================================================================================== 00:43:34.713 Total : 23317.84 91.09 0.00 0.00 10925.83 787.99 21595.67 00:43:36.089 00:43:36.089 real 0m7.361s 00:43:36.089 user 0m13.527s 00:43:36.089 sys 0m0.268s 00:43:36.089 01:09:58 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:36.089 ************************************ 00:43:36.089 END TEST bdev_verify 00:43:36.089 ************************************ 00:43:36.089 01:09:58 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:43:36.089 01:09:58 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:43:36.089 01:09:58 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:43:36.089 01:09:58 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:36.089 01:09:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:43:36.089 ************************************ 00:43:36.089 START TEST bdev_verify_big_io 00:43:36.089 ************************************ 00:43:36.089 01:09:58 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:43:36.348 [2024-07-25 01:09:58.761669] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:43:36.348 [2024-07-25 01:09:58.761891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169242 ] 00:43:36.348 [2024-07-25 01:09:58.946012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:36.606 [2024-07-25 01:09:59.143704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:36.606 [2024-07-25 01:09:59.143708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:37.204 Running I/O for 5 seconds... 
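The bdev_verify run whose result table appears just above is a single bdevperf invocation against the generated bdev.json: queue depth 128, 4 KiB I/Os, the "verify" workload, a 5 second run, and core mask 0x3 (two reactors, hence the two per-core rows). The MiB/s column is just IOPS times the I/O size, e.g. 11840.05 IOPS x 4096 B / 1048576 = 46.25 MiB/s for core mask 0x1; the same relation holds for the 64 KiB big-I/O and write_zeroes tables further down. A hand-run equivalent of the invocation, with the flag meanings as far as the trace itself shows them:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    # -q  queue depth per job
    # -o  I/O size in bytes
    # -w  workload type (verify writes data, then reads it back and compares)
    # -t  run time in seconds
    # -m  core mask (0x3 = cores 0 and 1)
    # -C  passed through exactly as in the log; its meaning is not shown in this trace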
00:43:42.465 00:43:42.465 Latency(us) 00:43:42.465 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:42.465 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:43:42.465 Verification LBA range: start 0x0 length 0xa000 00:43:42.465 Nvme0n1 : 5.07 996.27 62.27 0.00 0.00 125846.57 243.81 234681.30 00:43:42.465 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:43:42.465 Verification LBA range: start 0xa000 length 0xa000 00:43:42.465 Nvme0n1 : 5.07 982.02 61.38 0.00 0.00 127770.87 199.92 176759.95 00:43:42.465 =================================================================================================================== 00:43:42.465 Total : 1978.29 123.64 0.00 0.00 126801.72 199.92 234681.30 00:43:43.835 00:43:43.835 real 0m7.485s 00:43:43.835 user 0m13.684s 00:43:43.835 sys 0m0.304s 00:43:43.835 01:10:06 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:43.835 01:10:06 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:43:43.835 ************************************ 00:43:43.835 END TEST bdev_verify_big_io 00:43:43.835 ************************************ 00:43:43.835 01:10:06 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:43.835 01:10:06 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:43:43.835 01:10:06 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:43.835 01:10:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:43:43.835 ************************************ 00:43:43.835 START TEST bdev_write_zeroes 00:43:43.835 ************************************ 00:43:43.835 01:10:06 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:43.835 [2024-07-25 01:10:06.327808] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:43:43.835 [2024-07-25 01:10:06.328687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169345 ] 00:43:44.093 [2024-07-25 01:10:06.514744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:44.093 [2024-07-25 01:10:06.711163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:44.658 Running I/O for 1 seconds... 
00:43:45.589 00:43:45.589 Latency(us) 00:43:45.589 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:45.589 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:43:45.589 Nvme0n1 : 1.00 67590.21 264.02 0.00 0.00 1889.18 635.86 14043.43 00:43:45.589 =================================================================================================================== 00:43:45.589 Total : 67590.21 264.02 0.00 0.00 1889.18 635.86 14043.43 00:43:46.960 00:43:46.960 real 0m3.101s 00:43:46.960 user 0m2.752s 00:43:46.960 sys 0m0.248s 00:43:46.960 01:10:09 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:46.960 ************************************ 00:43:46.960 END TEST bdev_write_zeroes 00:43:46.960 ************************************ 00:43:46.960 01:10:09 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:43:46.960 01:10:09 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:46.960 01:10:09 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:43:46.960 01:10:09 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:46.960 01:10:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:43:46.960 ************************************ 00:43:46.960 START TEST bdev_json_nonenclosed 00:43:46.960 ************************************ 00:43:46.960 01:10:09 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:46.960 [2024-07-25 01:10:09.464527] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:43:46.960 [2024-07-25 01:10:09.464690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169409 ] 00:43:47.218 [2024-07-25 01:10:09.623776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:47.218 [2024-07-25 01:10:09.815816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:47.218 [2024-07-25 01:10:09.815920] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:43:47.218 [2024-07-25 01:10:09.815971] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:43:47.218 [2024-07-25 01:10:09.816001] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:43:47.784 00:43:47.784 real 0m0.827s 00:43:47.784 user 0m0.583s 00:43:47.784 sys 0m0.144s 00:43:47.784 01:10:10 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:47.784 ************************************ 00:43:47.784 END TEST bdev_json_nonenclosed 00:43:47.784 ************************************ 00:43:47.784 01:10:10 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:43:47.784 01:10:10 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:47.784 01:10:10 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:43:47.784 01:10:10 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:47.784 01:10:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:43:47.784 ************************************ 00:43:47.784 START TEST bdev_json_nonarray 00:43:47.784 ************************************ 00:43:47.784 01:10:10 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:47.784 [2024-07-25 01:10:10.376428] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:43:47.784 [2024-07-25 01:10:10.376652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169438 ] 00:43:48.042 [2024-07-25 01:10:10.552632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:48.300 [2024-07-25 01:10:10.728124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:48.300 [2024-07-25 01:10:10.728247] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
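The two JSON stages above are negative tests: bdevperf is fed deliberately malformed configs, nonenclosed.json provoking "not enclosed in {}" and nonarray.json provoking "'subsystems' should be an array", and the wrapper counts the non-zero spdk_app_stop as a pass. The contents of those two files are not shown in the log; for orientation only, the shape the loader does accept is a top-level object whose "subsystems" key is an array of subsystem objects, of which the bdev entry used elsewhere in this run is one element, roughly:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_nvme_attach_controller",
              "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } }
          ]
        }
      ]
    }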
00:43:48.300 [2024-07-25 01:10:10.728295] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:43:48.300 [2024-07-25 01:10:10.728322] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:43:48.558 00:43:48.558 real 0m0.847s 00:43:48.558 user 0m0.578s 00:43:48.558 sys 0m0.170s 00:43:48.558 01:10:11 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:48.558 ************************************ 00:43:48.558 END TEST bdev_json_nonarray 00:43:48.558 ************************************ 00:43:48.558 01:10:11 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:43:48.558 01:10:11 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:43:48.558 01:10:11 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:43:48.558 01:10:11 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:43:48.558 01:10:11 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:43:48.558 01:10:11 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:43:48.558 01:10:11 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:43:48.558 01:10:11 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:43:48.817 01:10:11 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:43:48.817 01:10:11 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:43:48.817 01:10:11 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:43:48.817 01:10:11 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:43:48.817 00:43:48.817 real 0m34.128s 00:43:48.817 user 0m50.283s 00:43:48.817 sys 0m3.853s 00:43:48.817 01:10:11 blockdev_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:48.817 01:10:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:43:48.817 ************************************ 00:43:48.817 END TEST blockdev_nvme 00:43:48.817 ************************************ 00:43:48.817 01:10:11 -- spdk/autotest.sh@213 -- # uname -s 00:43:48.817 01:10:11 -- spdk/autotest.sh@213 -- # [[ Linux == Linux ]] 00:43:48.817 01:10:11 -- spdk/autotest.sh@214 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:43:48.817 01:10:11 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:43:48.817 01:10:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:48.817 01:10:11 -- common/autotest_common.sh@10 -- # set +x 00:43:48.817 ************************************ 00:43:48.817 START TEST blockdev_nvme_gpt 00:43:48.817 ************************************ 00:43:48.817 01:10:11 blockdev_nvme_gpt -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:43:48.817 * Looking for test storage... 
00:43:48.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:43:48.817 01:10:11 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:43:48.817 01:10:11 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:43:48.817 01:10:11 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:43:48.817 01:10:11 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:43:48.817 01:10:11 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:43:48.817 01:10:11 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:43:48.817 01:10:11 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:43:48.817 01:10:11 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:43:48.817 01:10:11 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:43:48.817 01:10:11 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:43:48.817 01:10:11 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:43:48.817 01:10:11 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:43:48.817 01:10:11 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:43:48.817 01:10:11 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:43:48.817 01:10:11 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:43:48.817 01:10:11 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:43:48.817 01:10:11 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:43:48.817 01:10:11 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:43:48.817 01:10:11 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:43:48.817 01:10:11 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:43:48.817 01:10:11 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:43:48.817 01:10:11 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:43:48.817 01:10:11 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:43:48.817 01:10:11 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:43:48.817 01:10:11 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=169525 00:43:48.817 01:10:11 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:43:48.817 01:10:11 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 169525 00:43:48.817 01:10:11 blockdev_nvme_gpt -- common/autotest_common.sh@829 -- # '[' -z 169525 ']' 00:43:48.817 01:10:11 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:48.817 01:10:11 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # local max_retries=100 00:43:48.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:48.817 01:10:11 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:43:48.817 01:10:11 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:48.817 01:10:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:48.817 01:10:11 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:43:49.093 [2024-07-25 01:10:11.511184] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:43:49.093 [2024-07-25 01:10:11.512292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169525 ] 00:43:49.093 [2024-07-25 01:10:11.698562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:49.371 [2024-07-25 01:10:11.872850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:50.302 01:10:12 blockdev_nvme_gpt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:50.302 01:10:12 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # return 0 00:43:50.302 01:10:12 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:43:50.302 01:10:12 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:43:50.303 01:10:12 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:43:50.560 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:43:50.560 Waiting for block devices as requested 00:43:50.560 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:43:50.818 01:10:13 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:43:50.818 01:10:13 blockdev_nvme_gpt -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:43:50.818 01:10:13 blockdev_nvme_gpt -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:43:50.818 01:10:13 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # local nvme bdf 00:43:50.818 01:10:13 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:43:50.818 01:10:13 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:43:50.818 01:10:13 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:43:50.818 01:10:13 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:43:50.818 01:10:13 blockdev_nvme_gpt -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:43:50.818 01:10:13 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1') 00:43:50.818 01:10:13 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:43:50.818 01:10:13 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:43:50.818 01:10:13 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:43:50.818 01:10:13 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:43:50.818 01:10:13 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:43:50.818 01:10:13 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:43:50.818 01:10:13 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:43:50.818 BYT; 00:43:50.818 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:43:50.818 01:10:13 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:43:50.818 BYT; 00:43:50.818 
/dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:43:50.818 01:10:13 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:43:50.818 01:10:13 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:43:50.818 01:10:13 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:43:50.818 01:10:13 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:43:50.818 01:10:13 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:43:50.818 01:10:13 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:43:51.076 01:10:13 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:43:51.076 01:10:13 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:43:51.076 01:10:13 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:43:51.076 01:10:13 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:43:51.076 01:10:13 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:43:51.076 01:10:13 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:43:51.076 01:10:13 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:43:51.076 01:10:13 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:43:51.076 01:10:13 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:43:51.076 01:10:13 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:43:51.076 01:10:13 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:43:51.076 01:10:13 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:43:51.076 01:10:13 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:43:51.076 01:10:13 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:43:51.076 01:10:13 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:43:51.076 01:10:13 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:43:51.076 01:10:13 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:43:51.076 01:10:13 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:43:51.076 01:10:13 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:43:51.076 01:10:13 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:43:51.076 01:10:13 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:43:51.076 01:10:13 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:43:51.076 01:10:13 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:43:52.448 The operation has completed successfully. 
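The GPT setup performed here boils down to three steps, all of which are visible in the trace (the second sgdisk call follows just below): put a GPT label on the blank namespace, create two partitions covering its first and second halves, then retag them with the SPDK partition-type GUIDs that the gpt bdev module matches against, read out of module/bdev/gpt/gpt.h earlier in the log. A condensed sketch using the same device and GUID values as this run:

    # create a GPT label and two equal partitions on the test namespace
    parted -s /dev/nvme0n1 mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% \
        mkpart SPDK_TEST_second 50% 100%

    # stamp partition 1 with the current SPDK GPT type GUID and a fixed unique GUID
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1

    # stamp partition 2 with the old SPDK GPT type GUID
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1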
00:43:52.448 01:10:14 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:43:53.381 The operation has completed successfully. 00:43:53.381 01:10:15 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:43:53.640 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:43:53.898 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:43:54.833 01:10:17 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:43:54.833 01:10:17 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:54.833 01:10:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:54.833 [] 00:43:54.833 01:10:17 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:54.833 01:10:17 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:43:54.833 01:10:17 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:43:54.833 01:10:17 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:43:54.833 01:10:17 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:43:54.833 01:10:17 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:43:54.833 01:10:17 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:54.833 01:10:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:54.833 01:10:17 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:54.833 01:10:17 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:43:54.833 01:10:17 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:54.833 01:10:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:54.833 01:10:17 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:54.833 01:10:17 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:43:54.833 01:10:17 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:43:54.833 01:10:17 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:54.833 01:10:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:54.833 01:10:17 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:54.833 01:10:17 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:43:54.833 01:10:17 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:54.833 01:10:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:54.833 01:10:17 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:54.833 01:10:17 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:43:54.833 01:10:17 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:54.833 01:10:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:54.833 01:10:17 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:54.833 01:10:17 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:43:54.833 01:10:17 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:43:54.833 
01:10:17 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:43:54.833 01:10:17 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:54.833 01:10:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:54.833 01:10:17 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:54.833 01:10:17 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:43:54.833 01:10:17 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:43:54.833 01:10:17 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:43:54.833 01:10:17 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:43:54.834 01:10:17 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1p1 00:43:54.834 01:10:17 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:43:54.834 01:10:17 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 169525 00:43:54.834 01:10:17 blockdev_nvme_gpt -- common/autotest_common.sh@948 -- # '[' -z 169525 ']' 00:43:54.834 01:10:17 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # kill -0 169525 00:43:54.834 01:10:17 blockdev_nvme_gpt -- 
common/autotest_common.sh@953 -- # uname 00:43:54.834 01:10:17 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:43:54.834 01:10:17 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 169525 00:43:55.091 01:10:17 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:43:55.091 01:10:17 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:43:55.091 01:10:17 blockdev_nvme_gpt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 169525' 00:43:55.091 killing process with pid 169525 00:43:55.091 01:10:17 blockdev_nvme_gpt -- common/autotest_common.sh@967 -- # kill 169525 00:43:55.091 01:10:17 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # wait 169525 00:43:57.621 01:10:19 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:43:57.621 01:10:19 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:43:57.621 01:10:19 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:43:57.621 01:10:19 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:57.621 01:10:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:57.621 ************************************ 00:43:57.621 START TEST bdev_hello_world 00:43:57.621 ************************************ 00:43:57.621 01:10:19 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:43:57.621 [2024-07-25 01:10:19.947270] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:43:57.621 [2024-07-25 01:10:19.948291] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169972 ] 00:43:57.621 [2024-07-25 01:10:20.127010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:57.879 [2024-07-25 01:10:20.320138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:58.137 [2024-07-25 01:10:20.776307] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:43:58.137 [2024-07-25 01:10:20.776546] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:43:58.137 [2024-07-25 01:10:20.776630] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:43:58.137 [2024-07-25 01:10:20.779615] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:43:58.137 [2024-07-25 01:10:20.780182] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:43:58.137 [2024-07-25 01:10:20.780317] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:43:58.137 [2024-07-25 01:10:20.780643] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:43:58.137 00:43:58.137 [2024-07-25 01:10:20.780820] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:43:59.512 ************************************ 00:43:59.512 END TEST bdev_hello_world 00:43:59.512 ************************************ 00:43:59.512 00:43:59.512 real 0m2.129s 00:43:59.512 user 0m1.799s 00:43:59.512 sys 0m0.228s 00:43:59.512 01:10:21 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:59.512 01:10:21 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:43:59.512 01:10:22 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:43:59.512 01:10:22 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:43:59.512 01:10:22 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:59.512 01:10:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:59.512 ************************************ 00:43:59.512 START TEST bdev_bounds 00:43:59.512 ************************************ 00:43:59.512 01:10:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:43:59.512 01:10:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=170017 00:43:59.512 01:10:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:43:59.512 01:10:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:43:59.512 01:10:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 170017' 00:43:59.512 Process bdevio pid: 170017 00:43:59.512 01:10:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 170017 00:43:59.512 01:10:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 170017 ']' 00:43:59.512 01:10:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:59.512 01:10:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:43:59.512 01:10:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:59.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:59.512 01:10:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:59.512 01:10:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:43:59.512 [2024-07-25 01:10:22.154567] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
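The hello_world stage traced above selects its target by filtering the bdev dump for unclaimed entries (blockdev.sh@747-748) and handing the first name to the hello_bdev example. A condensed sketch of that flow, assuming the dump has been saved to a file (bdevs.json is a hypothetical name; the script itself takes it from the RPC layer) and the same repo layout as this job:

# Keep only unclaimed bdevs, then pull out their names, mirroring the two jq
# passes and the mapfile shown in the trace.
mapfile -t bdevs_name < <(jq -r '.[] | select(.claimed == false)' bdevs.json | jq -r .name)
hello_world_bdev=${bdevs_name[0]}    # Nvme0n1p1 in this run
# Run the example against the same JSON config the test uses.
./build/examples/hello_bdev --json ./test/bdev/bdev.json -b "$hello_world_bdev"

A successful run prints the same open/write/read/"Hello World!"/stop sequence seen in the log above.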
00:43:59.512 [2024-07-25 01:10:22.155219] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170017 ] 00:43:59.770 [2024-07-25 01:10:22.343810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:44:00.027 [2024-07-25 01:10:22.536584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:44:00.027 [2024-07-25 01:10:22.536763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:00.027 [2024-07-25 01:10:22.536768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:44:00.607 01:10:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:44:00.607 01:10:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:44:00.607 01:10:23 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:44:00.607 I/O targets: 00:44:00.607 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:44:00.607 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:44:00.607 00:44:00.607 00:44:00.607 CUnit - A unit testing framework for C - Version 2.1-3 00:44:00.607 http://cunit.sourceforge.net/ 00:44:00.607 00:44:00.607 00:44:00.607 Suite: bdevio tests on: Nvme0n1p2 00:44:00.607 Test: blockdev write read block ...passed 00:44:00.607 Test: blockdev write zeroes read block ...passed 00:44:00.607 Test: blockdev write zeroes read no split ...passed 00:44:00.607 Test: blockdev write zeroes read split ...passed 00:44:00.607 Test: blockdev write zeroes read split partial ...passed 00:44:00.607 Test: blockdev reset ...[2024-07-25 01:10:23.189791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:44:00.607 [2024-07-25 01:10:23.193491] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:44:00.607 passed 00:44:00.607 Test: blockdev write read 8 blocks ...passed 00:44:00.607 Test: blockdev write read size > 128k ...passed 00:44:00.607 Test: blockdev write read invalid size ...passed 00:44:00.607 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:44:00.607 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:44:00.607 Test: blockdev write read max offset ...passed 00:44:00.607 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:44:00.607 Test: blockdev writev readv 8 blocks ...passed 00:44:00.607 Test: blockdev writev readv 30 x 1block ...passed 00:44:00.607 Test: blockdev writev readv block ...passed 00:44:00.607 Test: blockdev writev readv size > 128k ...passed 00:44:00.607 Test: blockdev writev readv size > 128k in two iovs ...passed 00:44:00.607 Test: blockdev comparev and writev ...[2024-07-25 01:10:23.203328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x1ba0d000 len:0x1000 00:44:00.607 [2024-07-25 01:10:23.203498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:44:00.607 passed 00:44:00.607 Test: blockdev nvme passthru rw ...passed 00:44:00.607 Test: blockdev nvme passthru vendor specific ...passed 00:44:00.607 Test: blockdev nvme admin passthru ...passed 00:44:00.607 Test: blockdev copy ...passed 00:44:00.607 Suite: bdevio tests on: Nvme0n1p1 00:44:00.607 Test: blockdev write read block ...passed 00:44:00.607 Test: blockdev write zeroes read block ...passed 00:44:00.607 Test: blockdev write zeroes read no split ...passed 00:44:00.607 Test: blockdev write zeroes read split ...passed 00:44:00.885 Test: blockdev write zeroes read split partial ...passed 00:44:00.885 Test: blockdev reset ...[2024-07-25 01:10:23.268041] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:44:00.885 [2024-07-25 01:10:23.271501] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:44:00.885 passed 00:44:00.885 Test: blockdev write read 8 blocks ...passed 00:44:00.885 Test: blockdev write read size > 128k ...passed 00:44:00.885 Test: blockdev write read invalid size ...passed 00:44:00.885 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:44:00.885 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:44:00.885 Test: blockdev write read max offset ...passed 00:44:00.885 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:44:00.885 Test: blockdev writev readv 8 blocks ...passed 00:44:00.885 Test: blockdev writev readv 30 x 1block ...passed 00:44:00.885 Test: blockdev writev readv block ...passed 00:44:00.885 Test: blockdev writev readv size > 128k ...passed 00:44:00.885 Test: blockdev writev readv size > 128k in two iovs ...passed 00:44:00.885 Test: blockdev comparev and writev ...[2024-07-25 01:10:23.281055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x1ba09000 len:0x1000 00:44:00.885 [2024-07-25 01:10:23.281216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:44:00.885 passed 00:44:00.885 Test: blockdev nvme passthru rw ...passed 00:44:00.885 Test: blockdev nvme passthru vendor specific ...passed 00:44:00.885 Test: blockdev nvme admin passthru ...passed 00:44:00.885 Test: blockdev copy ...passed 00:44:00.885 00:44:00.885 Run Summary: Type Total Ran Passed Failed Inactive 00:44:00.885 suites 2 2 n/a 0 0 00:44:00.886 tests 46 46 46 0 0 00:44:00.886 asserts 284 284 284 0 n/a 00:44:00.886 00:44:00.886 Elapsed time = 0.440 seconds 00:44:00.886 0 00:44:00.886 01:10:23 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 170017 00:44:00.886 01:10:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 170017 ']' 00:44:00.886 01:10:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 170017 00:44:00.886 01:10:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:44:00.886 01:10:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:44:00.886 01:10:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 170017 00:44:00.886 01:10:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:44:00.886 01:10:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:44:00.886 01:10:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 170017' 00:44:00.886 killing process with pid 170017 00:44:00.886 01:10:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@967 -- # kill 170017 00:44:00.886 01:10:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # wait 170017 00:44:01.817 ************************************ 00:44:01.817 END TEST bdev_bounds 00:44:01.817 ************************************ 00:44:01.817 01:10:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:44:01.817 00:44:01.817 real 0m2.388s 00:44:01.817 user 0m5.451s 00:44:01.817 sys 0m0.376s 00:44:01.817 01:10:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:01.817 01:10:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:44:02.075 01:10:24 blockdev_nvme_gpt -- 
bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:44:02.075 01:10:24 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:44:02.075 01:10:24 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:02.075 01:10:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:44:02.075 ************************************ 00:44:02.075 START TEST bdev_nbd 00:44:02.075 ************************************ 00:44:02.075 01:10:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:44:02.075 01:10:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:44:02.075 01:10:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:44:02.075 01:10:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:02.075 01:10:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:44:02.075 01:10:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2') 00:44:02.075 01:10:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:44:02.075 01:10:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=2 00:44:02.075 01:10:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:44:02.075 01:10:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:44:02.075 01:10:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:44:02.075 01:10:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=2 00:44:02.075 01:10:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:44:02.075 01:10:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:44:02.075 01:10:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:44:02.075 01:10:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:44:02.075 01:10:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=170079 00:44:02.075 01:10:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:44:02.075 01:10:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:44:02.075 01:10:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 170079 /var/tmp/spdk-nbd.sock 00:44:02.075 01:10:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 170079 ']' 00:44:02.075 01:10:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:44:02.075 01:10:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:44:02.075 01:10:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:44:02.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:44:02.075 01:10:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:44:02.075 01:10:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:44:02.075 [2024-07-25 01:10:24.621834] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:44:02.075 [2024-07-25 01:10:24.622263] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:02.332 [2024-07-25 01:10:24.798538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:02.590 [2024-07-25 01:10:24.985479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:03.155 01:10:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:44:03.155 01:10:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:44:03.155 01:10:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:44:03.155 01:10:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:03.155 01:10:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:44:03.155 01:10:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:44:03.155 01:10:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:44:03.155 01:10:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:03.155 01:10:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:44:03.155 01:10:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:44:03.155 01:10:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:44:03.155 01:10:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:44:03.155 01:10:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:44:03.155 01:10:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:44:03.155 01:10:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:44:03.413 01:10:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:44:03.413 01:10:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:44:03.413 01:10:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:44:03.413 01:10:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:44:03.413 01:10:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:44:03.413 01:10:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:44:03.413 01:10:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:44:03.413 01:10:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:44:03.413 01:10:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:44:03.413 01:10:25 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:44:03.413 01:10:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:44:03.413 01:10:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:44:03.413 1+0 records in 00:44:03.413 1+0 records out 00:44:03.413 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413719 s, 9.9 MB/s 00:44:03.413 01:10:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:03.413 01:10:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:44:03.413 01:10:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:03.413 01:10:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:44:03.413 01:10:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:44:03.413 01:10:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:44:03.413 01:10:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:44:03.413 01:10:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:44:03.671 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:44:03.671 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:44:03.671 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:44:03.671 01:10:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:44:03.671 01:10:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:44:03.671 01:10:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:44:03.671 01:10:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:44:03.671 01:10:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:44:03.671 01:10:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:44:03.671 01:10:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:44:03.671 01:10:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:44:03.671 01:10:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:44:03.671 1+0 records in 00:44:03.671 1+0 records out 00:44:03.671 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000830756 s, 4.9 MB/s 00:44:03.671 01:10:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:03.671 01:10:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:44:03.671 01:10:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:03.671 01:10:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:44:03.671 01:10:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:44:03.671 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:44:03.671 01:10:26 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:44:03.671 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:44:03.928 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:44:03.928 { 00:44:03.928 "nbd_device": "/dev/nbd0", 00:44:03.928 "bdev_name": "Nvme0n1p1" 00:44:03.928 }, 00:44:03.928 { 00:44:03.928 "nbd_device": "/dev/nbd1", 00:44:03.928 "bdev_name": "Nvme0n1p2" 00:44:03.928 } 00:44:03.928 ]' 00:44:03.928 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:44:03.928 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:44:03.928 { 00:44:03.928 "nbd_device": "/dev/nbd0", 00:44:03.928 "bdev_name": "Nvme0n1p1" 00:44:03.928 }, 00:44:03.928 { 00:44:03.928 "nbd_device": "/dev/nbd1", 00:44:03.928 "bdev_name": "Nvme0n1p2" 00:44:03.928 } 00:44:03.928 ]' 00:44:03.928 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:44:03.928 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:44:03.928 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:03.928 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:44:03.928 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:44:03.928 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:44:03.928 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:44:03.928 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:44:04.185 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:44:04.185 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:44:04.185 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:44:04.185 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:44:04.185 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:44:04.185 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:44:04.185 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:44:04.185 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:44:04.185 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:44:04.185 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:44:04.442 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:44:04.442 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:44:04.442 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:44:04.442 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:44:04.442 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:44:04.442 01:10:26 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:44:04.442 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:44:04.442 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:44:04.442 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:44:04.442 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:04.442 01:10:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:44:04.700 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:44:04.700 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:44:04.700 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:44:04.700 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:44:04.700 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:44:04.700 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:44:04.700 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:44:04.700 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:44:04.700 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:44:04.700 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:44:04.700 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:44:04.700 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:44:04.700 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:44:04.700 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:04.700 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:44:04.700 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:44:04.700 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:44:04.700 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:44:04.700 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:44:04.700 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:04.700 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:44:04.700 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:44:04.700 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:44:04.700 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:44:04.700 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:44:04.700 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:44:04.700 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:44:04.700 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_start_disk Nvme0n1p1 /dev/nbd0 00:44:04.957 /dev/nbd0 00:44:04.957 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:44:04.957 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:44:04.957 01:10:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:44:04.957 01:10:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:44:04.957 01:10:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:44:04.957 01:10:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:44:04.957 01:10:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:44:04.957 01:10:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:44:04.957 01:10:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:44:04.957 01:10:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:44:04.957 01:10:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:44:04.957 1+0 records in 00:44:04.957 1+0 records out 00:44:04.957 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000633371 s, 6.5 MB/s 00:44:04.957 01:10:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:04.957 01:10:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:44:04.957 01:10:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:04.957 01:10:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:44:04.957 01:10:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:44:04.957 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:44:04.957 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:44:04.957 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:44:05.214 /dev/nbd1 00:44:05.214 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:44:05.214 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:44:05.214 01:10:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:44:05.214 01:10:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:44:05.214 01:10:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:44:05.214 01:10:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:44:05.214 01:10:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:44:05.214 01:10:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:44:05.214 01:10:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:44:05.214 01:10:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:44:05.214 01:10:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:44:05.214 
1+0 records in 00:44:05.214 1+0 records out 00:44:05.214 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000931727 s, 4.4 MB/s 00:44:05.214 01:10:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:05.214 01:10:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:44:05.215 01:10:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:05.215 01:10:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:44:05.215 01:10:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:44:05.215 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:44:05.215 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:44:05.215 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:44:05.215 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:05.215 01:10:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:44:05.472 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:44:05.472 { 00:44:05.472 "nbd_device": "/dev/nbd0", 00:44:05.472 "bdev_name": "Nvme0n1p1" 00:44:05.472 }, 00:44:05.472 { 00:44:05.472 "nbd_device": "/dev/nbd1", 00:44:05.472 "bdev_name": "Nvme0n1p2" 00:44:05.472 } 00:44:05.472 ]' 00:44:05.472 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:44:05.472 { 00:44:05.472 "nbd_device": "/dev/nbd0", 00:44:05.472 "bdev_name": "Nvme0n1p1" 00:44:05.472 }, 00:44:05.472 { 00:44:05.472 "nbd_device": "/dev/nbd1", 00:44:05.472 "bdev_name": "Nvme0n1p2" 00:44:05.472 } 00:44:05.472 ]' 00:44:05.472 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:44:05.472 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:44:05.472 /dev/nbd1' 00:44:05.472 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:44:05.472 /dev/nbd1' 00:44:05.472 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:44:05.472 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2 00:44:05.472 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2 00:44:05.472 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2 00:44:05.472 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:44:05.472 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:44:05.472 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:44:05.472 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:44:05.472 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:44:05.472 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:44:05.472 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:44:05.472 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:44:05.472 256+0 records in 00:44:05.472 256+0 records out 00:44:05.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117255 s, 89.4 MB/s 00:44:05.472 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:44:05.472 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:44:05.729 256+0 records in 00:44:05.729 256+0 records out 00:44:05.729 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0746129 s, 14.1 MB/s 00:44:05.729 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:44:05.729 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:44:05.729 256+0 records in 00:44:05.729 256+0 records out 00:44:05.729 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0786186 s, 13.3 MB/s 00:44:05.729 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:44:05.729 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:44:05.729 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:44:05.729 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:44:05.729 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:44:05.729 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:44:05.729 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:44:05.729 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:44:05.729 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:44:05.729 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:44:05.729 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:44:05.729 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:44:05.729 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:44:05.729 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:05.729 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:44:05.729 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:44:05.729 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:44:05.729 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:44:05.729 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:44:05.986 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:44:05.986 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:44:05.986 01:10:28 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:44:05.986 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:44:05.986 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:44:05.986 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:44:05.986 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:44:05.986 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:44:05.986 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:44:05.987 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:44:06.245 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:44:06.245 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:44:06.245 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:44:06.245 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:44:06.245 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:44:06.245 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:44:06.245 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:44:06.245 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:44:06.245 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:44:06.245 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:06.245 01:10:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:44:06.504 01:10:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:44:06.504 01:10:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:44:06.504 01:10:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:44:06.504 01:10:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:44:06.504 01:10:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:44:06.504 01:10:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:44:06.504 01:10:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:44:06.504 01:10:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:44:06.504 01:10:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:44:06.763 01:10:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:44:06.763 01:10:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:44:06.763 01:10:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:44:06.763 01:10:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:44:06.763 01:10:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:06.763 01:10:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:44:06.763 01:10:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local 
nbd_list 00:44:06.763 01:10:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:44:06.763 01:10:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:44:07.022 malloc_lvol_verify 00:44:07.022 01:10:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:44:07.022 e15efdd7-0b2f-4393-a7c6-fe2ea9312149 00:44:07.022 01:10:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:44:07.281 254f6e5a-6185-48c6-a2c1-0baa694eb272 00:44:07.281 01:10:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:44:07.540 /dev/nbd0 00:44:07.540 01:10:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:44:07.540 mke2fs 1.46.5 (30-Dec-2021) 00:44:07.540 00:44:07.540 Filesystem too small for a journal 00:44:07.540 Discarding device blocks: 0/1024 done 00:44:07.540 Creating filesystem with 1024 4k blocks and 1024 inodes 00:44:07.540 00:44:07.540 Allocating group tables: 0/1 done 00:44:07.540 Writing inode tables: 0/1 done 00:44:07.540 Writing superblocks and filesystem accounting information: 0/1 done 00:44:07.540 00:44:07.540 01:10:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:44:07.540 01:10:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:44:07.540 01:10:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:07.540 01:10:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:44:07.540 01:10:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:44:07.540 01:10:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:44:07.540 01:10:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:44:07.540 01:10:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:44:07.799 01:10:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:44:07.799 01:10:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:44:07.799 01:10:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:44:07.799 01:10:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:44:07.799 01:10:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:44:07.799 01:10:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:44:07.799 01:10:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:44:07.799 01:10:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:44:07.799 01:10:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:44:07.799 01:10:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:44:07.799 01:10:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 170079 00:44:07.799 01:10:30 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@948 -- # '[' -z 170079 ']' 00:44:07.799 01:10:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 170079 00:44:07.799 01:10:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:44:07.799 01:10:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:44:07.799 01:10:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 170079 00:44:07.799 01:10:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:44:07.799 01:10:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:44:07.799 01:10:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 170079' 00:44:07.799 killing process with pid 170079 00:44:07.799 01:10:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@967 -- # kill 170079 00:44:07.799 01:10:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # wait 170079 00:44:09.172 01:10:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:44:09.172 00:44:09.172 real 0m7.137s 00:44:09.172 user 0m9.759s 00:44:09.172 sys 0m2.002s 00:44:09.172 01:10:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:09.172 ************************************ 00:44:09.172 END TEST bdev_nbd 00:44:09.172 ************************************ 00:44:09.172 01:10:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:44:09.172 01:10:31 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:44:09.172 01:10:31 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:44:09.172 01:10:31 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:44:09.172 01:10:31 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:44:09.172 skipping fio tests on NVMe due to multi-ns failures. 00:44:09.173 01:10:31 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:44:09.173 01:10:31 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:44:09.173 01:10:31 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:44:09.173 01:10:31 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:09.173 01:10:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:44:09.173 ************************************ 00:44:09.173 START TEST bdev_verify 00:44:09.173 ************************************ 00:44:09.173 01:10:31 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:44:09.173 [2024-07-25 01:10:31.819996] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
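The bdev_nbd stage that just finished drives everything through scripts/rpc.py on a dedicated socket served by the bdev_svc app launched at blockdev.sh@316. A condensed sketch of that round trip, assuming the nbd kernel module is loaded and an SPDK app is listening on /var/tmp/spdk-nbd.sock; the /tmp path stands in for the nbdrandtest file used above:

rpc="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc nbd_start_disk Nvme0n1p1 /dev/nbd0       # export the GPT bdev as /dev/nbd0
$rpc nbd_get_disks                            # JSON list of active nbd exports
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0       # read back and compare, as nbd_common.sh does
$rpc nbd_stop_disk /dev/nbd0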
00:44:09.173 [2024-07-25 01:10:31.820407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170333 ] 00:44:09.434 [2024-07-25 01:10:32.002791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:09.713 [2024-07-25 01:10:32.186324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:09.713 [2024-07-25 01:10:32.186330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:44:10.277 Running I/O for 5 seconds... 00:44:15.540 00:44:15.540 Latency(us) 00:44:15.540 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:15.540 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:44:15.540 Verification LBA range: start 0x0 length 0x4ff80 00:44:15.540 Nvme0n1p1 : 5.02 4945.03 19.32 0.00 0.00 25807.18 3900.95 27088.21 00:44:15.540 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:44:15.540 Verification LBA range: start 0x4ff80 length 0x4ff80 00:44:15.540 Nvme0n1p1 : 5.02 4968.16 19.41 0.00 0.00 25689.26 3448.44 28461.35 00:44:15.540 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:44:15.540 Verification LBA range: start 0x0 length 0x4ff7f 00:44:15.540 Nvme0n1p2 : 5.02 4943.29 19.31 0.00 0.00 25767.57 3495.25 26464.06 00:44:15.540 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:44:15.540 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:44:15.540 Nvme0n1p2 : 5.03 4966.75 19.40 0.00 0.00 25648.05 3261.20 22968.81 00:44:15.540 =================================================================================================================== 00:44:15.540 Total : 19823.23 77.43 0.00 0.00 25727.86 3261.20 28461.35 00:44:16.476 ************************************ 00:44:16.476 END TEST bdev_verify 00:44:16.476 ************************************ 00:44:16.476 00:44:16.476 real 0m7.285s 00:44:16.476 user 0m13.325s 00:44:16.476 sys 0m0.276s 00:44:16.476 01:10:39 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:16.476 01:10:39 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:44:16.476 01:10:39 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:44:16.476 01:10:39 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:44:16.476 01:10:39 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:16.476 01:10:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:44:16.476 ************************************ 00:44:16.476 START TEST bdev_verify_big_io 00:44:16.476 ************************************ 00:44:16.476 01:10:39 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:44:16.735 [2024-07-25 01:10:39.158219] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
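The verify results tabulated above come from a plain bdevperf run over the same JSON config; the flags below are copied from the invocation at blockdev.sh@776, so reproducing it by hand assumes only the same checkout and config file. The big-I/O pass starting below repeats it with -o 65536:

# 128 outstanding I/Os, 4 KiB I/O size, verify workload for 5 seconds,
# core mask 0x3 -- the command line shown in the trace.
./build/examples/bdevperf --json ./test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3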
00:44:16.735 [2024-07-25 01:10:39.158552] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170437 ] 00:44:16.735 [2024-07-25 01:10:39.322046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:16.994 [2024-07-25 01:10:39.502881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:44:16.994 [2024-07-25 01:10:39.502883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:17.562 Running I/O for 5 seconds... 00:44:22.831 00:44:22.831 Latency(us) 00:44:22.831 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:22.832 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:44:22.832 Verification LBA range: start 0x0 length 0x4ff8 00:44:22.832 Nvme0n1p1 : 5.21 380.91 23.81 0.00 0.00 330021.66 1927.07 409443.96 00:44:22.832 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:44:22.832 Verification LBA range: start 0x4ff8 length 0x4ff8 00:44:22.832 Nvme0n1p1 : 5.23 404.00 25.25 0.00 0.00 308400.80 12046.14 347528.05 00:44:22.832 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:44:22.832 Verification LBA range: start 0x0 length 0x4ff7 00:44:22.832 Nvme0n1p2 : 5.21 373.40 23.34 0.00 0.00 327332.48 4899.60 469362.59 00:44:22.832 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:44:22.832 Verification LBA range: start 0x4ff7 length 0x4ff7 00:44:22.832 Nvme0n1p2 : 5.24 414.52 25.91 0.00 0.00 294670.71 436.91 359511.77 00:44:22.832 =================================================================================================================== 00:44:22.832 Total : 1572.84 98.30 0.00 0.00 314480.04 436.91 469362.59 00:44:24.210 00:44:24.210 real 0m7.711s 00:44:24.210 user 0m14.212s 00:44:24.210 sys 0m0.276s 00:44:24.210 ************************************ 00:44:24.210 END TEST bdev_verify_big_io 00:44:24.210 ************************************ 00:44:24.210 01:10:46 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:24.210 01:10:46 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:44:24.470 01:10:46 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:24.470 01:10:46 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:44:24.470 01:10:46 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:24.470 01:10:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:44:24.470 ************************************ 00:44:24.470 START TEST bdev_write_zeroes 00:44:24.470 ************************************ 00:44:24.470 01:10:46 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:24.470 [2024-07-25 01:10:46.958859] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:44:24.470 [2024-07-25 01:10:46.959490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170548 ] 00:44:24.729 [2024-07-25 01:10:47.139716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:24.729 [2024-07-25 01:10:47.332740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:25.297 Running I/O for 1 seconds... 00:44:26.231 00:44:26.231 Latency(us) 00:44:26.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:26.231 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:44:26.231 Nvme0n1p1 : 1.01 28381.94 110.87 0.00 0.00 4500.82 2574.63 16227.96 00:44:26.231 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:44:26.231 Nvme0n1p2 : 1.01 28349.75 110.74 0.00 0.00 4500.29 2418.59 17850.76 00:44:26.231 =================================================================================================================== 00:44:26.231 Total : 56731.69 221.61 0.00 0.00 4500.56 2418.59 17850.76 00:44:27.608 ************************************ 00:44:27.608 END TEST bdev_write_zeroes 00:44:27.608 ************************************ 00:44:27.608 00:44:27.608 real 0m2.971s 00:44:27.608 user 0m2.614s 00:44:27.608 sys 0m0.256s 00:44:27.608 01:10:49 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:27.608 01:10:49 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:44:27.608 01:10:49 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:27.608 01:10:49 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:44:27.608 01:10:49 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:27.608 01:10:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:44:27.608 ************************************ 00:44:27.608 START TEST bdev_json_nonenclosed 00:44:27.608 ************************************ 00:44:27.608 01:10:49 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:27.608 [2024-07-25 01:10:49.982753] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:44:27.608 [2024-07-25 01:10:49.983051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170605 ] 00:44:27.608 [2024-07-25 01:10:50.143116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:27.867 [2024-07-25 01:10:50.327054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:27.867 [2024-07-25 01:10:50.327322] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:44:27.867 [2024-07-25 01:10:50.327452] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:44:27.867 [2024-07-25 01:10:50.327558] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:44:28.126 00:44:28.126 real 0m0.816s 00:44:28.126 user 0m0.567s 00:44:28.126 sys 0m0.145s 00:44:28.126 01:10:50 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:28.126 01:10:50 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:44:28.126 ************************************ 00:44:28.126 END TEST bdev_json_nonenclosed 00:44:28.126 ************************************ 00:44:28.385 01:10:50 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:28.385 01:10:50 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:44:28.385 01:10:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:28.385 01:10:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:44:28.385 ************************************ 00:44:28.385 START TEST bdev_json_nonarray 00:44:28.385 ************************************ 00:44:28.385 01:10:50 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:28.385 [2024-07-25 01:10:50.868951] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:44:28.385 [2024-07-25 01:10:50.869286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170643 ] 00:44:28.385 [2024-07-25 01:10:51.025318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:28.644 [2024-07-25 01:10:51.203920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:28.644 [2024-07-25 01:10:51.204215] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:44:28.644 [2024-07-25 01:10:51.204338] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:44:28.644 [2024-07-25 01:10:51.204392] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:44:29.211 00:44:29.211 real 0m0.802s 00:44:29.211 user 0m0.551s 00:44:29.211 sys 0m0.152s 00:44:29.211 01:10:51 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:29.211 01:10:51 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:44:29.211 ************************************ 00:44:29.211 END TEST bdev_json_nonarray 00:44:29.211 ************************************ 00:44:29.211 01:10:51 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:44:29.211 01:10:51 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:44:29.211 01:10:51 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:44:29.211 01:10:51 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:29.211 01:10:51 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:29.211 01:10:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:44:29.211 ************************************ 00:44:29.211 START TEST bdev_gpt_uuid 00:44:29.211 ************************************ 00:44:29.211 01:10:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1123 -- # bdev_gpt_uuid 00:44:29.211 01:10:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:44:29.211 01:10:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:44:29.211 01:10:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=170673 00:44:29.211 01:10:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:44:29.211 01:10:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:44:29.211 01:10:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 170673 00:44:29.211 01:10:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@829 -- # '[' -z 170673 ']' 00:44:29.211 01:10:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:29.211 01:10:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # local max_retries=100 00:44:29.211 01:10:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:29.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:29.211 01:10:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # xtrace_disable 00:44:29.211 01:10:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:44:29.211 [2024-07-25 01:10:51.780022] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:44:29.211 [2024-07-25 01:10:51.780225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170673 ] 00:44:29.469 [2024-07-25 01:10:51.957396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:29.727 [2024-07-25 01:10:52.130136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:30.293 01:10:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:44:30.294 01:10:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # return 0 00:44:30.294 01:10:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:44:30.294 01:10:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:30.294 01:10:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:44:30.553 Some configs were skipped because the RPC state that can call them passed over. 00:44:30.553 01:10:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:30.553 01:10:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:44:30.553 01:10:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:30.553 01:10:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:44:30.553 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:30.553 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:44:30.553 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:30.553 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:44:30.553 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:30.553 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:44:30.553 { 00:44:30.553 "name": "Nvme0n1p1", 00:44:30.553 "aliases": [ 00:44:30.553 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:44:30.553 ], 00:44:30.553 "product_name": "GPT Disk", 00:44:30.553 "block_size": 4096, 00:44:30.553 "num_blocks": 655104, 00:44:30.553 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:44:30.553 "assigned_rate_limits": { 00:44:30.553 "rw_ios_per_sec": 0, 00:44:30.553 "rw_mbytes_per_sec": 0, 00:44:30.553 "r_mbytes_per_sec": 0, 00:44:30.553 "w_mbytes_per_sec": 0 00:44:30.553 }, 00:44:30.553 "claimed": false, 00:44:30.553 "zoned": false, 00:44:30.553 "supported_io_types": { 00:44:30.553 "read": true, 00:44:30.553 "write": true, 00:44:30.553 "unmap": true, 00:44:30.553 "flush": true, 00:44:30.553 "reset": true, 00:44:30.553 "nvme_admin": false, 00:44:30.553 "nvme_io": false, 00:44:30.553 "nvme_io_md": false, 00:44:30.553 "write_zeroes": true, 00:44:30.553 "zcopy": false, 00:44:30.553 "get_zone_info": false, 00:44:30.553 "zone_management": false, 00:44:30.553 "zone_append": false, 00:44:30.553 "compare": true, 00:44:30.553 "compare_and_write": false, 00:44:30.553 "abort": true, 00:44:30.553 "seek_hole": false, 00:44:30.553 "seek_data": false, 00:44:30.553 "copy": true, 00:44:30.553 "nvme_iov_md": false 00:44:30.553 }, 00:44:30.553 "driver_specific": { 
00:44:30.553 "gpt": { 00:44:30.553 "base_bdev": "Nvme0n1", 00:44:30.553 "offset_blocks": 256, 00:44:30.553 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:44:30.553 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:44:30.553 "partition_name": "SPDK_TEST_first" 00:44:30.553 } 00:44:30.553 } 00:44:30.553 } 00:44:30.553 ]' 00:44:30.553 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:44:30.553 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:44:30.553 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:44:30.553 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:44:30.553 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:44:30.553 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:44:30.553 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:44:30.553 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:30.553 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:44:30.553 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:30.553 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:44:30.553 { 00:44:30.553 "name": "Nvme0n1p2", 00:44:30.553 "aliases": [ 00:44:30.553 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:44:30.553 ], 00:44:30.553 "product_name": "GPT Disk", 00:44:30.553 "block_size": 4096, 00:44:30.553 "num_blocks": 655103, 00:44:30.553 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:44:30.553 "assigned_rate_limits": { 00:44:30.553 "rw_ios_per_sec": 0, 00:44:30.553 "rw_mbytes_per_sec": 0, 00:44:30.553 "r_mbytes_per_sec": 0, 00:44:30.553 "w_mbytes_per_sec": 0 00:44:30.553 }, 00:44:30.553 "claimed": false, 00:44:30.553 "zoned": false, 00:44:30.553 "supported_io_types": { 00:44:30.553 "read": true, 00:44:30.553 "write": true, 00:44:30.553 "unmap": true, 00:44:30.553 "flush": true, 00:44:30.553 "reset": true, 00:44:30.553 "nvme_admin": false, 00:44:30.553 "nvme_io": false, 00:44:30.553 "nvme_io_md": false, 00:44:30.553 "write_zeroes": true, 00:44:30.553 "zcopy": false, 00:44:30.553 "get_zone_info": false, 00:44:30.553 "zone_management": false, 00:44:30.553 "zone_append": false, 00:44:30.553 "compare": true, 00:44:30.553 "compare_and_write": false, 00:44:30.553 "abort": true, 00:44:30.553 "seek_hole": false, 00:44:30.553 "seek_data": false, 00:44:30.553 "copy": true, 00:44:30.553 "nvme_iov_md": false 00:44:30.553 }, 00:44:30.553 "driver_specific": { 00:44:30.553 "gpt": { 00:44:30.553 "base_bdev": "Nvme0n1", 00:44:30.553 "offset_blocks": 655360, 00:44:30.553 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:44:30.553 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:44:30.553 "partition_name": "SPDK_TEST_second" 00:44:30.553 } 00:44:30.553 } 00:44:30.553 } 00:44:30.553 ]' 00:44:30.553 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:44:30.812 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:44:30.812 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:44:30.812 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:44:30.812 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:44:30.812 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:44:30.812 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 170673 00:44:30.812 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@948 -- # '[' -z 170673 ']' 00:44:30.812 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # kill -0 170673 00:44:30.812 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # uname 00:44:30.812 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:44:30.812 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 170673 00:44:30.812 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:44:30.812 killing process with pid 170673 00:44:30.812 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:44:30.812 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 170673' 00:44:30.812 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@967 -- # kill 170673 00:44:30.812 01:10:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # wait 170673 00:44:33.344 00:44:33.344 real 0m3.968s 00:44:33.344 user 0m4.137s 00:44:33.344 sys 0m0.496s 00:44:33.344 01:10:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:33.344 01:10:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:44:33.344 ************************************ 00:44:33.344 END TEST bdev_gpt_uuid 00:44:33.344 ************************************ 00:44:33.344 01:10:55 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:44:33.344 01:10:55 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:44:33.344 01:10:55 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:44:33.344 01:10:55 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:44:33.344 01:10:55 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:44:33.344 01:10:55 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:44:33.344 01:10:55 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:44:33.344 01:10:55 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:44:33.344 01:10:55 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:44:33.603 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:44:33.603 Waiting for block devices as requested 00:44:33.603 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:44:33.861 01:10:56 blockdev_nvme_gpt -- 
bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:44:33.861 01:10:56 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:44:33.861 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:44:33.861 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:44:33.861 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:44:33.861 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:44:33.861 01:10:56 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:44:33.861 00:44:33.861 real 0m45.059s 00:44:33.861 user 1m2.037s 00:44:33.861 sys 0m7.166s 00:44:33.861 01:10:56 blockdev_nvme_gpt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:33.861 01:10:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:44:33.861 ************************************ 00:44:33.861 END TEST blockdev_nvme_gpt 00:44:33.861 ************************************ 00:44:33.861 01:10:56 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:44:33.861 01:10:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:33.861 01:10:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:33.861 01:10:56 -- common/autotest_common.sh@10 -- # set +x 00:44:33.861 ************************************ 00:44:33.861 START TEST nvme 00:44:33.861 ************************************ 00:44:33.861 01:10:56 nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:44:33.861 * Looking for test storage... 00:44:33.861 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:44:33.861 01:10:56 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:44:34.428 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:44:34.686 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:44:35.621 01:10:58 nvme -- nvme/nvme.sh@79 -- # uname 00:44:35.621 01:10:58 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:44:35.621 01:10:58 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:44:35.621 01:10:58 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:44:35.621 01:10:58 nvme -- common/autotest_common.sh@1080 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:44:35.621 01:10:58 nvme -- common/autotest_common.sh@1066 -- # _randomize_va_space=2 00:44:35.621 01:10:58 nvme -- common/autotest_common.sh@1067 -- # echo 0 00:44:35.621 01:10:58 nvme -- common/autotest_common.sh@1069 -- # stubpid=171096 00:44:35.621 01:10:58 nvme -- common/autotest_common.sh@1070 -- # echo Waiting for stub to ready for secondary processes... 00:44:35.621 Waiting for stub to ready for secondary processes... 00:44:35.621 01:10:58 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:44:35.621 01:10:58 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/171096 ]] 00:44:35.621 01:10:58 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:44:35.621 01:10:58 nvme -- common/autotest_common.sh@1068 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:44:35.621 [2024-07-25 01:10:58.127658] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:44:35.621 [2024-07-25 01:10:58.127876] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:44:36.555 01:10:59 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:44:36.555 01:10:59 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/171096 ]] 00:44:36.555 01:10:59 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:44:36.555 [2024-07-25 01:10:59.152079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:44:36.813 [2024-07-25 01:10:59.426392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:44:36.813 [2024-07-25 01:10:59.426273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:44:36.813 [2024-07-25 01:10:59.426393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:44:36.813 [2024-07-25 01:10:59.436467] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:44:36.813 [2024-07-25 01:10:59.436573] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:44:36.813 [2024-07-25 01:10:59.446571] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:44:36.813 [2024-07-25 01:10:59.447422] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:44:37.748 01:11:00 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:44:37.748 01:11:00 nvme -- common/autotest_common.sh@1076 -- # echo done. 00:44:37.748 done. 00:44:37.748 01:11:00 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:44:37.748 01:11:00 nvme -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:44:37.748 01:11:00 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:37.748 01:11:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:37.748 ************************************ 00:44:37.748 START TEST nvme_reset 00:44:37.748 ************************************ 00:44:37.748 01:11:00 nvme.nvme_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:44:38.007 Initializing NVMe Controllers 00:44:38.007 Skipping QEMU NVMe SSD at 0000:00:10.0 00:44:38.007 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:44:38.007 00:44:38.007 real 0m0.334s 00:44:38.007 user 0m0.100s 00:44:38.007 sys 0m0.172s 00:44:38.007 01:11:00 nvme.nvme_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:38.007 01:11:00 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:44:38.007 ************************************ 00:44:38.007 END TEST nvme_reset 00:44:38.007 ************************************ 00:44:38.007 01:11:00 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:44:38.007 01:11:00 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:38.007 01:11:00 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:38.007 01:11:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:38.007 ************************************ 00:44:38.007 START TEST nvme_identify 00:44:38.007 ************************************ 00:44:38.007 01:11:00 nvme.nvme_identify -- common/autotest_common.sh@1123 -- # nvme_identify 
00:44:38.007 01:11:00 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:44:38.007 01:11:00 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:44:38.007 01:11:00 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:44:38.007 01:11:00 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:44:38.007 01:11:00 nvme.nvme_identify -- common/autotest_common.sh@1511 -- # bdfs=() 00:44:38.007 01:11:00 nvme.nvme_identify -- common/autotest_common.sh@1511 -- # local bdfs 00:44:38.007 01:11:00 nvme.nvme_identify -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:44:38.007 01:11:00 nvme.nvme_identify -- common/autotest_common.sh@1512 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:44:38.007 01:11:00 nvme.nvme_identify -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:44:38.007 01:11:00 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:44:38.007 01:11:00 nvme.nvme_identify -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:00:10.0 00:44:38.007 01:11:00 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:44:38.267 [2024-07-25 01:11:00.853512] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 171132 terminated unexpected 00:44:38.267 ===================================================== 00:44:38.267 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:44:38.267 ===================================================== 00:44:38.267 Controller Capabilities/Features 00:44:38.267 ================================ 00:44:38.267 Vendor ID: 1b36 00:44:38.267 Subsystem Vendor ID: 1af4 00:44:38.267 Serial Number: 12340 00:44:38.267 Model Number: QEMU NVMe Ctrl 00:44:38.267 Firmware Version: 8.0.0 00:44:38.267 Recommended Arb Burst: 6 00:44:38.267 IEEE OUI Identifier: 00 54 52 00:44:38.267 Multi-path I/O 00:44:38.267 May have multiple subsystem ports: No 00:44:38.267 May have multiple controllers: No 00:44:38.267 Associated with SR-IOV VF: No 00:44:38.267 Max Data Transfer Size: 524288 00:44:38.267 Max Number of Namespaces: 256 00:44:38.267 Max Number of I/O Queues: 64 00:44:38.267 NVMe Specification Version (VS): 1.4 00:44:38.267 NVMe Specification Version (Identify): 1.4 00:44:38.267 Maximum Queue Entries: 2048 00:44:38.267 Contiguous Queues Required: Yes 00:44:38.267 Arbitration Mechanisms Supported 00:44:38.267 Weighted Round Robin: Not Supported 00:44:38.267 Vendor Specific: Not Supported 00:44:38.267 Reset Timeout: 7500 ms 00:44:38.267 Doorbell Stride: 4 bytes 00:44:38.267 NVM Subsystem Reset: Not Supported 00:44:38.267 Command Sets Supported 00:44:38.267 NVM Command Set: Supported 00:44:38.267 Boot Partition: Not Supported 00:44:38.267 Memory Page Size Minimum: 4096 bytes 00:44:38.267 Memory Page Size Maximum: 65536 bytes 00:44:38.267 Persistent Memory Region: Not Supported 00:44:38.267 Optional Asynchronous Events Supported 00:44:38.267 Namespace Attribute Notices: Supported 00:44:38.267 Firmware Activation Notices: Not Supported 00:44:38.267 ANA Change Notices: Not Supported 00:44:38.267 PLE Aggregate Log Change Notices: Not Supported 00:44:38.267 LBA Status Info Alert Notices: Not Supported 00:44:38.267 EGE Aggregate Log Change Notices: Not Supported 00:44:38.267 Normal NVM Subsystem Shutdown event: Not Supported 00:44:38.267 Zone Descriptor Change Notices: Not Supported 00:44:38.267 Discovery Log Change Notices: Not Supported 00:44:38.267 Controller Attributes 
00:44:38.267 128-bit Host Identifier: Not Supported 00:44:38.267 Non-Operational Permissive Mode: Not Supported 00:44:38.267 NVM Sets: Not Supported 00:44:38.267 Read Recovery Levels: Not Supported 00:44:38.267 Endurance Groups: Not Supported 00:44:38.267 Predictable Latency Mode: Not Supported 00:44:38.267 Traffic Based Keep ALive: Not Supported 00:44:38.267 Namespace Granularity: Not Supported 00:44:38.267 SQ Associations: Not Supported 00:44:38.267 UUID List: Not Supported 00:44:38.267 Multi-Domain Subsystem: Not Supported 00:44:38.267 Fixed Capacity Management: Not Supported 00:44:38.267 Variable Capacity Management: Not Supported 00:44:38.267 Delete Endurance Group: Not Supported 00:44:38.267 Delete NVM Set: Not Supported 00:44:38.267 Extended LBA Formats Supported: Supported 00:44:38.267 Flexible Data Placement Supported: Not Supported 00:44:38.267 00:44:38.267 Controller Memory Buffer Support 00:44:38.267 ================================ 00:44:38.267 Supported: No 00:44:38.267 00:44:38.267 Persistent Memory Region Support 00:44:38.267 ================================ 00:44:38.267 Supported: No 00:44:38.267 00:44:38.267 Admin Command Set Attributes 00:44:38.267 ============================ 00:44:38.267 Security Send/Receive: Not Supported 00:44:38.267 Format NVM: Supported 00:44:38.267 Firmware Activate/Download: Not Supported 00:44:38.267 Namespace Management: Supported 00:44:38.267 Device Self-Test: Not Supported 00:44:38.267 Directives: Supported 00:44:38.267 NVMe-MI: Not Supported 00:44:38.267 Virtualization Management: Not Supported 00:44:38.267 Doorbell Buffer Config: Supported 00:44:38.267 Get LBA Status Capability: Not Supported 00:44:38.267 Command & Feature Lockdown Capability: Not Supported 00:44:38.267 Abort Command Limit: 4 00:44:38.267 Async Event Request Limit: 4 00:44:38.267 Number of Firmware Slots: N/A 00:44:38.267 Firmware Slot 1 Read-Only: N/A 00:44:38.267 Firmware Activation Without Reset: N/A 00:44:38.267 Multiple Update Detection Support: N/A 00:44:38.267 Firmware Update Granularity: No Information Provided 00:44:38.267 Per-Namespace SMART Log: Yes 00:44:38.267 Asymmetric Namespace Access Log Page: Not Supported 00:44:38.267 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:44:38.267 Command Effects Log Page: Supported 00:44:38.267 Get Log Page Extended Data: Supported 00:44:38.267 Telemetry Log Pages: Not Supported 00:44:38.267 Persistent Event Log Pages: Not Supported 00:44:38.267 Supported Log Pages Log Page: May Support 00:44:38.267 Commands Supported & Effects Log Page: Not Supported 00:44:38.267 Feature Identifiers & Effects Log Page:May Support 00:44:38.267 NVMe-MI Commands & Effects Log Page: May Support 00:44:38.267 Data Area 4 for Telemetry Log: Not Supported 00:44:38.267 Error Log Page Entries Supported: 1 00:44:38.267 Keep Alive: Not Supported 00:44:38.267 00:44:38.267 NVM Command Set Attributes 00:44:38.267 ========================== 00:44:38.267 Submission Queue Entry Size 00:44:38.267 Max: 64 00:44:38.267 Min: 64 00:44:38.267 Completion Queue Entry Size 00:44:38.267 Max: 16 00:44:38.267 Min: 16 00:44:38.267 Number of Namespaces: 256 00:44:38.267 Compare Command: Supported 00:44:38.267 Write Uncorrectable Command: Not Supported 00:44:38.267 Dataset Management Command: Supported 00:44:38.267 Write Zeroes Command: Supported 00:44:38.267 Set Features Save Field: Supported 00:44:38.267 Reservations: Not Supported 00:44:38.267 Timestamp: Supported 00:44:38.267 Copy: Supported 00:44:38.267 Volatile Write Cache: Present 00:44:38.267 Atomic Write Unit 
(Normal): 1 00:44:38.267 Atomic Write Unit (PFail): 1 00:44:38.267 Atomic Compare & Write Unit: 1 00:44:38.267 Fused Compare & Write: Not Supported 00:44:38.267 Scatter-Gather List 00:44:38.267 SGL Command Set: Supported 00:44:38.267 SGL Keyed: Not Supported 00:44:38.267 SGL Bit Bucket Descriptor: Not Supported 00:44:38.267 SGL Metadata Pointer: Not Supported 00:44:38.267 Oversized SGL: Not Supported 00:44:38.267 SGL Metadata Address: Not Supported 00:44:38.267 SGL Offset: Not Supported 00:44:38.267 Transport SGL Data Block: Not Supported 00:44:38.267 Replay Protected Memory Block: Not Supported 00:44:38.267 00:44:38.267 Firmware Slot Information 00:44:38.267 ========================= 00:44:38.267 Active slot: 1 00:44:38.267 Slot 1 Firmware Revision: 1.0 00:44:38.267 00:44:38.267 00:44:38.267 Commands Supported and Effects 00:44:38.267 ============================== 00:44:38.267 Admin Commands 00:44:38.267 -------------- 00:44:38.267 Delete I/O Submission Queue (00h): Supported 00:44:38.267 Create I/O Submission Queue (01h): Supported 00:44:38.267 Get Log Page (02h): Supported 00:44:38.267 Delete I/O Completion Queue (04h): Supported 00:44:38.267 Create I/O Completion Queue (05h): Supported 00:44:38.267 Identify (06h): Supported 00:44:38.267 Abort (08h): Supported 00:44:38.267 Set Features (09h): Supported 00:44:38.267 Get Features (0Ah): Supported 00:44:38.267 Asynchronous Event Request (0Ch): Supported 00:44:38.267 Namespace Attachment (15h): Supported NS-Inventory-Change 00:44:38.267 Directive Send (19h): Supported 00:44:38.267 Directive Receive (1Ah): Supported 00:44:38.267 Virtualization Management (1Ch): Supported 00:44:38.267 Doorbell Buffer Config (7Ch): Supported 00:44:38.267 Format NVM (80h): Supported LBA-Change 00:44:38.267 I/O Commands 00:44:38.267 ------------ 00:44:38.267 Flush (00h): Supported LBA-Change 00:44:38.267 Write (01h): Supported LBA-Change 00:44:38.267 Read (02h): Supported 00:44:38.267 Compare (05h): Supported 00:44:38.267 Write Zeroes (08h): Supported LBA-Change 00:44:38.267 Dataset Management (09h): Supported LBA-Change 00:44:38.267 Unknown (0Ch): Supported 00:44:38.267 Unknown (12h): Supported 00:44:38.267 Copy (19h): Supported LBA-Change 00:44:38.267 Unknown (1Dh): Supported LBA-Change 00:44:38.267 00:44:38.267 Error Log 00:44:38.267 ========= 00:44:38.267 00:44:38.267 Arbitration 00:44:38.267 =========== 00:44:38.267 Arbitration Burst: no limit 00:44:38.267 00:44:38.267 Power Management 00:44:38.267 ================ 00:44:38.267 Number of Power States: 1 00:44:38.267 Current Power State: Power State #0 00:44:38.267 Power State #0: 00:44:38.267 Max Power: 25.00 W 00:44:38.267 Non-Operational State: Operational 00:44:38.267 Entry Latency: 16 microseconds 00:44:38.267 Exit Latency: 4 microseconds 00:44:38.267 Relative Read Throughput: 0 00:44:38.267 Relative Read Latency: 0 00:44:38.267 Relative Write Throughput: 0 00:44:38.267 Relative Write Latency: 0 00:44:38.267 Idle Power: Not Reported 00:44:38.268 Active Power: Not Reported 00:44:38.268 Non-Operational Permissive Mode: Not Supported 00:44:38.268 00:44:38.268 Health Information 00:44:38.268 ================== 00:44:38.268 Critical Warnings: 00:44:38.268 Available Spare Space: OK 00:44:38.268 Temperature: OK 00:44:38.268 Device Reliability: OK 00:44:38.268 Read Only: No 00:44:38.268 Volatile Memory Backup: OK 00:44:38.268 Current Temperature: 323 Kelvin (50 Celsius) 00:44:38.268 Temperature Threshold: 343 Kelvin (70 Celsius) 00:44:38.268 Available Spare: 0% 00:44:38.268 Available Spare Threshold: 0% 
00:44:38.268 Life Percentage Used: 0% 00:44:38.268 Data Units Read: 4570 00:44:38.268 Data Units Written: 4240 00:44:38.268 Host Read Commands: 238808 00:44:38.268 Host Write Commands: 251967 00:44:38.268 Controller Busy Time: 0 minutes 00:44:38.268 Power Cycles: 0 00:44:38.268 Power On Hours: 0 hours 00:44:38.268 Unsafe Shutdowns: 0 00:44:38.268 Unrecoverable Media Errors: 0 00:44:38.268 Lifetime Error Log Entries: 0 00:44:38.268 Warning Temperature Time: 0 minutes 00:44:38.268 Critical Temperature Time: 0 minutes 00:44:38.268 00:44:38.268 Number of Queues 00:44:38.268 ================ 00:44:38.268 Number of I/O Submission Queues: 64 00:44:38.268 Number of I/O Completion Queues: 64 00:44:38.268 00:44:38.268 ZNS Specific Controller Data 00:44:38.268 ============================ 00:44:38.268 Zone Append Size Limit: 0 00:44:38.268 00:44:38.268 00:44:38.268 Active Namespaces 00:44:38.268 ================= 00:44:38.268 Namespace ID:1 00:44:38.268 Error Recovery Timeout: Unlimited 00:44:38.268 Command Set Identifier: NVM (00h) 00:44:38.268 Deallocate: Supported 00:44:38.268 Deallocated/Unwritten Error: Supported 00:44:38.268 Deallocated Read Value: All 0x00 00:44:38.268 Deallocate in Write Zeroes: Not Supported 00:44:38.268 Deallocated Guard Field: 0xFFFF 00:44:38.268 Flush: Supported 00:44:38.268 Reservation: Not Supported 00:44:38.268 Namespace Sharing Capabilities: Private 00:44:38.268 Size (in LBAs): 1310720 (5GiB) 00:44:38.268 Capacity (in LBAs): 1310720 (5GiB) 00:44:38.268 Utilization (in LBAs): 1310720 (5GiB) 00:44:38.268 Thin Provisioning: Not Supported 00:44:38.268 Per-NS Atomic Units: No 00:44:38.268 Maximum Single Source Range Length: 128 00:44:38.268 Maximum Copy Length: 128 00:44:38.268 Maximum Source Range Count: 128 00:44:38.268 NGUID/EUI64 Never Reused: No 00:44:38.268 Namespace Write Protected: No 00:44:38.268 Number of LBA Formats: 8 00:44:38.268 Current LBA Format: LBA Format #04 00:44:38.268 LBA Format #00: Data Size: 512 Metadata Size: 0 00:44:38.268 LBA Format #01: Data Size: 512 Metadata Size: 8 00:44:38.268 LBA Format #02: Data Size: 512 Metadata Size: 16 00:44:38.268 LBA Format #03: Data Size: 512 Metadata Size: 64 00:44:38.268 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:44:38.268 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:44:38.268 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:44:38.268 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:44:38.268 00:44:38.268 NVM Specific Namespace Data 00:44:38.268 =========================== 00:44:38.268 Logical Block Storage Tag Mask: 0 00:44:38.268 Protection Information Capabilities: 00:44:38.268 16b Guard Protection Information Storage Tag Support: No 00:44:38.268 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:44:38.268 Storage Tag Check Read Support: No 00:44:38.268 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:38.268 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:38.268 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:38.268 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:38.268 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:38.268 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:38.268 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 
00:44:38.268 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:38.527 01:11:00 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:44:38.527 01:11:00 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:44:38.786 ===================================================== 00:44:38.786 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:44:38.786 ===================================================== 00:44:38.786 Controller Capabilities/Features 00:44:38.786 ================================ 00:44:38.786 Vendor ID: 1b36 00:44:38.786 Subsystem Vendor ID: 1af4 00:44:38.786 Serial Number: 12340 00:44:38.786 Model Number: QEMU NVMe Ctrl 00:44:38.786 Firmware Version: 8.0.0 00:44:38.786 Recommended Arb Burst: 6 00:44:38.786 IEEE OUI Identifier: 00 54 52 00:44:38.786 Multi-path I/O 00:44:38.786 May have multiple subsystem ports: No 00:44:38.786 May have multiple controllers: No 00:44:38.786 Associated with SR-IOV VF: No 00:44:38.786 Max Data Transfer Size: 524288 00:44:38.786 Max Number of Namespaces: 256 00:44:38.786 Max Number of I/O Queues: 64 00:44:38.786 NVMe Specification Version (VS): 1.4 00:44:38.786 NVMe Specification Version (Identify): 1.4 00:44:38.786 Maximum Queue Entries: 2048 00:44:38.786 Contiguous Queues Required: Yes 00:44:38.786 Arbitration Mechanisms Supported 00:44:38.786 Weighted Round Robin: Not Supported 00:44:38.786 Vendor Specific: Not Supported 00:44:38.786 Reset Timeout: 7500 ms 00:44:38.786 Doorbell Stride: 4 bytes 00:44:38.786 NVM Subsystem Reset: Not Supported 00:44:38.786 Command Sets Supported 00:44:38.786 NVM Command Set: Supported 00:44:38.786 Boot Partition: Not Supported 00:44:38.786 Memory Page Size Minimum: 4096 bytes 00:44:38.786 Memory Page Size Maximum: 65536 bytes 00:44:38.786 Persistent Memory Region: Not Supported 00:44:38.786 Optional Asynchronous Events Supported 00:44:38.786 Namespace Attribute Notices: Supported 00:44:38.786 Firmware Activation Notices: Not Supported 00:44:38.786 ANA Change Notices: Not Supported 00:44:38.786 PLE Aggregate Log Change Notices: Not Supported 00:44:38.786 LBA Status Info Alert Notices: Not Supported 00:44:38.786 EGE Aggregate Log Change Notices: Not Supported 00:44:38.786 Normal NVM Subsystem Shutdown event: Not Supported 00:44:38.786 Zone Descriptor Change Notices: Not Supported 00:44:38.786 Discovery Log Change Notices: Not Supported 00:44:38.786 Controller Attributes 00:44:38.786 128-bit Host Identifier: Not Supported 00:44:38.786 Non-Operational Permissive Mode: Not Supported 00:44:38.786 NVM Sets: Not Supported 00:44:38.786 Read Recovery Levels: Not Supported 00:44:38.786 Endurance Groups: Not Supported 00:44:38.786 Predictable Latency Mode: Not Supported 00:44:38.786 Traffic Based Keep ALive: Not Supported 00:44:38.786 Namespace Granularity: Not Supported 00:44:38.786 SQ Associations: Not Supported 00:44:38.786 UUID List: Not Supported 00:44:38.786 Multi-Domain Subsystem: Not Supported 00:44:38.786 Fixed Capacity Management: Not Supported 00:44:38.786 Variable Capacity Management: Not Supported 00:44:38.786 Delete Endurance Group: Not Supported 00:44:38.786 Delete NVM Set: Not Supported 00:44:38.786 Extended LBA Formats Supported: Supported 00:44:38.786 Flexible Data Placement Supported: Not Supported 00:44:38.786 00:44:38.786 Controller Memory Buffer Support 00:44:38.786 ================================ 00:44:38.786 Supported: No 00:44:38.786 00:44:38.786 Persistent 
Memory Region Support 00:44:38.786 ================================ 00:44:38.786 Supported: No 00:44:38.786 00:44:38.786 Admin Command Set Attributes 00:44:38.786 ============================ 00:44:38.786 Security Send/Receive: Not Supported 00:44:38.786 Format NVM: Supported 00:44:38.786 Firmware Activate/Download: Not Supported 00:44:38.786 Namespace Management: Supported 00:44:38.786 Device Self-Test: Not Supported 00:44:38.786 Directives: Supported 00:44:38.786 NVMe-MI: Not Supported 00:44:38.786 Virtualization Management: Not Supported 00:44:38.786 Doorbell Buffer Config: Supported 00:44:38.786 Get LBA Status Capability: Not Supported 00:44:38.786 Command & Feature Lockdown Capability: Not Supported 00:44:38.786 Abort Command Limit: 4 00:44:38.786 Async Event Request Limit: 4 00:44:38.786 Number of Firmware Slots: N/A 00:44:38.786 Firmware Slot 1 Read-Only: N/A 00:44:38.786 Firmware Activation Without Reset: N/A 00:44:38.786 Multiple Update Detection Support: N/A 00:44:38.786 Firmware Update Granularity: No Information Provided 00:44:38.786 Per-Namespace SMART Log: Yes 00:44:38.786 Asymmetric Namespace Access Log Page: Not Supported 00:44:38.786 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:44:38.786 Command Effects Log Page: Supported 00:44:38.786 Get Log Page Extended Data: Supported 00:44:38.786 Telemetry Log Pages: Not Supported 00:44:38.786 Persistent Event Log Pages: Not Supported 00:44:38.786 Supported Log Pages Log Page: May Support 00:44:38.786 Commands Supported & Effects Log Page: Not Supported 00:44:38.786 Feature Identifiers & Effects Log Page:May Support 00:44:38.786 NVMe-MI Commands & Effects Log Page: May Support 00:44:38.786 Data Area 4 for Telemetry Log: Not Supported 00:44:38.786 Error Log Page Entries Supported: 1 00:44:38.786 Keep Alive: Not Supported 00:44:38.786 00:44:38.786 NVM Command Set Attributes 00:44:38.786 ========================== 00:44:38.786 Submission Queue Entry Size 00:44:38.786 Max: 64 00:44:38.786 Min: 64 00:44:38.786 Completion Queue Entry Size 00:44:38.786 Max: 16 00:44:38.786 Min: 16 00:44:38.786 Number of Namespaces: 256 00:44:38.786 Compare Command: Supported 00:44:38.786 Write Uncorrectable Command: Not Supported 00:44:38.786 Dataset Management Command: Supported 00:44:38.786 Write Zeroes Command: Supported 00:44:38.786 Set Features Save Field: Supported 00:44:38.786 Reservations: Not Supported 00:44:38.786 Timestamp: Supported 00:44:38.786 Copy: Supported 00:44:38.786 Volatile Write Cache: Present 00:44:38.786 Atomic Write Unit (Normal): 1 00:44:38.786 Atomic Write Unit (PFail): 1 00:44:38.786 Atomic Compare & Write Unit: 1 00:44:38.786 Fused Compare & Write: Not Supported 00:44:38.786 Scatter-Gather List 00:44:38.786 SGL Command Set: Supported 00:44:38.786 SGL Keyed: Not Supported 00:44:38.786 SGL Bit Bucket Descriptor: Not Supported 00:44:38.786 SGL Metadata Pointer: Not Supported 00:44:38.786 Oversized SGL: Not Supported 00:44:38.786 SGL Metadata Address: Not Supported 00:44:38.786 SGL Offset: Not Supported 00:44:38.786 Transport SGL Data Block: Not Supported 00:44:38.786 Replay Protected Memory Block: Not Supported 00:44:38.786 00:44:38.786 Firmware Slot Information 00:44:38.786 ========================= 00:44:38.786 Active slot: 1 00:44:38.786 Slot 1 Firmware Revision: 1.0 00:44:38.786 00:44:38.786 00:44:38.786 Commands Supported and Effects 00:44:38.786 ============================== 00:44:38.786 Admin Commands 00:44:38.786 -------------- 00:44:38.786 Delete I/O Submission Queue (00h): Supported 00:44:38.786 Create I/O Submission 
Queue (01h): Supported 00:44:38.786 Get Log Page (02h): Supported 00:44:38.786 Delete I/O Completion Queue (04h): Supported 00:44:38.786 Create I/O Completion Queue (05h): Supported 00:44:38.786 Identify (06h): Supported 00:44:38.786 Abort (08h): Supported 00:44:38.786 Set Features (09h): Supported 00:44:38.786 Get Features (0Ah): Supported 00:44:38.786 Asynchronous Event Request (0Ch): Supported 00:44:38.786 Namespace Attachment (15h): Supported NS-Inventory-Change 00:44:38.786 Directive Send (19h): Supported 00:44:38.786 Directive Receive (1Ah): Supported 00:44:38.786 Virtualization Management (1Ch): Supported 00:44:38.786 Doorbell Buffer Config (7Ch): Supported 00:44:38.786 Format NVM (80h): Supported LBA-Change 00:44:38.786 I/O Commands 00:44:38.786 ------------ 00:44:38.786 Flush (00h): Supported LBA-Change 00:44:38.786 Write (01h): Supported LBA-Change 00:44:38.786 Read (02h): Supported 00:44:38.786 Compare (05h): Supported 00:44:38.786 Write Zeroes (08h): Supported LBA-Change 00:44:38.786 Dataset Management (09h): Supported LBA-Change 00:44:38.786 Unknown (0Ch): Supported 00:44:38.786 Unknown (12h): Supported 00:44:38.786 Copy (19h): Supported LBA-Change 00:44:38.786 Unknown (1Dh): Supported LBA-Change 00:44:38.786 00:44:38.786 Error Log 00:44:38.786 ========= 00:44:38.786 00:44:38.786 Arbitration 00:44:38.786 =========== 00:44:38.786 Arbitration Burst: no limit 00:44:38.786 00:44:38.786 Power Management 00:44:38.786 ================ 00:44:38.786 Number of Power States: 1 00:44:38.786 Current Power State: Power State #0 00:44:38.786 Power State #0: 00:44:38.786 Max Power: 25.00 W 00:44:38.786 Non-Operational State: Operational 00:44:38.786 Entry Latency: 16 microseconds 00:44:38.786 Exit Latency: 4 microseconds 00:44:38.786 Relative Read Throughput: 0 00:44:38.786 Relative Read Latency: 0 00:44:38.786 Relative Write Throughput: 0 00:44:38.786 Relative Write Latency: 0 00:44:38.786 Idle Power: Not Reported 00:44:38.786 Active Power: Not Reported 00:44:38.786 Non-Operational Permissive Mode: Not Supported 00:44:38.786 00:44:38.786 Health Information 00:44:38.786 ================== 00:44:38.786 Critical Warnings: 00:44:38.786 Available Spare Space: OK 00:44:38.786 Temperature: OK 00:44:38.786 Device Reliability: OK 00:44:38.786 Read Only: No 00:44:38.786 Volatile Memory Backup: OK 00:44:38.786 Current Temperature: 323 Kelvin (50 Celsius) 00:44:38.786 Temperature Threshold: 343 Kelvin (70 Celsius) 00:44:38.786 Available Spare: 0% 00:44:38.786 Available Spare Threshold: 0% 00:44:38.786 Life Percentage Used: 0% 00:44:38.786 Data Units Read: 4570 00:44:38.786 Data Units Written: 4240 00:44:38.786 Host Read Commands: 238808 00:44:38.786 Host Write Commands: 251967 00:44:38.786 Controller Busy Time: 0 minutes 00:44:38.786 Power Cycles: 0 00:44:38.786 Power On Hours: 0 hours 00:44:38.786 Unsafe Shutdowns: 0 00:44:38.786 Unrecoverable Media Errors: 0 00:44:38.786 Lifetime Error Log Entries: 0 00:44:38.786 Warning Temperature Time: 0 minutes 00:44:38.786 Critical Temperature Time: 0 minutes 00:44:38.786 00:44:38.786 Number of Queues 00:44:38.786 ================ 00:44:38.786 Number of I/O Submission Queues: 64 00:44:38.786 Number of I/O Completion Queues: 64 00:44:38.786 00:44:38.786 ZNS Specific Controller Data 00:44:38.786 ============================ 00:44:38.786 Zone Append Size Limit: 0 00:44:38.786 00:44:38.786 00:44:38.786 Active Namespaces 00:44:38.786 ================= 00:44:38.786 Namespace ID:1 00:44:38.786 Error Recovery Timeout: Unlimited 00:44:38.786 Command Set Identifier: NVM 
(00h) 00:44:38.786 Deallocate: Supported 00:44:38.786 Deallocated/Unwritten Error: Supported 00:44:38.786 Deallocated Read Value: All 0x00 00:44:38.786 Deallocate in Write Zeroes: Not Supported 00:44:38.786 Deallocated Guard Field: 0xFFFF 00:44:38.786 Flush: Supported 00:44:38.786 Reservation: Not Supported 00:44:38.786 Namespace Sharing Capabilities: Private 00:44:38.786 Size (in LBAs): 1310720 (5GiB) 00:44:38.786 Capacity (in LBAs): 1310720 (5GiB) 00:44:38.786 Utilization (in LBAs): 1310720 (5GiB) 00:44:38.786 Thin Provisioning: Not Supported 00:44:38.786 Per-NS Atomic Units: No 00:44:38.786 Maximum Single Source Range Length: 128 00:44:38.786 Maximum Copy Length: 128 00:44:38.786 Maximum Source Range Count: 128 00:44:38.787 NGUID/EUI64 Never Reused: No 00:44:38.787 Namespace Write Protected: No 00:44:38.787 Number of LBA Formats: 8 00:44:38.787 Current LBA Format: LBA Format #04 00:44:38.787 LBA Format #00: Data Size: 512 Metadata Size: 0 00:44:38.787 LBA Format #01: Data Size: 512 Metadata Size: 8 00:44:38.787 LBA Format #02: Data Size: 512 Metadata Size: 16 00:44:38.787 LBA Format #03: Data Size: 512 Metadata Size: 64 00:44:38.787 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:44:38.787 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:44:38.787 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:44:38.787 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:44:38.787 00:44:38.787 NVM Specific Namespace Data 00:44:38.787 =========================== 00:44:38.787 Logical Block Storage Tag Mask: 0 00:44:38.787 Protection Information Capabilities: 00:44:38.787 16b Guard Protection Information Storage Tag Support: No 00:44:38.787 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:44:38.787 Storage Tag Check Read Support: No 00:44:38.787 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:38.787 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:38.787 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:38.787 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:38.787 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:38.787 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:38.787 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:38.787 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:38.787 00:44:38.787 real 0m0.842s 00:44:38.787 user 0m0.342s 00:44:38.787 sys 0m0.370s 00:44:38.787 01:11:01 nvme.nvme_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:38.787 01:11:01 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:44:38.787 ************************************ 00:44:38.787 END TEST nvme_identify 00:44:38.787 ************************************ 00:44:38.787 01:11:01 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:44:38.787 01:11:01 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:38.787 01:11:01 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:38.787 01:11:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:38.787 ************************************ 00:44:38.787 START TEST nvme_perf 00:44:38.787 ************************************ 00:44:38.787 01:11:01 nvme.nvme_perf -- common/autotest_common.sh@1123 -- 
# nvme_perf 00:44:38.787 01:11:01 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:44:40.165 Initializing NVMe Controllers 00:44:40.165 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:44:40.165 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:44:40.165 Initialization complete. Launching workers. 00:44:40.165 ======================================================== 00:44:40.165 Latency(us) 00:44:40.165 Device Information : IOPS MiB/s Average min max 00:44:40.165 PCIE (0000:00:10.0) NSID 1 from core 0: 85888.00 1006.50 1488.91 701.96 8163.03 00:44:40.165 ======================================================== 00:44:40.165 Total : 85888.00 1006.50 1488.91 701.96 8163.03 00:44:40.165 00:44:40.165 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:44:40.165 ================================================================================= 00:44:40.165 1.00000% : 838.705us 00:44:40.165 10.00000% : 990.842us 00:44:40.166 25.00000% : 1170.286us 00:44:40.166 50.00000% : 1458.956us 00:44:40.166 75.00000% : 1739.825us 00:44:40.166 90.00000% : 1942.674us 00:44:40.166 95.00000% : 2184.533us 00:44:40.166 98.00000% : 2605.836us 00:44:40.166 99.00000% : 2855.497us 00:44:40.166 99.50000% : 3229.989us 00:44:40.166 99.90000% : 4837.181us 00:44:40.166 99.99000% : 7801.905us 00:44:40.166 99.99900% : 8176.396us 00:44:40.166 99.99990% : 8176.396us 00:44:40.166 99.99999% : 8176.396us 00:44:40.166 00:44:40.166 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:44:40.166 ============================================================================== 00:44:40.166 Range in us Cumulative IO count 00:44:40.166 698.270 - 702.171: 0.0012% ( 1) 00:44:40.166 717.775 - 721.676: 0.0023% ( 1) 00:44:40.166 721.676 - 725.577: 0.0035% ( 1) 00:44:40.166 729.478 - 733.379: 0.0070% ( 3) 00:44:40.166 733.379 - 737.280: 0.0105% ( 3) 00:44:40.166 737.280 - 741.181: 0.0116% ( 1) 00:44:40.166 741.181 - 745.082: 0.0128% ( 1) 00:44:40.166 745.082 - 748.983: 0.0163% ( 3) 00:44:40.166 748.983 - 752.884: 0.0291% ( 11) 00:44:40.166 752.884 - 756.785: 0.0349% ( 5) 00:44:40.166 756.785 - 760.686: 0.0373% ( 2) 00:44:40.166 760.686 - 764.587: 0.0408% ( 3) 00:44:40.166 764.587 - 768.488: 0.0466% ( 5) 00:44:40.166 768.488 - 772.389: 0.0524% ( 5) 00:44:40.166 772.389 - 776.290: 0.0594% ( 6) 00:44:40.166 776.290 - 780.190: 0.0827% ( 20) 00:44:40.166 780.190 - 784.091: 0.1048% ( 19) 00:44:40.166 784.091 - 787.992: 0.1269% ( 19) 00:44:40.166 787.992 - 791.893: 0.1467% ( 17) 00:44:40.166 791.893 - 795.794: 0.1875% ( 35) 00:44:40.166 795.794 - 799.695: 0.2224% ( 30) 00:44:40.166 799.695 - 803.596: 0.2771% ( 47) 00:44:40.166 803.596 - 807.497: 0.3342% ( 49) 00:44:40.166 807.497 - 811.398: 0.3994% ( 56) 00:44:40.166 811.398 - 815.299: 0.4820% ( 71) 00:44:40.166 815.299 - 819.200: 0.5868% ( 90) 00:44:40.166 819.200 - 823.101: 0.6695% ( 71) 00:44:40.166 823.101 - 827.002: 0.7580% ( 76) 00:44:40.166 827.002 - 830.903: 0.8697% ( 96) 00:44:40.166 830.903 - 834.804: 0.9932% ( 106) 00:44:40.166 834.804 - 838.705: 1.1270% ( 115) 00:44:40.166 838.705 - 842.606: 1.2470% ( 103) 00:44:40.166 842.606 - 846.507: 1.4018% ( 133) 00:44:40.166 846.507 - 850.408: 1.5695% ( 144) 00:44:40.166 850.408 - 854.309: 1.7092% ( 120) 00:44:40.166 854.309 - 858.210: 1.8920% ( 157) 00:44:40.166 858.210 - 862.110: 2.0911% ( 171) 00:44:40.166 862.110 - 866.011: 2.2902% ( 171) 00:44:40.166 866.011 - 869.912: 2.4823% ( 165) 00:44:40.166 869.912 - 873.813: 
2.7024% ( 189) 00:44:40.166 873.813 - 877.714: 2.9352% ( 200) 00:44:40.166 877.714 - 881.615: 3.1541% ( 188) 00:44:40.166 881.615 - 885.516: 3.3369% ( 157) 00:44:40.166 885.516 - 889.417: 3.5779% ( 207) 00:44:40.166 889.417 - 893.318: 3.7782% ( 172) 00:44:40.166 893.318 - 897.219: 4.0204% ( 208) 00:44:40.166 897.219 - 901.120: 4.2986% ( 239) 00:44:40.166 901.120 - 905.021: 4.5280% ( 197) 00:44:40.166 905.021 - 908.922: 4.7597% ( 199) 00:44:40.166 908.922 - 912.823: 5.0112% ( 216) 00:44:40.166 912.823 - 916.724: 5.2359% ( 193) 00:44:40.166 916.724 - 920.625: 5.4629% ( 195) 00:44:40.166 920.625 - 924.526: 5.7028% ( 206) 00:44:40.166 924.526 - 928.427: 5.9496% ( 212) 00:44:40.166 928.427 - 932.328: 6.1825% ( 200) 00:44:40.166 932.328 - 936.229: 6.4118% ( 197) 00:44:40.166 936.229 - 940.130: 6.6878% ( 237) 00:44:40.166 940.130 - 944.030: 6.8974% ( 180) 00:44:40.166 944.030 - 947.931: 7.1686% ( 233) 00:44:40.166 947.931 - 951.832: 7.3992% ( 198) 00:44:40.166 951.832 - 955.733: 7.6635% ( 227) 00:44:40.166 955.733 - 959.634: 7.8998% ( 203) 00:44:40.166 959.634 - 963.535: 8.1862% ( 246) 00:44:40.166 963.535 - 967.436: 8.4051% ( 188) 00:44:40.166 967.436 - 971.337: 8.7207% ( 271) 00:44:40.166 971.337 - 975.238: 8.9838% ( 226) 00:44:40.166 975.238 - 979.139: 9.2702% ( 246) 00:44:40.166 979.139 - 983.040: 9.5287% ( 222) 00:44:40.166 983.040 - 986.941: 9.8302% ( 259) 00:44:40.166 986.941 - 990.842: 10.1341% ( 261) 00:44:40.166 990.842 - 994.743: 10.4066% ( 234) 00:44:40.166 994.743 - 998.644: 10.7011% ( 253) 00:44:40.166 998.644 - 1006.446: 11.2996% ( 514) 00:44:40.166 1006.446 - 1014.248: 11.9074% ( 522) 00:44:40.166 1014.248 - 1022.050: 12.5245% ( 530) 00:44:40.166 1022.050 - 1029.851: 13.1462% ( 534) 00:44:40.166 1029.851 - 1037.653: 13.7796% ( 544) 00:44:40.166 1037.653 - 1045.455: 14.4164% ( 547) 00:44:40.166 1045.455 - 1053.257: 15.0161% ( 515) 00:44:40.166 1053.257 - 1061.059: 15.6378% ( 534) 00:44:40.166 1061.059 - 1068.861: 16.3038% ( 572) 00:44:40.166 1068.861 - 1076.663: 16.9348% ( 542) 00:44:40.166 1076.663 - 1084.465: 17.6218% ( 590) 00:44:40.166 1084.465 - 1092.267: 18.2878% ( 572) 00:44:40.166 1092.267 - 1100.069: 18.9305% ( 552) 00:44:40.166 1100.069 - 1107.870: 19.6116% ( 585) 00:44:40.166 1107.870 - 1115.672: 20.2426% ( 542) 00:44:40.166 1115.672 - 1123.474: 20.9412% ( 600) 00:44:40.166 1123.474 - 1131.276: 21.6119% ( 576) 00:44:40.166 1131.276 - 1139.078: 22.3291% ( 616) 00:44:40.166 1139.078 - 1146.880: 22.9939% ( 571) 00:44:40.166 1146.880 - 1154.682: 23.6878% ( 596) 00:44:40.166 1154.682 - 1162.484: 24.3620% ( 579) 00:44:40.166 1162.484 - 1170.286: 25.0757% ( 613) 00:44:40.166 1170.286 - 1178.088: 25.7463% ( 576) 00:44:40.166 1178.088 - 1185.890: 26.4705% ( 622) 00:44:40.166 1185.890 - 1193.691: 27.1377% ( 573) 00:44:40.166 1193.691 - 1201.493: 27.8432% ( 606) 00:44:40.166 1201.493 - 1209.295: 28.5115% ( 574) 00:44:40.166 1209.295 - 1217.097: 29.2264% ( 614) 00:44:40.166 1217.097 - 1224.899: 29.8866% ( 567) 00:44:40.166 1224.899 - 1232.701: 30.6155% ( 626) 00:44:40.166 1232.701 - 1240.503: 31.2756% ( 567) 00:44:40.166 1240.503 - 1248.305: 32.0045% ( 626) 00:44:40.166 1248.305 - 1256.107: 32.6856% ( 585) 00:44:40.166 1256.107 - 1263.909: 33.3888% ( 604) 00:44:40.166 1263.909 - 1271.710: 34.0630% ( 579) 00:44:40.166 1271.710 - 1279.512: 34.7930% ( 627) 00:44:40.166 1279.512 - 1287.314: 35.4636% ( 576) 00:44:40.166 1287.314 - 1295.116: 36.1541% ( 593) 00:44:40.166 1295.116 - 1302.918: 36.8480% ( 596) 00:44:40.166 1302.918 - 1310.720: 37.5140% ( 572) 00:44:40.166 1310.720 - 1318.522: 
38.2358% ( 620) 00:44:40.166 1318.522 - 1326.324: 38.8890% ( 561) 00:44:40.166 1326.324 - 1334.126: 39.6190% ( 627) 00:44:40.166 1334.126 - 1341.928: 40.2676% ( 557) 00:44:40.166 1341.928 - 1349.730: 40.9766% ( 609) 00:44:40.166 1349.730 - 1357.531: 41.6554% ( 583) 00:44:40.166 1357.531 - 1365.333: 42.3435% ( 591) 00:44:40.166 1365.333 - 1373.135: 43.0281% ( 588) 00:44:40.166 1373.135 - 1380.937: 43.7197% ( 594) 00:44:40.166 1380.937 - 1388.739: 44.3962% ( 581) 00:44:40.166 1388.739 - 1396.541: 45.1053% ( 609) 00:44:40.166 1396.541 - 1404.343: 45.7584% ( 561) 00:44:40.166 1404.343 - 1412.145: 46.4687% ( 610) 00:44:40.166 1412.145 - 1419.947: 47.1300% ( 568) 00:44:40.166 1419.947 - 1427.749: 47.8507% ( 619) 00:44:40.166 1427.749 - 1435.550: 48.5202% ( 575) 00:44:40.166 1435.550 - 1443.352: 49.2246% ( 605) 00:44:40.166 1443.352 - 1451.154: 49.9010% ( 581) 00:44:40.166 1451.154 - 1458.956: 50.6031% ( 603) 00:44:40.166 1458.956 - 1466.758: 51.2877% ( 588) 00:44:40.166 1466.758 - 1474.560: 51.9793% ( 594) 00:44:40.166 1474.560 - 1482.362: 52.6500% ( 576) 00:44:40.166 1482.362 - 1490.164: 53.3183% ( 574) 00:44:40.166 1490.164 - 1497.966: 54.0297% ( 611) 00:44:40.166 1497.966 - 1505.768: 54.6991% ( 575) 00:44:40.166 1505.768 - 1513.570: 55.4047% ( 606) 00:44:40.166 1513.570 - 1521.371: 56.0858% ( 585) 00:44:40.166 1521.371 - 1529.173: 56.7786% ( 595) 00:44:40.166 1529.173 - 1536.975: 57.4527% ( 579) 00:44:40.166 1536.975 - 1544.777: 58.1478% ( 597) 00:44:40.166 1544.777 - 1552.579: 58.8301% ( 586) 00:44:40.166 1552.579 - 1560.381: 59.5240% ( 596) 00:44:40.166 1560.381 - 1568.183: 60.1947% ( 576) 00:44:40.166 1568.183 - 1575.985: 60.9049% ( 610) 00:44:40.166 1575.985 - 1583.787: 61.5977% ( 595) 00:44:40.166 1583.787 - 1591.589: 62.2718% ( 579) 00:44:40.166 1591.589 - 1599.390: 62.9494% ( 582) 00:44:40.166 1599.390 - 1607.192: 63.6538% ( 605) 00:44:40.166 1607.192 - 1614.994: 64.3349% ( 585) 00:44:40.166 1614.994 - 1622.796: 65.0405% ( 606) 00:44:40.166 1622.796 - 1630.598: 65.7333% ( 595) 00:44:40.166 1630.598 - 1638.400: 66.3923% ( 566) 00:44:40.166 1638.400 - 1646.202: 67.1141% ( 620) 00:44:40.166 1646.202 - 1654.004: 67.7697% ( 563) 00:44:40.166 1654.004 - 1661.806: 68.4962% ( 624) 00:44:40.166 1661.806 - 1669.608: 69.1400% ( 553) 00:44:40.166 1669.608 - 1677.410: 69.8584% ( 617) 00:44:40.166 1677.410 - 1685.211: 70.4965% ( 548) 00:44:40.166 1685.211 - 1693.013: 71.2335% ( 633) 00:44:40.166 1693.013 - 1700.815: 71.8948% ( 568) 00:44:40.166 1700.815 - 1708.617: 72.5876% ( 595) 00:44:40.166 1708.617 - 1716.419: 73.2722% ( 588) 00:44:40.166 1716.419 - 1724.221: 73.9463% ( 579) 00:44:40.167 1724.221 - 1732.023: 74.6484% ( 603) 00:44:40.167 1732.023 - 1739.825: 75.3411% ( 595) 00:44:40.167 1739.825 - 1747.627: 76.0211% ( 584) 00:44:40.167 1747.627 - 1755.429: 76.7197% ( 600) 00:44:40.167 1755.429 - 1763.230: 77.4136% ( 596) 00:44:40.167 1763.230 - 1771.032: 78.0633% ( 558) 00:44:40.167 1771.032 - 1778.834: 78.7712% ( 608) 00:44:40.167 1778.834 - 1786.636: 79.4255% ( 562) 00:44:40.167 1786.636 - 1794.438: 80.1264% ( 602) 00:44:40.167 1794.438 - 1802.240: 80.7831% ( 564) 00:44:40.167 1802.240 - 1810.042: 81.4805% ( 599) 00:44:40.167 1810.042 - 1817.844: 82.1407% ( 567) 00:44:40.167 1817.844 - 1825.646: 82.8195% ( 583) 00:44:40.167 1825.646 - 1833.448: 83.4540% ( 545) 00:44:40.167 1833.448 - 1841.250: 84.0991% ( 554) 00:44:40.167 1841.250 - 1849.051: 84.6975% ( 514) 00:44:40.167 1849.051 - 1856.853: 85.3123% ( 528) 00:44:40.167 1856.853 - 1864.655: 85.8816% ( 489) 00:44:40.167 1864.655 - 1872.457: 
86.4032% ( 448) 00:44:40.167 1872.457 - 1880.259: 86.9027% ( 429) 00:44:40.167 1880.259 - 1888.061: 87.3882% ( 417) 00:44:40.167 1888.061 - 1895.863: 87.8609% ( 406) 00:44:40.167 1895.863 - 1903.665: 88.2917% ( 370) 00:44:40.167 1903.665 - 1911.467: 88.7155% ( 364) 00:44:40.167 1911.467 - 1919.269: 89.1033% ( 333) 00:44:40.167 1919.269 - 1927.070: 89.4840% ( 327) 00:44:40.167 1927.070 - 1934.872: 89.8344% ( 301) 00:44:40.167 1934.872 - 1942.674: 90.1977% ( 312) 00:44:40.167 1942.674 - 1950.476: 90.4993% ( 259) 00:44:40.167 1950.476 - 1958.278: 90.8218% ( 277) 00:44:40.167 1958.278 - 1966.080: 91.0931% ( 233) 00:44:40.167 1966.080 - 1973.882: 91.3760% ( 243) 00:44:40.167 1973.882 - 1981.684: 91.6461% ( 232) 00:44:40.167 1981.684 - 1989.486: 91.8801% ( 201) 00:44:40.167 1989.486 - 1997.288: 92.1374% ( 221) 00:44:40.167 1997.288 - 2012.891: 92.5787% ( 379) 00:44:40.167 2012.891 - 2028.495: 92.9583% ( 326) 00:44:40.167 2028.495 - 2044.099: 93.3041% ( 297) 00:44:40.167 2044.099 - 2059.703: 93.6033% ( 257) 00:44:40.167 2059.703 - 2075.307: 93.8699% ( 229) 00:44:40.167 2075.307 - 2090.910: 94.1098% ( 206) 00:44:40.167 2090.910 - 2106.514: 94.3205% ( 181) 00:44:40.167 2106.514 - 2122.118: 94.5080% ( 161) 00:44:40.167 2122.118 - 2137.722: 94.6698% ( 139) 00:44:40.167 2137.722 - 2153.326: 94.8212% ( 130) 00:44:40.167 2153.326 - 2168.930: 94.9714% ( 129) 00:44:40.167 2168.930 - 2184.533: 95.1111% ( 120) 00:44:40.167 2184.533 - 2200.137: 95.2380% ( 109) 00:44:40.167 2200.137 - 2215.741: 95.3602% ( 105) 00:44:40.167 2215.741 - 2231.345: 95.4743% ( 98) 00:44:40.167 2231.345 - 2246.949: 95.6001% ( 108) 00:44:40.167 2246.949 - 2262.552: 95.7177% ( 101) 00:44:40.167 2262.552 - 2278.156: 95.8295% ( 96) 00:44:40.167 2278.156 - 2293.760: 95.9377% ( 93) 00:44:40.167 2293.760 - 2309.364: 96.0507% ( 97) 00:44:40.167 2309.364 - 2324.968: 96.1624% ( 96) 00:44:40.167 2324.968 - 2340.571: 96.2742% ( 96) 00:44:40.167 2340.571 - 2356.175: 96.3755% ( 87) 00:44:40.167 2356.175 - 2371.779: 96.4815% ( 91) 00:44:40.167 2371.779 - 2387.383: 96.5944% ( 97) 00:44:40.167 2387.383 - 2402.987: 96.7050% ( 95) 00:44:40.167 2402.987 - 2418.590: 96.8051% ( 86) 00:44:40.167 2418.590 - 2434.194: 96.9099% ( 90) 00:44:40.167 2434.194 - 2449.798: 97.0182% ( 93) 00:44:40.167 2449.798 - 2465.402: 97.1300% ( 96) 00:44:40.167 2465.402 - 2481.006: 97.2394% ( 94) 00:44:40.167 2481.006 - 2496.610: 97.3326% ( 80) 00:44:40.167 2496.610 - 2512.213: 97.4374% ( 90) 00:44:40.167 2512.213 - 2527.817: 97.5421% ( 90) 00:44:40.167 2527.817 - 2543.421: 97.6481% ( 91) 00:44:40.167 2543.421 - 2559.025: 97.7436% ( 82) 00:44:40.167 2559.025 - 2574.629: 97.8437% ( 86) 00:44:40.167 2574.629 - 2590.232: 97.9403% ( 83) 00:44:40.167 2590.232 - 2605.836: 98.0475% ( 92) 00:44:40.167 2605.836 - 2621.440: 98.1359% ( 76) 00:44:40.167 2621.440 - 2637.044: 98.2337% ( 84) 00:44:40.167 2637.044 - 2652.648: 98.3269% ( 80) 00:44:40.167 2652.648 - 2668.251: 98.4154% ( 76) 00:44:40.167 2668.251 - 2683.855: 98.4934% ( 67) 00:44:40.167 2683.855 - 2699.459: 98.5609% ( 58) 00:44:40.167 2699.459 - 2715.063: 98.6308% ( 60) 00:44:40.167 2715.063 - 2730.667: 98.6913% ( 52) 00:44:40.167 2730.667 - 2746.270: 98.7449% ( 46) 00:44:40.167 2746.270 - 2761.874: 98.7926% ( 41) 00:44:40.167 2761.874 - 2777.478: 98.8392% ( 40) 00:44:40.167 2777.478 - 2793.082: 98.8799% ( 35) 00:44:40.167 2793.082 - 2808.686: 98.9160% ( 31) 00:44:40.167 2808.686 - 2824.290: 98.9498% ( 29) 00:44:40.167 2824.290 - 2839.893: 98.9801% ( 26) 00:44:40.167 2839.893 - 2855.497: 99.0127% ( 28) 00:44:40.167 2855.497 - 
2871.101: 99.0383% ( 22) 00:44:40.167 2871.101 - 2886.705: 99.0627% ( 21) 00:44:40.167 2886.705 - 2902.309: 99.0907% ( 24) 00:44:40.167 2902.309 - 2917.912: 99.1151% ( 21) 00:44:40.167 2917.912 - 2933.516: 99.1396% ( 21) 00:44:40.167 2933.516 - 2949.120: 99.1582% ( 16) 00:44:40.167 2949.120 - 2964.724: 99.1827% ( 21) 00:44:40.167 2964.724 - 2980.328: 99.2024% ( 17) 00:44:40.167 2980.328 - 2995.931: 99.2222% ( 17) 00:44:40.167 2995.931 - 3011.535: 99.2409% ( 16) 00:44:40.167 3011.535 - 3027.139: 99.2595% ( 16) 00:44:40.167 3027.139 - 3042.743: 99.2793% ( 17) 00:44:40.167 3042.743 - 3058.347: 99.2991% ( 17) 00:44:40.167 3058.347 - 3073.950: 99.3177% ( 16) 00:44:40.167 3073.950 - 3089.554: 99.3340% ( 14) 00:44:40.167 3089.554 - 3105.158: 99.3538% ( 17) 00:44:40.167 3105.158 - 3120.762: 99.3736% ( 17) 00:44:40.167 3120.762 - 3136.366: 99.3887% ( 13) 00:44:40.167 3136.366 - 3151.970: 99.4074% ( 16) 00:44:40.167 3151.970 - 3167.573: 99.4272% ( 17) 00:44:40.167 3167.573 - 3183.177: 99.4435% ( 14) 00:44:40.167 3183.177 - 3198.781: 99.4644% ( 18) 00:44:40.167 3198.781 - 3214.385: 99.4842% ( 17) 00:44:40.167 3214.385 - 3229.989: 99.5028% ( 16) 00:44:40.167 3229.989 - 3245.592: 99.5191% ( 14) 00:44:40.167 3245.592 - 3261.196: 99.5401% ( 18) 00:44:40.167 3261.196 - 3276.800: 99.5576% ( 15) 00:44:40.167 3276.800 - 3292.404: 99.5774% ( 17) 00:44:40.167 3292.404 - 3308.008: 99.5913% ( 12) 00:44:40.167 3308.008 - 3323.611: 99.6041% ( 11) 00:44:40.167 3323.611 - 3339.215: 99.6169% ( 11) 00:44:40.167 3339.215 - 3354.819: 99.6263% ( 8) 00:44:40.167 3354.819 - 3370.423: 99.6356% ( 8) 00:44:40.167 3370.423 - 3386.027: 99.6461% ( 9) 00:44:40.167 3386.027 - 3401.630: 99.6542% ( 7) 00:44:40.167 3401.630 - 3417.234: 99.6600% ( 5) 00:44:40.167 3417.234 - 3432.838: 99.6658% ( 5) 00:44:40.167 3432.838 - 3448.442: 99.6717% ( 5) 00:44:40.167 3448.442 - 3464.046: 99.6775% ( 5) 00:44:40.167 3464.046 - 3479.650: 99.6845% ( 6) 00:44:40.167 3479.650 - 3495.253: 99.6903% ( 5) 00:44:40.167 3495.253 - 3510.857: 99.6950% ( 4) 00:44:40.167 3510.857 - 3526.461: 99.7008% ( 5) 00:44:40.167 3526.461 - 3542.065: 99.7054% ( 4) 00:44:40.167 3542.065 - 3557.669: 99.7124% ( 6) 00:44:40.167 3557.669 - 3573.272: 99.7159% ( 3) 00:44:40.167 3573.272 - 3588.876: 99.7206% ( 4) 00:44:40.167 3588.876 - 3604.480: 99.7264% ( 5) 00:44:40.167 3604.480 - 3620.084: 99.7299% ( 3) 00:44:40.167 3620.084 - 3635.688: 99.7345% ( 4) 00:44:40.167 3635.688 - 3651.291: 99.7392% ( 4) 00:44:40.167 3651.291 - 3666.895: 99.7427% ( 3) 00:44:40.167 3666.895 - 3682.499: 99.7485% ( 5) 00:44:40.167 3682.499 - 3698.103: 99.7520% ( 3) 00:44:40.167 3698.103 - 3713.707: 99.7567% ( 4) 00:44:40.167 3713.707 - 3729.310: 99.7602% ( 3) 00:44:40.167 3729.310 - 3744.914: 99.7648% ( 4) 00:44:40.167 3744.914 - 3760.518: 99.7683% ( 3) 00:44:40.167 3760.518 - 3776.122: 99.7718% ( 3) 00:44:40.167 3776.122 - 3791.726: 99.7776% ( 5) 00:44:40.167 3791.726 - 3807.330: 99.7811% ( 3) 00:44:40.167 3807.330 - 3822.933: 99.7858% ( 4) 00:44:40.167 3822.933 - 3838.537: 99.7893% ( 3) 00:44:40.167 3838.537 - 3854.141: 99.7928% ( 3) 00:44:40.167 3854.141 - 3869.745: 99.7974% ( 4) 00:44:40.167 3869.745 - 3885.349: 99.8009% ( 3) 00:44:40.167 3885.349 - 3900.952: 99.8032% ( 2) 00:44:40.167 3900.952 - 3916.556: 99.8079% ( 4) 00:44:40.167 3916.556 - 3932.160: 99.8102% ( 2) 00:44:40.167 3932.160 - 3947.764: 99.8137% ( 3) 00:44:40.167 3947.764 - 3963.368: 99.8172% ( 3) 00:44:40.167 3963.368 - 3978.971: 99.8219% ( 4) 00:44:40.167 3978.971 - 3994.575: 99.8230% ( 1) 00:44:40.167 3994.575 - 4025.783: 99.8312% 
( 7) 00:44:40.167 4025.783 - 4056.990: 99.8347% ( 3) 00:44:40.167 4056.990 - 4088.198: 99.8382% ( 3) 00:44:40.167 4088.198 - 4119.406: 99.8440% ( 5) 00:44:40.167 4119.406 - 4150.613: 99.8486% ( 4) 00:44:40.167 4150.613 - 4181.821: 99.8545% ( 5) 00:44:40.167 4181.821 - 4213.029: 99.8591% ( 4) 00:44:40.167 4213.029 - 4244.236: 99.8626% ( 3) 00:44:40.167 4244.236 - 4275.444: 99.8661% ( 3) 00:44:40.167 4275.444 - 4306.651: 99.8696% ( 3) 00:44:40.167 4306.651 - 4337.859: 99.8719% ( 2) 00:44:40.167 4337.859 - 4369.067: 99.8754% ( 3) 00:44:40.167 4369.067 - 4400.274: 99.8777% ( 2) 00:44:40.167 4400.274 - 4431.482: 99.8812% ( 3) 00:44:40.167 4431.482 - 4462.690: 99.8847% ( 3) 00:44:40.167 4462.690 - 4493.897: 99.8871% ( 2) 00:44:40.168 4493.897 - 4525.105: 99.8906% ( 3) 00:44:40.168 4525.105 - 4556.312: 99.8917% ( 1) 00:44:40.168 4556.312 - 4587.520: 99.8929% ( 1) 00:44:40.168 4587.520 - 4618.728: 99.8940% ( 1) 00:44:40.168 4618.728 - 4649.935: 99.8952% ( 1) 00:44:40.168 4649.935 - 4681.143: 99.8964% ( 1) 00:44:40.168 4712.350 - 4743.558: 99.8975% ( 1) 00:44:40.168 4743.558 - 4774.766: 99.8987% ( 1) 00:44:40.168 4774.766 - 4805.973: 99.8999% ( 1) 00:44:40.168 4805.973 - 4837.181: 99.9010% ( 1) 00:44:40.168 4868.389 - 4899.596: 99.9022% ( 1) 00:44:40.168 4899.596 - 4930.804: 99.9034% ( 1) 00:44:40.168 4930.804 - 4962.011: 99.9045% ( 1) 00:44:40.168 4962.011 - 4993.219: 99.9057% ( 1) 00:44:40.168 4993.219 - 5024.427: 99.9069% ( 1) 00:44:40.168 5055.634 - 5086.842: 99.9080% ( 1) 00:44:40.168 5086.842 - 5118.050: 99.9092% ( 1) 00:44:40.168 5118.050 - 5149.257: 99.9103% ( 1) 00:44:40.168 5180.465 - 5211.672: 99.9115% ( 1) 00:44:40.168 5211.672 - 5242.880: 99.9127% ( 1) 00:44:40.168 5242.880 - 5274.088: 99.9138% ( 1) 00:44:40.168 5274.088 - 5305.295: 99.9150% ( 1) 00:44:40.168 5305.295 - 5336.503: 99.9162% ( 1) 00:44:40.168 5367.710 - 5398.918: 99.9173% ( 1) 00:44:40.168 5398.918 - 5430.126: 99.9185% ( 1) 00:44:40.168 5430.126 - 5461.333: 99.9197% ( 1) 00:44:40.168 5461.333 - 5492.541: 99.9208% ( 1) 00:44:40.168 5492.541 - 5523.749: 99.9220% ( 1) 00:44:40.168 5554.956 - 5586.164: 99.9232% ( 1) 00:44:40.168 5586.164 - 5617.371: 99.9243% ( 1) 00:44:40.168 5617.371 - 5648.579: 99.9255% ( 1) 00:44:40.168 5710.994 - 5742.202: 99.9266% ( 1) 00:44:40.168 5742.202 - 5773.410: 99.9278% ( 1) 00:44:40.168 5804.617 - 5835.825: 99.9290% ( 1) 00:44:40.168 5835.825 - 5867.032: 99.9301% ( 1) 00:44:40.168 5867.032 - 5898.240: 99.9313% ( 1) 00:44:40.168 5929.448 - 5960.655: 99.9336% ( 2) 00:44:40.168 5991.863 - 6023.070: 99.9348% ( 1) 00:44:40.168 6023.070 - 6054.278: 99.9360% ( 1) 00:44:40.168 6054.278 - 6085.486: 99.9371% ( 1) 00:44:40.168 6085.486 - 6116.693: 99.9383% ( 1) 00:44:40.168 6116.693 - 6147.901: 99.9395% ( 1) 00:44:40.168 6147.901 - 6179.109: 99.9406% ( 1) 00:44:40.168 6210.316 - 6241.524: 99.9418% ( 1) 00:44:40.168 6241.524 - 6272.731: 99.9429% ( 1) 00:44:40.168 6272.731 - 6303.939: 99.9441% ( 1) 00:44:40.168 6303.939 - 6335.147: 99.9453% ( 1) 00:44:40.168 6366.354 - 6397.562: 99.9464% ( 1) 00:44:40.168 6397.562 - 6428.770: 99.9476% ( 1) 00:44:40.168 6428.770 - 6459.977: 99.9488% ( 1) 00:44:40.168 6491.185 - 6522.392: 99.9499% ( 1) 00:44:40.168 6522.392 - 6553.600: 99.9511% ( 1) 00:44:40.168 6584.808 - 6616.015: 99.9523% ( 1) 00:44:40.168 6616.015 - 6647.223: 99.9534% ( 1) 00:44:40.168 6647.223 - 6678.430: 99.9546% ( 1) 00:44:40.168 6678.430 - 6709.638: 99.9558% ( 1) 00:44:40.168 6709.638 - 6740.846: 99.9569% ( 1) 00:44:40.168 6740.846 - 6772.053: 99.9581% ( 1) 00:44:40.168 6803.261 - 6834.469: 99.9592% ( 
1) 00:44:40.168 6834.469 - 6865.676: 99.9604% ( 1) 00:44:40.168 6865.676 - 6896.884: 99.9616% ( 1) 00:44:40.168 6896.884 - 6928.091: 99.9627% ( 1) 00:44:40.168 6928.091 - 6959.299: 99.9639% ( 1) 00:44:40.168 6990.507 - 7021.714: 99.9651% ( 1) 00:44:40.168 7021.714 - 7052.922: 99.9662% ( 1) 00:44:40.168 7052.922 - 7084.130: 99.9674% ( 1) 00:44:40.168 7084.130 - 7115.337: 99.9686% ( 1) 00:44:40.168 7146.545 - 7177.752: 99.9697% ( 1) 00:44:40.168 7177.752 - 7208.960: 99.9709% ( 1) 00:44:40.168 7208.960 - 7240.168: 99.9721% ( 1) 00:44:40.168 7240.168 - 7271.375: 99.9732% ( 1) 00:44:40.168 7271.375 - 7302.583: 99.9744% ( 1) 00:44:40.168 7333.790 - 7364.998: 99.9755% ( 1) 00:44:40.168 7364.998 - 7396.206: 99.9767% ( 1) 00:44:40.168 7396.206 - 7427.413: 99.9779% ( 1) 00:44:40.168 7427.413 - 7458.621: 99.9790% ( 1) 00:44:40.168 7458.621 - 7489.829: 99.9802% ( 1) 00:44:40.168 7521.036 - 7552.244: 99.9814% ( 1) 00:44:40.168 7552.244 - 7583.451: 99.9825% ( 1) 00:44:40.168 7583.451 - 7614.659: 99.9837% ( 1) 00:44:40.168 7614.659 - 7645.867: 99.9849% ( 1) 00:44:40.168 7645.867 - 7677.074: 99.9860% ( 1) 00:44:40.168 7708.282 - 7739.490: 99.9872% ( 1) 00:44:40.168 7739.490 - 7770.697: 99.9884% ( 1) 00:44:40.168 7770.697 - 7801.905: 99.9907% ( 2) 00:44:40.168 7833.112 - 7864.320: 99.9918% ( 1) 00:44:40.168 7864.320 - 7895.528: 99.9930% ( 1) 00:44:40.168 7895.528 - 7926.735: 99.9942% ( 1) 00:44:40.168 7926.735 - 7957.943: 99.9953% ( 1) 00:44:40.168 7989.150 - 8051.566: 99.9977% ( 2) 00:44:40.168 8051.566 - 8113.981: 99.9988% ( 1) 00:44:40.168 8113.981 - 8176.396: 100.0000% ( 1) 00:44:40.168 00:44:40.168 01:11:02 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:44:41.541 Initializing NVMe Controllers 00:44:41.541 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:44:41.541 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:44:41.541 Initialization complete. Launching workers. 
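The spdk_nvme_perf invocation just above (-q 128 -w write -o 12288 -t 1 -LL -i 0) runs a one-second, 12288-byte write workload; as I read the tool's usage text, -q sets how many commands are kept outstanding on the polled qpair and the doubled -L requests the detailed latency histogram that follows. Operationally that is a fixed-queue-depth, polled-mode loop. The sketch below shows the general shape of such a loop only; it is not the perf tool's source, io_ctx/submit_one/run_fixed_queue_depth are illustrative names, LBAS_PER_IO assumes a 512-byte-sector namespace, and buffer/qpair setup plus error handling are omitted.

    #include <stdint.h>
    #include "spdk/nvme.h"

    #define QUEUE_DEPTH 128          /* mirrors the -q 128 above */
    #define LBAS_PER_IO 24           /* assumption: 12288-byte I/O on a 512-byte-sector namespace */

    struct io_ctx {
        struct spdk_nvme_ns    *ns;
        struct spdk_nvme_qpair *qpair;
        void                   *buf;   /* DMA-able buffer, e.g. from spdk_zmalloc() (setup not shown) */
        uint64_t                lba;
    };

    static uint64_t g_completed;

    static void submit_one(struct io_ctx *ctx);

    static void io_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        g_completed++;
        /* Resubmit right away so exactly QUEUE_DEPTH commands stay in flight. */
        submit_one(arg);
    }

    static void submit_one(struct io_ctx *ctx)
    {
        /* Error handling trimmed for brevity. */
        spdk_nvme_ns_cmd_write(ctx->ns, ctx->qpair, ctx->buf,
                               ctx->lba, LBAS_PER_IO, io_done, ctx, 0);
    }

    static void run_fixed_queue_depth(struct io_ctx *ctxs, uint64_t target)
    {
        for (int i = 0; i < QUEUE_DEPTH; i++) {
            submit_one(&ctxs[i]);
        }
        while (g_completed < target) {
            /* Polled mode: completions (and their latency samples) are only
             * reaped when the core explicitly asks for them. */
            spdk_nvme_qpair_process_completions(ctxs[0].qpair, 0);
        }
    }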
00:44:41.541 ======================================================== 00:44:41.541 Latency(us) 00:44:41.541 Device Information : IOPS MiB/s Average min max 00:44:41.541 PCIE (0000:00:10.0) NSID 1 from core 0: 81304.92 952.79 1574.17 485.21 8996.37 00:44:41.541 ======================================================== 00:44:41.541 Total : 81304.92 952.79 1574.17 485.21 8996.37 00:44:41.541 00:44:41.541 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:44:41.541 ================================================================================= 00:44:41.541 1.00000% : 986.941us 00:44:41.541 10.00000% : 1248.305us 00:44:41.541 25.00000% : 1380.937us 00:44:41.541 50.00000% : 1529.173us 00:44:41.541 75.00000% : 1716.419us 00:44:41.541 90.00000% : 1895.863us 00:44:41.541 95.00000% : 2059.703us 00:44:41.541 98.00000% : 2387.383us 00:44:41.541 99.00000% : 2980.328us 00:44:41.541 99.50000% : 3666.895us 00:44:41.541 99.90000% : 5492.541us 00:44:41.541 99.99000% : 6740.846us 00:44:41.541 99.99900% : 9050.210us 00:44:41.541 99.99990% : 9050.210us 00:44:41.541 99.99999% : 9050.210us 00:44:41.541 00:44:41.541 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:44:41.541 ============================================================================== 00:44:41.541 Range in us Cumulative IO count 00:44:41.541 483.718 - 485.669: 0.0012% ( 1) 00:44:41.541 526.629 - 530.530: 0.0025% ( 1) 00:44:41.541 530.530 - 534.430: 0.0037% ( 1) 00:44:41.541 534.430 - 538.331: 0.0049% ( 1) 00:44:41.541 542.232 - 546.133: 0.0086% ( 3) 00:44:41.541 546.133 - 550.034: 0.0098% ( 1) 00:44:41.541 550.034 - 553.935: 0.0111% ( 1) 00:44:41.541 561.737 - 565.638: 0.0123% ( 1) 00:44:41.541 600.747 - 604.648: 0.0135% ( 1) 00:44:41.541 612.450 - 616.350: 0.0172% ( 3) 00:44:41.541 616.350 - 620.251: 0.0209% ( 3) 00:44:41.541 620.251 - 624.152: 0.0234% ( 2) 00:44:41.541 624.152 - 628.053: 0.0258% ( 2) 00:44:41.541 628.053 - 631.954: 0.0271% ( 1) 00:44:41.541 635.855 - 639.756: 0.0283% ( 1) 00:44:41.541 639.756 - 643.657: 0.0307% ( 2) 00:44:41.541 643.657 - 647.558: 0.0320% ( 1) 00:44:41.541 647.558 - 651.459: 0.0332% ( 1) 00:44:41.541 659.261 - 663.162: 0.0357% ( 2) 00:44:41.541 663.162 - 667.063: 0.0406% ( 4) 00:44:41.541 667.063 - 670.964: 0.0443% ( 3) 00:44:41.541 670.964 - 674.865: 0.0455% ( 1) 00:44:41.541 682.667 - 686.568: 0.0480% ( 2) 00:44:41.541 686.568 - 690.469: 0.0492% ( 1) 00:44:41.541 690.469 - 694.370: 0.0529% ( 3) 00:44:41.541 694.370 - 698.270: 0.0566% ( 3) 00:44:41.541 702.171 - 706.072: 0.0590% ( 2) 00:44:41.541 706.072 - 709.973: 0.0603% ( 1) 00:44:41.541 709.973 - 713.874: 0.0639% ( 3) 00:44:41.541 713.874 - 717.775: 0.0664% ( 2) 00:44:41.541 717.775 - 721.676: 0.0689% ( 2) 00:44:41.541 721.676 - 725.577: 0.0701% ( 1) 00:44:41.541 725.577 - 729.478: 0.0713% ( 1) 00:44:41.541 733.379 - 737.280: 0.0750% ( 3) 00:44:41.541 737.280 - 741.181: 0.0787% ( 3) 00:44:41.541 741.181 - 745.082: 0.0824% ( 3) 00:44:41.541 745.082 - 748.983: 0.0873% ( 4) 00:44:41.541 748.983 - 752.884: 0.0885% ( 1) 00:44:41.541 752.884 - 756.785: 0.0910% ( 2) 00:44:41.541 756.785 - 760.686: 0.0959% ( 4) 00:44:41.541 760.686 - 764.587: 0.0984% ( 2) 00:44:41.541 764.587 - 768.488: 0.1033% ( 4) 00:44:41.541 768.488 - 772.389: 0.1095% ( 5) 00:44:41.541 772.389 - 776.290: 0.1131% ( 3) 00:44:41.541 776.290 - 780.190: 0.1193% ( 5) 00:44:41.541 780.190 - 784.091: 0.1205% ( 1) 00:44:41.541 784.091 - 787.992: 0.1217% ( 1) 00:44:41.541 787.992 - 791.893: 0.1267% ( 4) 00:44:41.541 791.893 - 795.794: 0.1328% ( 5) 00:44:41.541 795.794 - 
799.695: 0.1377% ( 4) 00:44:41.541 799.695 - 803.596: 0.1414% ( 3) 00:44:41.541 803.596 - 807.497: 0.1439% ( 2) 00:44:41.541 807.497 - 811.398: 0.1476% ( 3) 00:44:41.541 811.398 - 815.299: 0.1513% ( 3) 00:44:41.541 815.299 - 819.200: 0.1574% ( 5) 00:44:41.541 819.200 - 823.101: 0.1599% ( 2) 00:44:41.541 823.101 - 827.002: 0.1623% ( 2) 00:44:41.541 827.002 - 830.903: 0.1709% ( 7) 00:44:41.541 830.903 - 834.804: 0.1771% ( 5) 00:44:41.541 834.804 - 838.705: 0.1808% ( 3) 00:44:41.542 838.705 - 842.606: 0.1845% ( 3) 00:44:41.542 842.606 - 846.507: 0.1906% ( 5) 00:44:41.542 846.507 - 850.408: 0.1918% ( 1) 00:44:41.542 850.408 - 854.309: 0.2029% ( 9) 00:44:41.542 854.309 - 858.210: 0.2091% ( 5) 00:44:41.542 858.210 - 862.110: 0.2164% ( 6) 00:44:41.542 862.110 - 866.011: 0.2238% ( 6) 00:44:41.542 866.011 - 869.912: 0.2423% ( 15) 00:44:41.542 869.912 - 873.813: 0.2509% ( 7) 00:44:41.542 873.813 - 877.714: 0.2619% ( 9) 00:44:41.542 877.714 - 881.615: 0.2730% ( 9) 00:44:41.542 881.615 - 885.516: 0.2915% ( 15) 00:44:41.542 885.516 - 889.417: 0.3062% ( 12) 00:44:41.542 889.417 - 893.318: 0.3185% ( 10) 00:44:41.542 893.318 - 897.219: 0.3284% ( 8) 00:44:41.542 897.219 - 901.120: 0.3529% ( 20) 00:44:41.542 901.120 - 905.021: 0.3702% ( 14) 00:44:41.542 905.021 - 908.922: 0.3898% ( 16) 00:44:41.542 908.922 - 912.823: 0.4181% ( 23) 00:44:41.542 912.823 - 916.724: 0.4329% ( 12) 00:44:41.542 916.724 - 920.625: 0.4563% ( 19) 00:44:41.542 920.625 - 924.526: 0.4808% ( 20) 00:44:41.542 924.526 - 928.427: 0.4993% ( 15) 00:44:41.542 928.427 - 932.328: 0.5313% ( 26) 00:44:41.542 932.328 - 936.229: 0.5694% ( 31) 00:44:41.542 936.229 - 940.130: 0.5891% ( 16) 00:44:41.542 940.130 - 944.030: 0.6038% ( 12) 00:44:41.542 944.030 - 947.931: 0.6321% ( 23) 00:44:41.542 947.931 - 951.832: 0.6629% ( 25) 00:44:41.542 951.832 - 955.733: 0.7022% ( 32) 00:44:41.542 955.733 - 959.634: 0.7342% ( 26) 00:44:41.542 959.634 - 963.535: 0.7748% ( 33) 00:44:41.542 963.535 - 967.436: 0.8055% ( 25) 00:44:41.542 967.436 - 971.337: 0.8412% ( 29) 00:44:41.542 971.337 - 975.238: 0.8990% ( 47) 00:44:41.542 975.238 - 979.139: 0.9383% ( 32) 00:44:41.542 979.139 - 983.040: 0.9666% ( 23) 00:44:41.542 983.040 - 986.941: 1.0084% ( 34) 00:44:41.542 986.941 - 990.842: 1.0527% ( 36) 00:44:41.542 990.842 - 994.743: 1.1142% ( 50) 00:44:41.542 994.743 - 998.644: 1.1511% ( 30) 00:44:41.542 998.644 - 1006.446: 1.2421% ( 74) 00:44:41.542 1006.446 - 1014.248: 1.3405% ( 80) 00:44:41.542 1014.248 - 1022.050: 1.4561% ( 94) 00:44:41.542 1022.050 - 1029.851: 1.5569% ( 82) 00:44:41.542 1029.851 - 1037.653: 1.6811% ( 101) 00:44:41.542 1037.653 - 1045.455: 1.7918% ( 90) 00:44:41.542 1045.455 - 1053.257: 1.9381% ( 119) 00:44:41.542 1053.257 - 1061.059: 2.0734% ( 110) 00:44:41.542 1061.059 - 1068.861: 2.2148% ( 115) 00:44:41.542 1068.861 - 1076.663: 2.3489% ( 109) 00:44:41.542 1076.663 - 1084.465: 2.5321% ( 149) 00:44:41.542 1084.465 - 1092.267: 2.6945% ( 132) 00:44:41.542 1092.267 - 1100.069: 2.8974% ( 165) 00:44:41.542 1100.069 - 1107.870: 3.0954% ( 161) 00:44:41.542 1107.870 - 1115.672: 3.3167% ( 180) 00:44:41.542 1115.672 - 1123.474: 3.5664% ( 203) 00:44:41.542 1123.474 - 1131.276: 3.8492% ( 230) 00:44:41.542 1131.276 - 1139.078: 4.0989% ( 203) 00:44:41.542 1139.078 - 1146.880: 4.3916% ( 238) 00:44:41.542 1146.880 - 1154.682: 4.7445% ( 287) 00:44:41.542 1154.682 - 1162.484: 5.0876% ( 279) 00:44:41.542 1162.484 - 1170.286: 5.4689% ( 310) 00:44:41.542 1170.286 - 1178.088: 5.8353% ( 298) 00:44:41.542 1178.088 - 1185.890: 6.2092% ( 304) 00:44:41.542 1185.890 - 1193.691: 
6.6064% ( 323) 00:44:41.542 1193.691 - 1201.493: 7.0602% ( 369) 00:44:41.542 1201.493 - 1209.295: 7.5029% ( 360) 00:44:41.542 1209.295 - 1217.097: 7.9752% ( 384) 00:44:41.542 1217.097 - 1224.899: 8.4388% ( 377) 00:44:41.542 1224.899 - 1232.701: 8.9664% ( 429) 00:44:41.542 1232.701 - 1240.503: 9.5431% ( 469) 00:44:41.542 1240.503 - 1248.305: 10.1261% ( 474) 00:44:41.542 1248.305 - 1256.107: 10.7483% ( 506) 00:44:41.542 1256.107 - 1263.909: 11.4149% ( 542) 00:44:41.542 1263.909 - 1271.710: 12.1564% ( 603) 00:44:41.542 1271.710 - 1279.512: 12.8820% ( 590) 00:44:41.542 1279.512 - 1287.314: 13.7072% ( 671) 00:44:41.542 1287.314 - 1295.116: 14.7045% ( 811) 00:44:41.542 1295.116 - 1302.918: 15.6367% ( 758) 00:44:41.542 1302.918 - 1310.720: 16.6439% ( 819) 00:44:41.542 1310.720 - 1318.522: 17.6044% ( 781) 00:44:41.542 1318.522 - 1326.324: 18.6054% ( 814) 00:44:41.542 1326.324 - 1334.126: 19.5069% ( 733) 00:44:41.542 1334.126 - 1341.928: 20.6100% ( 897) 00:44:41.542 1341.928 - 1349.730: 21.5483% ( 763) 00:44:41.542 1349.730 - 1357.531: 22.5432% ( 809) 00:44:41.542 1357.531 - 1365.333: 23.5406% ( 811) 00:44:41.542 1365.333 - 1373.135: 24.6597% ( 910) 00:44:41.542 1373.135 - 1380.937: 25.7825% ( 913) 00:44:41.542 1380.937 - 1388.739: 26.8647% ( 880) 00:44:41.542 1388.739 - 1396.541: 27.9506% ( 883) 00:44:41.542 1396.541 - 1404.343: 29.1090% ( 942) 00:44:41.542 1404.343 - 1412.145: 30.3056% ( 973) 00:44:41.542 1412.145 - 1419.947: 31.5378% ( 1002) 00:44:41.542 1419.947 - 1427.749: 32.7935% ( 1021) 00:44:41.542 1427.749 - 1435.550: 33.9888% ( 972) 00:44:41.542 1435.550 - 1443.352: 35.3625% ( 1117) 00:44:41.542 1443.352 - 1451.154: 36.8382% ( 1200) 00:44:41.542 1451.154 - 1458.956: 38.1799% ( 1091) 00:44:41.542 1458.956 - 1466.758: 39.4798% ( 1057) 00:44:41.542 1466.758 - 1474.560: 40.7674% ( 1047) 00:44:41.542 1474.560 - 1482.362: 42.5088% ( 1416) 00:44:41.542 1482.362 - 1490.164: 44.0644% ( 1265) 00:44:41.542 1490.164 - 1497.966: 45.7173% ( 1344) 00:44:41.542 1497.966 - 1505.768: 47.1623% ( 1175) 00:44:41.542 1505.768 - 1513.570: 48.5728% ( 1147) 00:44:41.542 1513.570 - 1521.371: 49.9268% ( 1101) 00:44:41.542 1521.371 - 1529.173: 51.3460% ( 1154) 00:44:41.542 1529.173 - 1536.975: 52.6631% ( 1071) 00:44:41.542 1536.975 - 1544.777: 54.0368% ( 1117) 00:44:41.542 1544.777 - 1552.579: 55.2813% ( 1012) 00:44:41.542 1552.579 - 1560.381: 56.4558% ( 955) 00:44:41.542 1560.381 - 1568.183: 57.5995% ( 930) 00:44:41.542 1568.183 - 1575.985: 58.7186% ( 910) 00:44:41.542 1575.985 - 1583.787: 59.8942% ( 956) 00:44:41.542 1583.787 - 1591.589: 61.0060% ( 904) 00:44:41.542 1591.589 - 1599.390: 61.9972% ( 806) 00:44:41.542 1599.390 - 1607.192: 63.0413% ( 849) 00:44:41.542 1607.192 - 1614.994: 64.0325% ( 806) 00:44:41.542 1614.994 - 1622.796: 64.9991% ( 786) 00:44:41.542 1622.796 - 1630.598: 66.0247% ( 834) 00:44:41.542 1630.598 - 1638.400: 66.7380% ( 580) 00:44:41.542 1638.400 - 1646.202: 67.6997% ( 782) 00:44:41.542 1646.202 - 1654.004: 68.6011% ( 733) 00:44:41.542 1654.004 - 1661.806: 69.5235% ( 750) 00:44:41.542 1661.806 - 1669.608: 70.3265% ( 653) 00:44:41.542 1669.608 - 1677.410: 71.1025% ( 631) 00:44:41.542 1677.410 - 1685.211: 71.8674% ( 622) 00:44:41.542 1685.211 - 1693.013: 72.7824% ( 744) 00:44:41.542 1693.013 - 1700.815: 73.6715% ( 723) 00:44:41.542 1700.815 - 1708.617: 74.5213% ( 691) 00:44:41.542 1708.617 - 1716.419: 75.2666% ( 606) 00:44:41.542 1716.419 - 1724.221: 75.9872% ( 586) 00:44:41.542 1724.221 - 1732.023: 76.7116% ( 589) 00:44:41.542 1732.023 - 1739.825: 77.4457% ( 597) 00:44:41.542 1739.825 - 
1747.627: 78.1947% ( 609) 00:44:41.542 1747.627 - 1755.429: 78.8858% ( 562) 00:44:41.542 1755.429 - 1763.230: 79.5954% ( 577) 00:44:41.542 1763.230 - 1771.032: 80.3161% ( 586) 00:44:41.542 1771.032 - 1778.834: 80.9568% ( 521) 00:44:41.542 1778.834 - 1786.636: 81.6639% ( 575) 00:44:41.542 1786.636 - 1794.438: 82.4891% ( 671) 00:44:41.542 1794.438 - 1802.240: 83.2971% ( 657) 00:44:41.542 1802.240 - 1810.042: 84.0116% ( 581) 00:44:41.542 1810.042 - 1817.844: 84.6117% ( 488) 00:44:41.542 1817.844 - 1825.646: 85.2598% ( 527) 00:44:41.542 1825.646 - 1833.448: 85.8120% ( 449) 00:44:41.542 1833.448 - 1841.250: 86.3912% ( 471) 00:44:41.542 1841.250 - 1849.051: 87.0085% ( 502) 00:44:41.542 1849.051 - 1856.853: 87.5521% ( 442) 00:44:41.542 1856.853 - 1864.655: 88.1252% ( 466) 00:44:41.542 1864.655 - 1872.457: 88.6503% ( 427) 00:44:41.542 1872.457 - 1880.259: 89.1558% ( 411) 00:44:41.542 1880.259 - 1888.061: 89.6046% ( 365) 00:44:41.542 1888.061 - 1895.863: 90.0326% ( 348) 00:44:41.542 1895.863 - 1903.665: 90.4409% ( 332) 00:44:41.542 1903.665 - 1911.467: 90.7889% ( 283) 00:44:41.542 1911.467 - 1919.269: 91.1419% ( 287) 00:44:41.542 1919.269 - 1927.070: 91.4345% ( 238) 00:44:41.542 1927.070 - 1934.872: 91.7826% ( 283) 00:44:41.542 1934.872 - 1942.674: 91.9941% ( 172) 00:44:41.542 1942.674 - 1950.476: 92.2339% ( 195) 00:44:41.542 1950.476 - 1958.278: 92.5069% ( 222) 00:44:41.542 1958.278 - 1966.080: 92.7762% ( 219) 00:44:41.542 1966.080 - 1973.882: 93.0197% ( 198) 00:44:41.542 1973.882 - 1981.684: 93.3038% ( 231) 00:44:41.542 1981.684 - 1989.486: 93.5412% ( 193) 00:44:41.542 1989.486 - 1997.288: 93.7392% ( 161) 00:44:41.542 1997.288 - 2012.891: 94.1499% ( 334) 00:44:41.542 2012.891 - 2028.495: 94.5164% ( 298) 00:44:41.542 2028.495 - 2044.099: 94.8472% ( 269) 00:44:41.542 2044.099 - 2059.703: 95.1719% ( 264) 00:44:41.542 2059.703 - 2075.307: 95.4695% ( 242) 00:44:41.542 2075.307 - 2090.910: 95.7732% ( 247) 00:44:41.542 2090.910 - 2106.514: 96.0548% ( 229) 00:44:41.542 2106.514 - 2122.118: 96.2959% ( 196) 00:44:41.542 2122.118 - 2137.722: 96.5332% ( 193) 00:44:41.542 2137.722 - 2153.326: 96.7140% ( 147) 00:44:41.542 2153.326 - 2168.930: 96.8960% ( 148) 00:44:41.542 2168.930 - 2184.533: 97.0694% ( 141) 00:44:41.542 2184.533 - 2200.137: 97.2158% ( 119) 00:44:41.543 2200.137 - 2215.741: 97.3916% ( 143) 00:44:41.543 2215.741 - 2231.345: 97.5011% ( 89) 00:44:41.543 2231.345 - 2246.949: 97.5835% ( 67) 00:44:41.543 2246.949 - 2262.552: 97.6560% ( 59) 00:44:41.543 2262.552 - 2278.156: 97.7200% ( 52) 00:44:41.543 2278.156 - 2293.760: 97.7753% ( 45) 00:44:41.543 2293.760 - 2309.364: 97.8307% ( 45) 00:44:41.543 2309.364 - 2324.968: 97.8737% ( 35) 00:44:41.543 2324.968 - 2340.571: 97.9167% ( 35) 00:44:41.543 2340.571 - 2356.175: 97.9487% ( 26) 00:44:41.543 2356.175 - 2371.779: 97.9954% ( 38) 00:44:41.543 2371.779 - 2387.383: 98.0385% ( 35) 00:44:41.543 2387.383 - 2402.987: 98.0754% ( 30) 00:44:41.543 2402.987 - 2418.590: 98.1098% ( 28) 00:44:41.543 2418.590 - 2434.194: 98.1479% ( 31) 00:44:41.543 2434.194 - 2449.798: 98.1848% ( 30) 00:44:41.543 2449.798 - 2465.402: 98.2279% ( 35) 00:44:41.543 2465.402 - 2481.006: 98.2623% ( 28) 00:44:41.543 2481.006 - 2496.610: 98.2992% ( 30) 00:44:41.543 2496.610 - 2512.213: 98.3361% ( 30) 00:44:41.543 2512.213 - 2527.817: 98.3742% ( 31) 00:44:41.543 2527.817 - 2543.421: 98.4111% ( 30) 00:44:41.543 2543.421 - 2559.025: 98.4554% ( 36) 00:44:41.543 2559.025 - 2574.629: 98.4874% ( 26) 00:44:41.543 2574.629 - 2590.232: 98.5181% ( 25) 00:44:41.543 2590.232 - 2605.836: 98.5439% ( 21) 
00:44:41.543 2605.836 - 2621.440: 98.5698% ( 21) 00:44:41.543 2621.440 - 2637.044: 98.5907% ( 17) 00:44:41.543 2637.044 - 2652.648: 98.6128% ( 18) 00:44:41.543 2652.648 - 2668.251: 98.6325% ( 16) 00:44:41.543 2668.251 - 2683.855: 98.6472% ( 12) 00:44:41.543 2683.855 - 2699.459: 98.6657% ( 15) 00:44:41.543 2699.459 - 2715.063: 98.6817% ( 13) 00:44:41.543 2715.063 - 2730.667: 98.7013% ( 16) 00:44:41.543 2730.667 - 2746.270: 98.7223% ( 17) 00:44:41.543 2746.270 - 2761.874: 98.7333% ( 9) 00:44:41.543 2761.874 - 2777.478: 98.7555% ( 18) 00:44:41.543 2777.478 - 2793.082: 98.7751% ( 16) 00:44:41.543 2793.082 - 2808.686: 98.7911% ( 13) 00:44:41.543 2808.686 - 2824.290: 98.8120% ( 17) 00:44:41.543 2824.290 - 2839.893: 98.8268% ( 12) 00:44:41.543 2839.893 - 2855.497: 98.8428% ( 13) 00:44:41.543 2855.497 - 2871.101: 98.8588% ( 13) 00:44:41.543 2871.101 - 2886.705: 98.8723% ( 11) 00:44:41.543 2886.705 - 2902.309: 98.8883% ( 13) 00:44:41.543 2902.309 - 2917.912: 98.9055% ( 14) 00:44:41.543 2917.912 - 2933.516: 98.9252% ( 16) 00:44:41.543 2933.516 - 2949.120: 98.9387% ( 11) 00:44:41.543 2949.120 - 2964.724: 98.9694% ( 25) 00:44:41.543 2964.724 - 2980.328: 99.0014% ( 26) 00:44:41.543 2980.328 - 2995.931: 99.0260% ( 20) 00:44:41.543 2995.931 - 3011.535: 99.0481% ( 18) 00:44:41.543 3011.535 - 3027.139: 99.0826% ( 28) 00:44:41.543 3027.139 - 3042.743: 99.1035% ( 17) 00:44:41.543 3042.743 - 3058.347: 99.1207% ( 14) 00:44:41.543 3058.347 - 3073.950: 99.1404% ( 16) 00:44:41.543 3073.950 - 3089.554: 99.1588% ( 15) 00:44:41.543 3089.554 - 3105.158: 99.1736% ( 12) 00:44:41.543 3105.158 - 3120.762: 99.1945% ( 17) 00:44:41.543 3120.762 - 3136.366: 99.2105% ( 13) 00:44:41.543 3136.366 - 3151.970: 99.2215% ( 9) 00:44:41.543 3151.970 - 3167.573: 99.2449% ( 19) 00:44:41.543 3167.573 - 3183.177: 99.2609% ( 13) 00:44:41.543 3183.177 - 3198.781: 99.2695% ( 7) 00:44:41.543 3198.781 - 3214.385: 99.2818% ( 10) 00:44:41.543 3214.385 - 3229.989: 99.2929% ( 9) 00:44:41.543 3229.989 - 3245.592: 99.3003% ( 6) 00:44:41.543 3245.592 - 3261.196: 99.3101% ( 8) 00:44:41.543 3261.196 - 3276.800: 99.3212% ( 9) 00:44:41.543 3276.800 - 3292.404: 99.3335% ( 10) 00:44:41.543 3292.404 - 3308.008: 99.3408% ( 6) 00:44:41.543 3308.008 - 3323.611: 99.3470% ( 5) 00:44:41.543 3323.611 - 3339.215: 99.3568% ( 8) 00:44:41.543 3339.215 - 3354.819: 99.3617% ( 4) 00:44:41.543 3354.819 - 3370.423: 99.3703% ( 7) 00:44:41.543 3370.423 - 3386.027: 99.3740% ( 3) 00:44:41.543 3386.027 - 3401.630: 99.3802% ( 5) 00:44:41.543 3401.630 - 3417.234: 99.3888% ( 7) 00:44:41.543 3417.234 - 3432.838: 99.3999% ( 9) 00:44:41.543 3432.838 - 3448.442: 99.4048% ( 4) 00:44:41.543 3448.442 - 3464.046: 99.4122% ( 6) 00:44:41.543 3464.046 - 3479.650: 99.4220% ( 8) 00:44:41.543 3479.650 - 3495.253: 99.4294% ( 6) 00:44:41.543 3495.253 - 3510.857: 99.4380% ( 7) 00:44:41.543 3510.857 - 3526.461: 99.4466% ( 7) 00:44:41.543 3526.461 - 3542.065: 99.4527% ( 5) 00:44:41.543 3542.065 - 3557.669: 99.4589% ( 5) 00:44:41.543 3557.669 - 3573.272: 99.4687% ( 8) 00:44:41.543 3573.272 - 3588.876: 99.4737% ( 4) 00:44:41.543 3588.876 - 3604.480: 99.4786% ( 4) 00:44:41.543 3604.480 - 3620.084: 99.4823% ( 3) 00:44:41.543 3620.084 - 3635.688: 99.4884% ( 5) 00:44:41.543 3635.688 - 3651.291: 99.4970% ( 7) 00:44:41.543 3651.291 - 3666.895: 99.5007% ( 3) 00:44:41.543 3666.895 - 3682.499: 99.5081% ( 6) 00:44:41.543 3682.499 - 3698.103: 99.5155% ( 6) 00:44:41.543 3698.103 - 3713.707: 99.5192% ( 3) 00:44:41.543 3713.707 - 3729.310: 99.5228% ( 3) 00:44:41.543 3729.310 - 3744.914: 99.5290% ( 5) 00:44:41.543 
3744.914 - 3760.518: 99.5388% ( 8) 00:44:41.543 3760.518 - 3776.122: 99.5450% ( 5) 00:44:41.543 3776.122 - 3791.726: 99.5487% ( 3) 00:44:41.543 3791.726 - 3807.330: 99.5560% ( 6) 00:44:41.543 3807.330 - 3822.933: 99.5610% ( 4) 00:44:41.543 3822.933 - 3838.537: 99.5659% ( 4) 00:44:41.543 3838.537 - 3854.141: 99.5720% ( 5) 00:44:41.543 3854.141 - 3869.745: 99.5770% ( 4) 00:44:41.543 3869.745 - 3885.349: 99.5819% ( 4) 00:44:41.543 3885.349 - 3900.952: 99.5856% ( 3) 00:44:41.543 3900.952 - 3916.556: 99.5880% ( 2) 00:44:41.543 3916.556 - 3932.160: 99.5966% ( 7) 00:44:41.543 3932.160 - 3947.764: 99.5991% ( 2) 00:44:41.543 3947.764 - 3963.368: 99.6028% ( 3) 00:44:41.543 3963.368 - 3978.971: 99.6040% ( 1) 00:44:41.543 3978.971 - 3994.575: 99.6102% ( 5) 00:44:41.543 3994.575 - 4025.783: 99.6163% ( 5) 00:44:41.543 4025.783 - 4056.990: 99.6212% ( 4) 00:44:41.543 4056.990 - 4088.198: 99.6458% ( 20) 00:44:41.543 4088.198 - 4119.406: 99.6606% ( 12) 00:44:41.543 4119.406 - 4150.613: 99.6692% ( 7) 00:44:41.543 4150.613 - 4181.821: 99.6901% ( 17) 00:44:41.543 4181.821 - 4213.029: 99.7061% ( 13) 00:44:41.543 4213.029 - 4244.236: 99.7147% ( 7) 00:44:41.543 4244.236 - 4275.444: 99.7221% ( 6) 00:44:41.543 4275.444 - 4306.651: 99.7307% ( 7) 00:44:41.543 4306.651 - 4337.859: 99.7381% ( 6) 00:44:41.543 4337.859 - 4369.067: 99.7442% ( 5) 00:44:41.543 4369.067 - 4400.274: 99.7516% ( 6) 00:44:41.543 4400.274 - 4431.482: 99.7602% ( 7) 00:44:41.543 4431.482 - 4462.690: 99.7713% ( 9) 00:44:41.543 4462.690 - 4493.897: 99.7786% ( 6) 00:44:41.543 4493.897 - 4525.105: 99.7836% ( 4) 00:44:41.543 4525.105 - 4556.312: 99.7872% ( 3) 00:44:41.543 4556.312 - 4587.520: 99.7909% ( 3) 00:44:41.543 4587.520 - 4618.728: 99.7959% ( 4) 00:44:41.543 4618.728 - 4649.935: 99.8020% ( 5) 00:44:41.543 4649.935 - 4681.143: 99.8057% ( 3) 00:44:41.543 4681.143 - 4712.350: 99.8106% ( 4) 00:44:41.543 4712.350 - 4743.558: 99.8131% ( 2) 00:44:41.543 4743.558 - 4774.766: 99.8168% ( 3) 00:44:41.543 4774.766 - 4805.973: 99.8192% ( 2) 00:44:41.543 4805.973 - 4837.181: 99.8229% ( 3) 00:44:41.543 4837.181 - 4868.389: 99.8241% ( 1) 00:44:41.543 4868.389 - 4899.596: 99.8266% ( 2) 00:44:41.543 4899.596 - 4930.804: 99.8291% ( 2) 00:44:41.543 4930.804 - 4962.011: 99.8315% ( 2) 00:44:41.543 4962.011 - 4993.219: 99.8364% ( 4) 00:44:41.543 4993.219 - 5024.427: 99.8377% ( 1) 00:44:41.543 5024.427 - 5055.634: 99.8414% ( 3) 00:44:41.543 5055.634 - 5086.842: 99.8438% ( 2) 00:44:41.543 5086.842 - 5118.050: 99.8475% ( 3) 00:44:41.543 5118.050 - 5149.257: 99.8537% ( 5) 00:44:41.543 5149.257 - 5180.465: 99.8573% ( 3) 00:44:41.543 5180.465 - 5211.672: 99.8598% ( 2) 00:44:41.543 5211.672 - 5242.880: 99.8635% ( 3) 00:44:41.543 5242.880 - 5274.088: 99.8647% ( 1) 00:44:41.543 5274.088 - 5305.295: 99.8660% ( 1) 00:44:41.543 5305.295 - 5336.503: 99.8844% ( 15) 00:44:41.543 5336.503 - 5367.710: 99.8869% ( 2) 00:44:41.543 5367.710 - 5398.918: 99.8905% ( 3) 00:44:41.543 5398.918 - 5430.126: 99.8955% ( 4) 00:44:41.543 5430.126 - 5461.333: 99.8979% ( 2) 00:44:41.543 5461.333 - 5492.541: 99.9004% ( 2) 00:44:41.543 5492.541 - 5523.749: 99.9065% ( 5) 00:44:41.543 5523.749 - 5554.956: 99.9090% ( 2) 00:44:41.543 5554.956 - 5586.164: 99.9115% ( 2) 00:44:41.543 5586.164 - 5617.371: 99.9127% ( 1) 00:44:41.543 5617.371 - 5648.579: 99.9164% ( 3) 00:44:41.543 5648.579 - 5679.787: 99.9188% ( 2) 00:44:41.543 5679.787 - 5710.994: 99.9213% ( 2) 00:44:41.543 5710.994 - 5742.202: 99.9238% ( 2) 00:44:41.543 5742.202 - 5773.410: 99.9274% ( 3) 00:44:41.543 5773.410 - 5804.617: 99.9299% ( 2) 
00:44:41.543 5804.617 - 5835.825: 99.9324% ( 2) 00:44:41.543 5835.825 - 5867.032: 99.9348% ( 2) 00:44:41.543 5867.032 - 5898.240: 99.9373% ( 2) 00:44:41.543 5898.240 - 5929.448: 99.9422% ( 4) 00:44:41.543 5929.448 - 5960.655: 99.9447% ( 2) 00:44:41.543 5960.655 - 5991.863: 99.9459% ( 1) 00:44:41.543 5991.863 - 6023.070: 99.9471% ( 1) 00:44:41.543 6023.070 - 6054.278: 99.9483% ( 1) 00:44:41.543 6054.278 - 6085.486: 99.9508% ( 2) 00:44:41.543 6085.486 - 6116.693: 99.9533% ( 2) 00:44:41.543 6116.693 - 6147.901: 99.9545% ( 1) 00:44:41.543 6147.901 - 6179.109: 99.9570% ( 2) 00:44:41.544 6179.109 - 6210.316: 99.9594% ( 2) 00:44:41.544 6210.316 - 6241.524: 99.9606% ( 1) 00:44:41.544 6241.524 - 6272.731: 99.9619% ( 1) 00:44:41.544 6272.731 - 6303.939: 99.9631% ( 1) 00:44:41.544 6303.939 - 6335.147: 99.9643% ( 1) 00:44:41.544 6335.147 - 6366.354: 99.9668% ( 2) 00:44:41.544 6366.354 - 6397.562: 99.9680% ( 1) 00:44:41.544 6397.562 - 6428.770: 99.9717% ( 3) 00:44:41.544 6428.770 - 6459.977: 99.9729% ( 1) 00:44:41.544 6459.977 - 6491.185: 99.9742% ( 1) 00:44:41.544 6491.185 - 6522.392: 99.9766% ( 2) 00:44:41.544 6522.392 - 6553.600: 99.9791% ( 2) 00:44:41.544 6553.600 - 6584.808: 99.9816% ( 2) 00:44:41.544 6584.808 - 6616.015: 99.9840% ( 2) 00:44:41.544 6616.015 - 6647.223: 99.9865% ( 2) 00:44:41.544 6647.223 - 6678.430: 99.9877% ( 1) 00:44:41.544 6678.430 - 6709.638: 99.9889% ( 1) 00:44:41.544 6709.638 - 6740.846: 99.9902% ( 1) 00:44:41.544 6740.846 - 6772.053: 99.9914% ( 1) 00:44:41.544 6772.053 - 6803.261: 99.9926% ( 1) 00:44:41.544 7052.922 - 7084.130: 99.9951% ( 2) 00:44:41.544 8051.566 - 8113.981: 99.9975% ( 2) 00:44:41.544 8800.549 - 8862.964: 99.9988% ( 1) 00:44:41.544 8987.794 - 9050.210: 100.0000% ( 1) 00:44:41.544 00:44:41.544 01:11:04 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:44:41.544 00:44:41.544 real 0m2.747s 00:44:41.544 user 0m2.270s 00:44:41.544 sys 0m0.339s 00:44:41.544 01:11:04 nvme.nvme_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:41.544 01:11:04 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:44:41.544 ************************************ 00:44:41.544 END TEST nvme_perf 00:44:41.544 ************************************ 00:44:41.544 01:11:04 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:44:41.544 01:11:04 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:44:41.544 01:11:04 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:41.544 01:11:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:41.802 ************************************ 00:44:41.802 START TEST nvme_hello_world 00:44:41.802 ************************************ 00:44:41.802 01:11:04 nvme.nvme_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:44:42.061 Initializing NVMe Controllers 00:44:42.061 Attached to 0000:00:10.0 00:44:42.061 Namespace ID: 1 size: 5GB 00:44:42.061 Initialization complete. 00:44:42.061 INFO: using host memory buffer for IO 00:44:42.061 Hello world! 
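The "Attached to 0000:00:10.0", "Namespace ID: 1 size: 5GB" and "Hello world!" lines above come from the hello_world example, which probes local PCIe NVMe controllers, attaches to each one it finds, selects namespace 1 and performs a single write/read pair. Below is a minimal sketch of the probe/attach half of that flow using the public SPDK API; it is not the example's exact source, the program name and printf wording are illustrative, and error handling and the subsequent I/O are omitted.

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static struct spdk_nvme_ns *g_ns;

    static bool probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
                         struct spdk_nvme_ctrlr_opts *opts)
    {
        /* Returning true tells the driver to attach to this controller. */
        return true;
    }

    static void attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
                          struct spdk_nvme_ctrlr *ctrlr,
                          const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attached to %s\n", trid->traddr);
        /* Namespace IDs start at 1; the log above only shows NSID 1 in use. */
        g_ns = spdk_nvme_ctrlr_get_ns(ctrlr, 1);
        printf("Namespace ID: 1 size: %lluGB\n",
               (unsigned long long)(spdk_nvme_ns_get_size(g_ns) / 1000000000ULL));
    }

    int main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "hello_sketch";   /* illustrative name */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }
        /* NULL transport ID: enumerate local PCIe controllers; the callbacks above
         * decide what to attach to and remember the namespace for later I/O. */
        if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
            return 1;
        }
        return g_ns != NULL ? 0 : 1;
    }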
00:44:42.061 00:44:42.061 real 0m0.382s 00:44:42.061 user 0m0.147s 00:44:42.061 sys 0m0.167s 00:44:42.061 01:11:04 nvme.nvme_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:42.061 ************************************ 00:44:42.061 END TEST nvme_hello_world 00:44:42.061 ************************************ 00:44:42.061 01:11:04 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:44:42.061 01:11:04 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:44:42.061 01:11:04 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:42.061 01:11:04 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:42.061 01:11:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:42.061 ************************************ 00:44:42.061 START TEST nvme_sgl 00:44:42.061 ************************************ 00:44:42.061 01:11:04 nvme.nvme_sgl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:44:42.319 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:44:42.319 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:44:42.319 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:44:42.577 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:44:42.577 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:44:42.577 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:44:42.577 NVMe Readv/Writev Request test 00:44:42.577 Attached to 0000:00:10.0 00:44:42.577 0000:00:10.0: build_io_request_2 test passed 00:44:42.577 0000:00:10.0: build_io_request_4 test passed 00:44:42.577 0000:00:10.0: build_io_request_5 test passed 00:44:42.577 0000:00:10.0: build_io_request_6 test passed 00:44:42.577 0000:00:10.0: build_io_request_7 test passed 00:44:42.577 0000:00:10.0: build_io_request_10 test passed 00:44:42.577 Cleaning up... 00:44:42.577 00:44:42.577 real 0m0.419s 00:44:42.577 user 0m0.180s 00:44:42.577 sys 0m0.171s 00:44:42.577 01:11:05 nvme.nvme_sgl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:42.577 01:11:05 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:44:42.577 ************************************ 00:44:42.577 END TEST nvme_sgl 00:44:42.577 ************************************ 00:44:42.577 01:11:05 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:44:42.577 01:11:05 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:42.577 01:11:05 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:42.577 01:11:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:42.577 ************************************ 00:44:42.577 START TEST nvme_e2edp 00:44:42.577 ************************************ 00:44:42.577 01:11:05 nvme.nvme_e2edp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:44:42.837 NVMe Write/Read with End-to-End data protection test 00:44:42.837 Attached to 0000:00:10.0 00:44:42.837 Cleaning up... 
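The sgl readv/writev test above (build_io_request_0 through _11) hands the driver scattered buffer segments instead of one flat payload; the requests logged as "Invalid IO length parameter" are the ones whose segment lengths deliberately do not add up to a whole number of sectors, and the test expects those to be rejected. The driver-side interface for this is the vectored command path, in which the application supplies two callbacks that walk its segment list. A rough sketch follows, under the assumption that the segments live in a plain iovec array; sgl_ctx and the function names are illustrative, not the test's own code.

    #include <stdint.h>
    #include <sys/uio.h>
    #include "spdk/nvme.h"

    struct sgl_ctx {
        struct iovec *iov;      /* application-owned segment list */
        int           iovcnt;
        int           idx;      /* segment the driver will be handed next */
        uint32_t      offset;   /* byte offset into that segment */
    };

    static void reset_sgl(void *arg, uint32_t sgl_offset)
    {
        struct sgl_ctx *ctx = arg;

        /* Simplification: assume sgl_offset falls inside the first segment. */
        ctx->idx = 0;
        ctx->offset = sgl_offset;
    }

    static int next_sge(void *arg, void **address, uint32_t *length)
    {
        struct sgl_ctx *ctx = arg;

        *address = (uint8_t *)ctx->iov[ctx->idx].iov_base + ctx->offset;
        *length = (uint32_t)ctx->iov[ctx->idx].iov_len - ctx->offset;
        ctx->offset = 0;
        ctx->idx++;
        return 0;
    }

    static int writev_sketch(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                             struct sgl_ctx *ctx, uint64_t lba, uint32_t lba_count,
                             spdk_nvme_cmd_cb cb_fn)
    {
        /* If the total of the segment lengths does not match lba_count times the
         * sector size, the expectation in the test above is that this call fails,
         * producing the "Invalid IO length parameter" messages. */
        return spdk_nvme_ns_cmd_writev(ns, qpair, lba, lba_count,
                                       cb_fn, ctx, 0, reset_sgl, next_sge);
    }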
00:44:43.096 00:44:43.096 real 0m0.389s 00:44:43.096 user 0m0.137s 00:44:43.096 sys 0m0.174s 00:44:43.096 01:11:05 nvme.nvme_e2edp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:43.096 ************************************ 00:44:43.096 END TEST nvme_e2edp 00:44:43.096 01:11:05 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:44:43.096 ************************************ 00:44:43.096 01:11:05 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:44:43.096 01:11:05 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:43.096 01:11:05 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:43.096 01:11:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:43.096 ************************************ 00:44:43.096 START TEST nvme_reserve 00:44:43.096 ************************************ 00:44:43.096 01:11:05 nvme.nvme_reserve -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:44:43.355 ===================================================== 00:44:43.355 NVMe Controller at PCI bus 0, device 16, function 0 00:44:43.355 ===================================================== 00:44:43.355 Reservations: Not Supported 00:44:43.355 Reservation test passed 00:44:43.355 00:44:43.355 real 0m0.359s 00:44:43.355 user 0m0.121s 00:44:43.355 sys 0m0.172s 00:44:43.355 01:11:05 nvme.nvme_reserve -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:43.355 01:11:05 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:44:43.355 ************************************ 00:44:43.355 END TEST nvme_reserve 00:44:43.355 ************************************ 00:44:43.355 01:11:05 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:44:43.355 01:11:05 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:43.355 01:11:05 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:43.355 01:11:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:43.355 ************************************ 00:44:43.355 START TEST nvme_err_injection 00:44:43.355 ************************************ 00:44:43.355 01:11:05 nvme.nvme_err_injection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:44:43.923 NVMe Error Injection test 00:44:43.923 Attached to 0000:00:10.0 00:44:43.923 0000:00:10.0: get features failed as expected 00:44:43.923 0000:00:10.0: get features successfully as expected 00:44:43.923 0000:00:10.0: read failed as expected 00:44:43.923 0000:00:10.0: read successfully as expected 00:44:43.923 Cleaning up... 
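The err_injection output just above alternates "failed as expected" and "successfully as expected" because the test arms a software error-injection hook, issues a command, confirms the injected failure, then removes the hook and repeats the command. The driver exposes this through its command error-injection calls, sketched below. The function and constant names are SPDK identifiers as I recall them, but the exact parameter order should be checked against the installed spdk/nvme.h, and inject_and_clear is an illustrative helper rather than the test's code.

    #include <stddef.h>
    #include "spdk/nvme.h"

    static int inject_and_clear(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
    {
        int rc;

        /* Make the next GET FEATURES admin command (qpair == NULL) complete with an error. */
        rc = spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL, SPDK_NVME_OPC_GET_FEATURES,
                                                     false, 0, 1,
                                                     SPDK_NVME_SCT_GENERIC,
                                                     SPDK_NVME_SC_INVALID_FIELD);
        if (rc != 0) {
            return rc;
        }

        /* Make the next READ on this I/O qpair fail the same way. */
        rc = spdk_nvme_qpair_add_cmd_error_injection(ctrlr, qpair, SPDK_NVME_OPC_READ,
                                                     false, 0, 1,
                                                     SPDK_NVME_SCT_GENERIC,
                                                     SPDK_NVME_SC_INVALID_FIELD);
        if (rc != 0) {
            return rc;
        }

        /* ...issue the commands here and observe the injected failures, then remove
         * the hooks so the follow-up "successfully as expected" commands go through. */
        spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, NULL, SPDK_NVME_OPC_GET_FEATURES);
        spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, qpair, SPDK_NVME_OPC_READ);
        return 0;
    }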
00:44:43.923 00:44:43.923 real 0m0.417s 00:44:43.923 user 0m0.118s 00:44:43.923 sys 0m0.198s 00:44:43.923 01:11:06 nvme.nvme_err_injection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:43.923 01:11:06 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:44:43.923 ************************************ 00:44:43.923 END TEST nvme_err_injection 00:44:43.923 ************************************ 00:44:43.923 01:11:06 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:44:43.923 01:11:06 nvme -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:44:43.923 01:11:06 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:43.923 01:11:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:43.923 ************************************ 00:44:43.923 START TEST nvme_overhead 00:44:43.923 ************************************ 00:44:43.923 01:11:06 nvme.nvme_overhead -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:44:45.302 Initializing NVMe Controllers 00:44:45.302 Attached to 0000:00:10.0 00:44:45.302 Initialization complete. Launching workers. 00:44:45.302 submit (in ns) avg, min, max = 14067.7, 11725.7, 693703.8 00:44:45.302 complete (in ns) avg, min, max = 9574.3, 7720.0, 936528.6 00:44:45.302 00:44:45.302 Submit histogram 00:44:45.302 ================ 00:44:45.302 Range in us Cumulative Count 00:44:45.302 11.703 - 11.764: 0.0400% ( 3) 00:44:45.302 11.764 - 11.825: 0.0932% ( 4) 00:44:45.302 11.825 - 11.886: 0.3330% ( 18) 00:44:45.302 11.886 - 11.947: 1.3586% ( 77) 00:44:45.302 11.947 - 12.008: 3.6361% ( 171) 00:44:45.302 12.008 - 12.069: 7.9782% ( 326) 00:44:45.302 12.069 - 12.130: 14.0117% ( 453) 00:44:45.302 12.130 - 12.190: 21.5637% ( 567) 00:44:45.302 12.190 - 12.251: 27.9435% ( 479) 00:44:45.302 12.251 - 12.312: 32.9782% ( 378) 00:44:45.302 12.312 - 12.373: 36.9606% ( 299) 00:44:45.302 12.373 - 12.434: 41.2893% ( 325) 00:44:45.302 12.434 - 12.495: 46.4172% ( 385) 00:44:45.302 12.495 - 12.556: 52.4907% ( 456) 00:44:45.302 12.556 - 12.617: 58.4044% ( 444) 00:44:45.302 12.617 - 12.678: 63.4390% ( 378) 00:44:45.302 12.678 - 12.739: 66.8087% ( 253) 00:44:45.302 12.739 - 12.800: 69.3394% ( 190) 00:44:45.302 12.800 - 12.861: 70.8444% ( 113) 00:44:45.302 12.861 - 12.922: 72.1897% ( 101) 00:44:45.302 12.922 - 12.983: 73.0021% ( 61) 00:44:45.302 12.983 - 13.044: 73.5615% ( 42) 00:44:45.302 13.044 - 13.105: 73.8945% ( 25) 00:44:45.302 13.105 - 13.166: 74.1076% ( 16) 00:44:45.302 13.166 - 13.227: 74.2808% ( 13) 00:44:45.302 13.227 - 13.288: 74.3474% ( 5) 00:44:45.302 13.288 - 13.349: 74.3873% ( 3) 00:44:45.302 13.349 - 13.410: 74.4539% ( 5) 00:44:45.302 13.410 - 13.470: 74.6271% ( 13) 00:44:45.302 13.470 - 13.531: 74.9068% ( 21) 00:44:45.302 13.531 - 13.592: 75.0799% ( 13) 00:44:45.302 13.592 - 13.653: 75.2397% ( 12) 00:44:45.302 13.653 - 13.714: 75.3063% ( 5) 00:44:45.302 13.714 - 13.775: 75.3596% ( 4) 00:44:45.302 13.775 - 13.836: 75.3996% ( 3) 00:44:45.302 13.836 - 13.897: 75.4928% ( 7) 00:44:45.302 13.897 - 13.958: 75.5727% ( 6) 00:44:45.302 13.958 - 14.019: 75.6393% ( 5) 00:44:45.302 14.019 - 14.080: 75.6926% ( 4) 00:44:45.302 14.080 - 14.141: 75.7192% ( 2) 00:44:45.302 14.141 - 14.202: 75.7459% ( 2) 00:44:45.302 14.202 - 14.263: 75.7725% ( 2) 00:44:45.302 14.324 - 14.385: 75.7858% ( 1) 00:44:45.302 14.507 - 14.568: 75.7991% ( 1) 00:44:45.302 14.629 - 14.690: 75.8125% ( 1) 00:44:45.302 14.690 - 14.750: 
75.8391% ( 2) 00:44:45.302 14.994 - 15.055: 75.8524% ( 1) 00:44:45.302 15.055 - 15.116: 75.8657% ( 1) 00:44:45.302 15.177 - 15.238: 75.8791% ( 1) 00:44:45.302 15.238 - 15.299: 75.9057% ( 2) 00:44:45.302 15.299 - 15.360: 75.9190% ( 1) 00:44:45.302 15.360 - 15.421: 75.9323% ( 1) 00:44:45.302 15.421 - 15.482: 75.9457% ( 1) 00:44:45.302 15.604 - 15.726: 75.9723% ( 2) 00:44:45.302 15.726 - 15.848: 75.9989% ( 2) 00:44:45.302 15.848 - 15.970: 76.0256% ( 2) 00:44:45.302 15.970 - 16.091: 76.0522% ( 2) 00:44:45.302 16.091 - 16.213: 76.0655% ( 1) 00:44:45.302 16.213 - 16.335: 76.0788% ( 1) 00:44:45.302 16.457 - 16.579: 76.0922% ( 1) 00:44:45.302 16.579 - 16.701: 76.1055% ( 1) 00:44:45.302 16.823 - 16.945: 76.1188% ( 1) 00:44:45.302 16.945 - 17.067: 76.1854% ( 5) 00:44:45.302 17.067 - 17.189: 76.2387% ( 4) 00:44:45.302 17.310 - 17.432: 76.2920% ( 4) 00:44:45.302 17.432 - 17.554: 76.3186% ( 2) 00:44:45.302 17.554 - 17.676: 76.3319% ( 1) 00:44:45.302 17.676 - 17.798: 76.3985% ( 5) 00:44:45.302 17.798 - 17.920: 77.4907% ( 82) 00:44:45.302 17.920 - 18.042: 79.8215% ( 175) 00:44:45.302 18.042 - 18.164: 81.9792% ( 162) 00:44:45.302 18.164 - 18.286: 83.4310% ( 109) 00:44:45.302 18.286 - 18.408: 84.3900% ( 72) 00:44:45.302 18.408 - 18.530: 85.0826% ( 52) 00:44:45.302 18.530 - 18.651: 86.9073% ( 137) 00:44:45.302 18.651 - 18.773: 90.5701% ( 275) 00:44:45.302 18.773 - 18.895: 93.7533% ( 239) 00:44:45.302 18.895 - 19.017: 95.6180% ( 140) 00:44:45.302 19.017 - 19.139: 96.9499% ( 100) 00:44:45.302 19.139 - 19.261: 97.6292% ( 51) 00:44:45.302 19.261 - 19.383: 98.0021% ( 28) 00:44:45.302 19.383 - 19.505: 98.3884% ( 29) 00:44:45.302 19.505 - 19.627: 98.6414% ( 19) 00:44:45.302 19.627 - 19.749: 98.9345% ( 22) 00:44:45.302 19.749 - 19.870: 99.1476% ( 16) 00:44:45.302 19.870 - 19.992: 99.2142% ( 5) 00:44:45.302 19.992 - 20.114: 99.2541% ( 3) 00:44:45.302 20.236 - 20.358: 99.2674% ( 1) 00:44:45.302 20.480 - 20.602: 99.2808% ( 1) 00:44:45.302 20.724 - 20.846: 99.2941% ( 1) 00:44:45.302 20.968 - 21.090: 99.3207% ( 2) 00:44:45.302 21.090 - 21.211: 99.3340% ( 1) 00:44:45.302 21.211 - 21.333: 99.3607% ( 2) 00:44:45.302 21.455 - 21.577: 99.3740% ( 1) 00:44:45.302 21.577 - 21.699: 99.3873% ( 1) 00:44:45.302 22.065 - 22.187: 99.4006% ( 1) 00:44:45.302 22.796 - 22.918: 99.4273% ( 2) 00:44:45.302 23.284 - 23.406: 99.4406% ( 1) 00:44:45.302 23.528 - 23.650: 99.4539% ( 1) 00:44:45.302 23.650 - 23.771: 99.4672% ( 1) 00:44:45.302 24.137 - 24.259: 99.4939% ( 2) 00:44:45.302 24.259 - 24.381: 99.5205% ( 2) 00:44:45.302 24.381 - 24.503: 99.5338% ( 1) 00:44:45.302 24.503 - 24.625: 99.5471% ( 1) 00:44:45.303 24.747 - 24.869: 99.5738% ( 2) 00:44:45.303 24.869 - 24.990: 99.6004% ( 2) 00:44:45.303 24.990 - 25.112: 99.6271% ( 2) 00:44:45.303 25.356 - 25.478: 99.6404% ( 1) 00:44:45.303 25.478 - 25.600: 99.6537% ( 1) 00:44:45.303 25.600 - 25.722: 99.6803% ( 2) 00:44:45.303 25.722 - 25.844: 99.7203% ( 3) 00:44:45.303 26.088 - 26.210: 99.7336% ( 1) 00:44:45.303 26.210 - 26.331: 99.7469% ( 1) 00:44:45.303 26.941 - 27.063: 99.7736% ( 2) 00:44:45.303 27.063 - 27.185: 99.8135% ( 3) 00:44:45.303 27.185 - 27.307: 99.8269% ( 1) 00:44:45.303 27.550 - 27.672: 99.8402% ( 1) 00:44:45.303 27.794 - 27.916: 99.8535% ( 1) 00:44:45.303 28.160 - 28.282: 99.8801% ( 2) 00:44:45.303 28.891 - 29.013: 99.8934% ( 1) 00:44:45.303 29.989 - 30.110: 99.9068% ( 1) 00:44:45.303 30.598 - 30.720: 99.9334% ( 2) 00:44:45.303 51.688 - 51.931: 99.9467% ( 1) 00:44:45.303 74.118 - 74.606: 99.9600% ( 1) 00:44:45.303 100.937 - 101.425: 99.9734% ( 1) 00:44:45.303 103.863 - 104.350: 
99.9867% ( 1) 00:44:45.303 690.469 - 694.370: 100.0000% ( 1) 00:44:45.303 00:44:45.303 Complete histogram 00:44:45.303 ================== 00:44:45.303 Range in us Cumulative Count 00:44:45.303 7.710 - 7.741: 0.0400% ( 3) 00:44:45.303 7.741 - 7.771: 0.3596% ( 24) 00:44:45.303 7.771 - 7.802: 1.5184% ( 87) 00:44:45.303 7.802 - 7.863: 11.9872% ( 786) 00:44:45.303 7.863 - 7.924: 20.8711% ( 667) 00:44:45.303 7.924 - 7.985: 25.5594% ( 352) 00:44:45.303 7.985 - 8.046: 27.1311% ( 118) 00:44:45.303 8.046 - 8.107: 27.9036% ( 58) 00:44:45.303 8.107 - 8.168: 28.1300% ( 17) 00:44:45.303 8.168 - 8.229: 28.3564% ( 17) 00:44:45.303 8.229 - 8.290: 29.9014% ( 116) 00:44:45.303 8.290 - 8.350: 39.2515% ( 702) 00:44:45.303 8.350 - 8.411: 52.8636% ( 1022) 00:44:45.303 8.411 - 8.472: 60.6819% ( 587) 00:44:45.303 8.472 - 8.533: 67.4214% ( 506) 00:44:45.303 8.533 - 8.594: 70.9909% ( 268) 00:44:45.303 8.594 - 8.655: 72.8556% ( 140) 00:44:45.303 8.655 - 8.716: 73.6414% ( 59) 00:44:45.303 8.716 - 8.777: 74.0543% ( 31) 00:44:45.303 8.777 - 8.838: 74.4806% ( 32) 00:44:45.303 8.838 - 8.899: 74.6537% ( 13) 00:44:45.303 8.899 - 8.960: 74.7336% ( 6) 00:44:45.303 8.960 - 9.021: 74.8668% ( 10) 00:44:45.303 9.021 - 9.082: 74.9600% ( 7) 00:44:45.303 9.082 - 9.143: 75.0533% ( 7) 00:44:45.303 9.143 - 9.204: 75.1598% ( 8) 00:44:45.303 9.204 - 9.265: 75.4662% ( 23) 00:44:45.303 9.265 - 9.326: 75.5727% ( 8) 00:44:45.303 9.326 - 9.387: 75.6393% ( 5) 00:44:45.303 9.387 - 9.448: 75.7059% ( 5) 00:44:45.303 9.448 - 9.509: 75.8125% ( 8) 00:44:45.303 9.509 - 9.570: 75.8258% ( 1) 00:44:45.303 9.570 - 9.630: 75.8524% ( 2) 00:44:45.303 9.630 - 9.691: 75.8924% ( 3) 00:44:45.303 9.691 - 9.752: 75.9057% ( 1) 00:44:45.303 9.752 - 9.813: 75.9190% ( 1) 00:44:45.303 9.813 - 9.874: 75.9323% ( 1) 00:44:45.303 9.874 - 9.935: 75.9457% ( 1) 00:44:45.303 9.935 - 9.996: 75.9590% ( 1) 00:44:45.303 10.118 - 10.179: 75.9723% ( 1) 00:44:45.303 10.301 - 10.362: 75.9856% ( 1) 00:44:45.303 10.423 - 10.484: 75.9989% ( 1) 00:44:45.303 10.850 - 10.910: 76.0256% ( 2) 00:44:45.303 11.337 - 11.398: 76.0522% ( 2) 00:44:45.303 11.398 - 11.459: 76.0788% ( 2) 00:44:45.303 11.459 - 11.520: 76.0922% ( 1) 00:44:45.303 11.703 - 11.764: 76.1055% ( 1) 00:44:45.303 11.825 - 11.886: 76.1188% ( 1) 00:44:45.303 11.886 - 11.947: 76.1321% ( 1) 00:44:45.303 12.190 - 12.251: 76.4651% ( 25) 00:44:45.303 12.251 - 12.312: 78.2765% ( 136) 00:44:45.303 12.312 - 12.373: 82.6718% ( 330) 00:44:45.303 12.373 - 12.434: 86.2280% ( 267) 00:44:45.303 12.434 - 12.495: 88.0794% ( 139) 00:44:45.303 12.495 - 12.556: 88.7853% ( 53) 00:44:45.303 12.556 - 12.617: 89.1582% ( 28) 00:44:45.303 12.617 - 12.678: 89.5445% ( 29) 00:44:45.303 12.678 - 12.739: 90.6633% ( 84) 00:44:45.303 12.739 - 12.800: 93.3671% ( 203) 00:44:45.303 12.800 - 12.861: 95.4449% ( 156) 00:44:45.303 12.861 - 12.922: 96.7102% ( 95) 00:44:45.303 12.922 - 12.983: 97.6292% ( 69) 00:44:45.303 12.983 - 13.044: 97.9489% ( 24) 00:44:45.303 13.044 - 13.105: 98.2019% ( 19) 00:44:45.303 13.105 - 13.166: 98.3085% ( 8) 00:44:45.303 13.166 - 13.227: 98.4550% ( 11) 00:44:45.303 13.227 - 13.288: 98.5216% ( 5) 00:44:45.303 13.288 - 13.349: 98.6148% ( 7) 00:44:45.303 13.349 - 13.410: 98.6281% ( 1) 00:44:45.303 13.410 - 13.470: 98.6548% ( 2) 00:44:45.303 13.470 - 13.531: 98.6947% ( 3) 00:44:45.303 13.531 - 13.592: 98.7080% ( 1) 00:44:45.303 13.592 - 13.653: 98.7746% ( 5) 00:44:45.303 13.653 - 13.714: 98.8279% ( 4) 00:44:45.303 13.714 - 13.775: 98.9078% ( 6) 00:44:45.303 13.775 - 13.836: 98.9212% ( 1) 00:44:45.303 13.836 - 13.897: 99.0277% ( 8) 
00:44:45.303 13.958 - 14.019: 99.0810% ( 4) 00:44:45.303 14.019 - 14.080: 99.1343% ( 4) 00:44:45.303 14.080 - 14.141: 99.1609% ( 2) 00:44:45.303 14.141 - 14.202: 99.1875% ( 2) 00:44:45.303 14.202 - 14.263: 99.2142% ( 2) 00:44:45.303 14.263 - 14.324: 99.2275% ( 1) 00:44:45.303 14.324 - 14.385: 99.2541% ( 2) 00:44:45.303 14.385 - 14.446: 99.2674% ( 1) 00:44:45.303 14.446 - 14.507: 99.2808% ( 1) 00:44:45.303 14.507 - 14.568: 99.2941% ( 1) 00:44:45.303 14.568 - 14.629: 99.3074% ( 1) 00:44:45.303 14.811 - 14.872: 99.3207% ( 1) 00:44:45.303 14.872 - 14.933: 99.3340% ( 1) 00:44:45.303 14.994 - 15.055: 99.3474% ( 1) 00:44:45.303 15.055 - 15.116: 99.3607% ( 1) 00:44:45.303 15.116 - 15.177: 99.3740% ( 1) 00:44:45.303 15.177 - 15.238: 99.3873% ( 1) 00:44:45.303 15.299 - 15.360: 99.4006% ( 1) 00:44:45.303 15.421 - 15.482: 99.4140% ( 1) 00:44:45.303 15.848 - 15.970: 99.4406% ( 2) 00:44:45.303 15.970 - 16.091: 99.4539% ( 1) 00:44:45.303 16.701 - 16.823: 99.4672% ( 1) 00:44:45.303 16.823 - 16.945: 99.4806% ( 1) 00:44:45.303 16.945 - 17.067: 99.4939% ( 1) 00:44:45.303 17.189 - 17.310: 99.5072% ( 1) 00:44:45.303 17.310 - 17.432: 99.5205% ( 1) 00:44:45.303 17.432 - 17.554: 99.5338% ( 1) 00:44:45.303 18.286 - 18.408: 99.5471% ( 1) 00:44:45.303 18.773 - 18.895: 99.5605% ( 1) 00:44:45.303 18.895 - 19.017: 99.5738% ( 1) 00:44:45.303 19.992 - 20.114: 99.5871% ( 1) 00:44:45.303 20.114 - 20.236: 99.6137% ( 2) 00:44:45.303 20.358 - 20.480: 99.6271% ( 1) 00:44:45.303 20.602 - 20.724: 99.6670% ( 3) 00:44:45.303 20.724 - 20.846: 99.6803% ( 1) 00:44:45.303 20.846 - 20.968: 99.7070% ( 2) 00:44:45.303 21.090 - 21.211: 99.7203% ( 1) 00:44:45.303 21.699 - 21.821: 99.7336% ( 1) 00:44:45.303 22.309 - 22.430: 99.7469% ( 1) 00:44:45.303 23.040 - 23.162: 99.7603% ( 1) 00:44:45.303 23.528 - 23.650: 99.7736% ( 1) 00:44:45.303 23.650 - 23.771: 99.7869% ( 1) 00:44:45.303 23.771 - 23.893: 99.8002% ( 1) 00:44:45.303 24.015 - 24.137: 99.8135% ( 1) 00:44:45.303 24.259 - 24.381: 99.8269% ( 1) 00:44:45.303 24.503 - 24.625: 99.8402% ( 1) 00:44:45.303 24.990 - 25.112: 99.8535% ( 1) 00:44:45.303 25.844 - 25.966: 99.8668% ( 1) 00:44:45.303 30.232 - 30.354: 99.8801% ( 1) 00:44:45.303 31.208 - 31.451: 99.9068% ( 2) 00:44:45.303 58.027 - 58.270: 99.9201% ( 1) 00:44:45.303 65.829 - 66.316: 99.9334% ( 1) 00:44:45.303 118.004 - 118.491: 99.9467% ( 1) 00:44:45.303 145.310 - 146.286: 99.9600% ( 1) 00:44:45.303 174.568 - 175.543: 99.9734% ( 1) 00:44:45.303 370.590 - 372.541: 99.9867% ( 1) 00:44:45.303 936.229 - 940.130: 100.0000% ( 1) 00:44:45.303 00:44:45.303 00:44:45.303 real 0m1.378s 00:44:45.303 user 0m1.153s 00:44:45.303 sys 0m0.160s 00:44:45.303 01:11:07 nvme.nvme_overhead -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:45.303 01:11:07 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:44:45.303 ************************************ 00:44:45.303 END TEST nvme_overhead 00:44:45.303 ************************************ 00:44:45.303 01:11:07 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:44:45.303 01:11:07 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:44:45.303 01:11:07 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:45.303 01:11:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:45.303 ************************************ 00:44:45.303 START TEST nvme_arbitration 00:44:45.303 ************************************ 00:44:45.303 01:11:07 nvme.nvme_arbitration -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:44:49.498 Initializing NVMe Controllers 00:44:49.498 Attached to 0000:00:10.0 00:44:49.498 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:44:49.498 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:44:49.498 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:44:49.498 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:44:49.498 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:44:49.498 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:44:49.498 Initialization complete. Launching workers. 00:44:49.498 Starting thread on core 1 with urgent priority queue 00:44:49.498 Starting thread on core 2 with urgent priority queue 00:44:49.498 Starting thread on core 3 with urgent priority queue 00:44:49.498 Starting thread on core 0 with urgent priority queue 00:44:49.498 QEMU NVMe Ctrl (12340 ) core 0: 1152.00 IO/s 86.81 secs/100000 ios 00:44:49.498 QEMU NVMe Ctrl (12340 ) core 1: 1749.33 IO/s 57.16 secs/100000 ios 00:44:49.498 QEMU NVMe Ctrl (12340 ) core 2: 384.00 IO/s 260.42 secs/100000 ios 00:44:49.498 QEMU NVMe Ctrl (12340 ) core 3: 405.33 IO/s 246.71 secs/100000 ios 00:44:49.498 ======================================================== 00:44:49.498 00:44:49.498 00:44:49.498 real 0m3.432s 00:44:49.498 user 0m9.377s 00:44:49.498 sys 0m0.124s 00:44:49.498 01:11:11 nvme.nvme_arbitration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:49.498 01:11:11 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:44:49.498 ************************************ 00:44:49.498 END TEST nvme_arbitration 00:44:49.498 ************************************ 00:44:49.498 01:11:11 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:44:49.498 01:11:11 nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:44:49.498 01:11:11 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:49.498 01:11:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:49.498 ************************************ 00:44:49.498 START TEST nvme_single_aen 00:44:49.498 ************************************ 00:44:49.498 01:11:11 nvme.nvme_single_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:44:49.498 Asynchronous Event Request test 00:44:49.498 Attached to 0000:00:10.0 00:44:49.498 Reset controller to setup AER completions for this process 00:44:49.498 Registering asynchronous event callbacks... 00:44:49.498 Getting orig temperature thresholds of all controllers 00:44:49.498 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:44:49.498 Setting all controllers temperature threshold low to trigger AER 00:44:49.498 Waiting for all controllers temperature threshold to be set lower 00:44:49.498 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:44:49.498 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:44:49.498 Waiting for all controllers to trigger AER and reset threshold 00:44:49.498 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:44:49.498 Cleaning up... 
00:44:49.498 00:44:49.498 real 0m0.329s 00:44:49.498 user 0m0.097s 00:44:49.498 sys 0m0.174s 00:44:49.498 01:11:11 nvme.nvme_single_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:49.498 01:11:11 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:44:49.498 ************************************ 00:44:49.498 END TEST nvme_single_aen 00:44:49.498 ************************************ 00:44:49.498 01:11:11 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:44:49.498 01:11:11 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:49.498 01:11:11 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:49.498 01:11:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:49.498 ************************************ 00:44:49.498 START TEST nvme_doorbell_aers 00:44:49.498 ************************************ 00:44:49.498 01:11:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1123 -- # nvme_doorbell_aers 00:44:49.498 01:11:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:44:49.498 01:11:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:44:49.498 01:11:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:44:49.498 01:11:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:44:49.498 01:11:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1511 -- # bdfs=() 00:44:49.498 01:11:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1511 -- # local bdfs 00:44:49.498 01:11:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:44:49.498 01:11:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1512 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:44:49.498 01:11:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:44:49.498 01:11:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:44:49.498 01:11:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:00:10.0 00:44:49.498 01:11:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:44:49.498 01:11:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:44:49.498 [2024-07-25 01:11:12.121969] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 171539) is not found. Dropping the request. 00:44:59.473 Executing: test_write_invalid_db 00:44:59.473 Waiting for AER completion... 00:44:59.473 Failure: test_write_invalid_db 00:44:59.473 00:44:59.473 Executing: test_invalid_db_write_overflow_sq 00:44:59.473 Waiting for AER completion... 00:44:59.473 Failure: test_invalid_db_write_overflow_sq 00:44:59.473 00:44:59.473 Executing: test_invalid_db_write_overflow_cq 00:44:59.473 Waiting for AER completion... 
00:44:59.473 Failure: test_invalid_db_write_overflow_cq 00:44:59.473 00:44:59.473 00:44:59.473 real 0m10.116s 00:44:59.473 user 0m7.455s 00:44:59.473 sys 0m2.602s 00:44:59.473 01:11:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:59.473 01:11:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:44:59.473 ************************************ 00:44:59.473 END TEST nvme_doorbell_aers 00:44:59.473 ************************************ 00:44:59.473 01:11:21 nvme -- nvme/nvme.sh@97 -- # uname 00:44:59.473 01:11:21 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:44:59.473 01:11:21 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:44:59.473 01:11:21 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:44:59.473 01:11:21 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:59.473 01:11:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:59.473 ************************************ 00:44:59.473 START TEST nvme_multi_aen 00:44:59.473 ************************************ 00:44:59.473 01:11:21 nvme.nvme_multi_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:44:59.731 [2024-07-25 01:11:22.234010] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 171539) is not found. Dropping the request. 00:44:59.731 [2024-07-25 01:11:22.234139] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 171539) is not found. Dropping the request. 00:44:59.731 [2024-07-25 01:11:22.234182] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 171539) is not found. Dropping the request. 00:44:59.731 Child process pid: 171728 00:44:59.989 [Child] Asynchronous Event Request test 00:44:59.989 [Child] Attached to 0000:00:10.0 00:44:59.989 [Child] Registering asynchronous event callbacks... 00:44:59.989 [Child] Getting orig temperature thresholds of all controllers 00:44:59.989 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:44:59.989 [Child] Waiting for all controllers to trigger AER and reset threshold 00:44:59.989 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:44:59.989 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:44:59.989 [Child] Cleaning up... 00:44:59.989 Asynchronous Event Request test 00:44:59.989 Attached to 0000:00:10.0 00:44:59.989 Reset controller to setup AER completions for this process 00:44:59.989 Registering asynchronous event callbacks... 00:44:59.989 Getting orig temperature thresholds of all controllers 00:44:59.989 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:44:59.989 Setting all controllers temperature threshold low to trigger AER 00:44:59.989 Waiting for all controllers temperature threshold to be set lower 00:44:59.989 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:44:59.989 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:44:59.989 Waiting for all controllers to trigger AER and reset threshold 00:44:59.989 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:44:59.989 Cleaning up... 
00:44:59.989 00:44:59.989 real 0m0.662s 00:44:59.989 user 0m0.249s 00:44:59.989 sys 0m0.255s 00:44:59.989 01:11:22 nvme.nvme_multi_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:59.989 01:11:22 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:44:59.989 ************************************ 00:44:59.989 END TEST nvme_multi_aen 00:44:59.989 ************************************ 00:45:00.247 01:11:22 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:45:00.247 01:11:22 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:45:00.247 01:11:22 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:00.247 01:11:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:45:00.247 ************************************ 00:45:00.247 START TEST nvme_startup 00:45:00.247 ************************************ 00:45:00.247 01:11:22 nvme.nvme_startup -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:45:00.504 Initializing NVMe Controllers 00:45:00.504 Attached to 0000:00:10.0 00:45:00.504 Initialization complete. 00:45:00.504 Time used:235675.688 (us). 00:45:00.504 00:45:00.504 real 0m0.352s 00:45:00.504 user 0m0.136s 00:45:00.504 sys 0m0.138s 00:45:00.504 01:11:23 nvme.nvme_startup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:00.504 01:11:23 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:45:00.504 ************************************ 00:45:00.504 END TEST nvme_startup 00:45:00.504 ************************************ 00:45:00.504 01:11:23 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:45:00.504 01:11:23 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:00.504 01:11:23 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:00.504 01:11:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:45:00.504 ************************************ 00:45:00.504 START TEST nvme_multi_secondary 00:45:00.504 ************************************ 00:45:00.504 01:11:23 nvme.nvme_multi_secondary -- common/autotest_common.sh@1123 -- # nvme_multi_secondary 00:45:00.504 01:11:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=171794 00:45:00.504 01:11:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:45:00.504 01:11:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:45:00.504 01:11:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=171795 00:45:00.504 01:11:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:45:03.787 Initializing NVMe Controllers 00:45:03.787 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:45:03.787 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:45:03.787 Initialization complete. Launching workers. 
00:45:03.787 ======================================================== 00:45:03.787 Latency(us) 00:45:03.787 Device Information : IOPS MiB/s Average min max 00:45:03.787 PCIE (0000:00:10.0) NSID 1 from core 1: 34586.67 135.10 462.32 171.23 16766.12 00:45:03.787 ======================================================== 00:45:03.788 Total : 34586.67 135.10 462.32 171.23 16766.12 00:45:03.788 00:45:04.046 Initializing NVMe Controllers 00:45:04.046 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:45:04.046 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:45:04.046 Initialization complete. Launching workers. 00:45:04.046 ======================================================== 00:45:04.046 Latency(us) 00:45:04.046 Device Information : IOPS MiB/s Average min max 00:45:04.046 PCIE (0000:00:10.0) NSID 1 from core 2: 15140.77 59.14 1055.80 171.15 17489.51 00:45:04.046 ======================================================== 00:45:04.047 Total : 15140.77 59.14 1055.80 171.15 17489.51 00:45:04.047 00:45:04.047 01:11:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 171794 00:45:06.579 Initializing NVMe Controllers 00:45:06.579 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:45:06.579 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:45:06.579 Initialization complete. Launching workers. 00:45:06.579 ======================================================== 00:45:06.579 Latency(us) 00:45:06.579 Device Information : IOPS MiB/s Average min max 00:45:06.579 PCIE (0000:00:10.0) NSID 1 from core 0: 44147.20 172.45 362.12 159.80 3251.86 00:45:06.579 ======================================================== 00:45:06.579 Total : 44147.20 172.45 362.12 159.80 3251.86 00:45:06.579 00:45:06.579 01:11:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 171795 00:45:06.579 01:11:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=171867 00:45:06.579 01:11:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:45:06.579 01:11:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=171868 00:45:06.579 01:11:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:45:06.579 01:11:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:45:09.864 Initializing NVMe Controllers 00:45:09.864 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:45:09.864 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:45:09.865 Initialization complete. Launching workers. 00:45:09.865 ======================================================== 00:45:09.865 Latency(us) 00:45:09.865 Device Information : IOPS MiB/s Average min max 00:45:09.865 PCIE (0000:00:10.0) NSID 1 from core 1: 35641.91 139.23 448.61 169.95 1184.74 00:45:09.865 ======================================================== 00:45:09.865 Total : 35641.91 139.23 448.61 169.95 1184.74 00:45:09.865 00:45:09.865 Initializing NVMe Controllers 00:45:09.865 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:45:09.865 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:45:09.865 Initialization complete. Launching workers. 
00:45:09.865 ======================================================== 00:45:09.865 Latency(us) 00:45:09.865 Device Information : IOPS MiB/s Average min max 00:45:09.865 PCIE (0000:00:10.0) NSID 1 from core 0: 35743.46 139.62 447.36 164.34 5566.09 00:45:09.865 ======================================================== 00:45:09.865 Total : 35743.46 139.62 447.36 164.34 5566.09 00:45:09.865 00:45:11.765 Initializing NVMe Controllers 00:45:11.766 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:45:11.766 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:45:11.766 Initialization complete. Launching workers. 00:45:11.766 ======================================================== 00:45:11.766 Latency(us) 00:45:11.766 Device Information : IOPS MiB/s Average min max 00:45:11.766 PCIE (0000:00:10.0) NSID 1 from core 2: 18547.40 72.45 861.89 138.80 20602.44 00:45:11.766 ======================================================== 00:45:11.766 Total : 18547.40 72.45 861.89 138.80 20602.44 00:45:11.766 00:45:12.024 ************************************ 00:45:12.024 END TEST nvme_multi_secondary 00:45:12.024 ************************************ 00:45:12.024 01:11:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 171867 00:45:12.024 01:11:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 171868 00:45:12.024 00:45:12.024 real 0m11.316s 00:45:12.024 user 0m18.640s 00:45:12.024 sys 0m0.958s 00:45:12.024 01:11:34 nvme.nvme_multi_secondary -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:12.024 01:11:34 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:45:12.024 01:11:34 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:45:12.024 01:11:34 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:45:12.024 01:11:34 nvme -- common/autotest_common.sh@1087 -- # [[ -e /proc/171096 ]] 00:45:12.024 01:11:34 nvme -- common/autotest_common.sh@1088 -- # kill 171096 00:45:12.024 01:11:34 nvme -- common/autotest_common.sh@1089 -- # wait 171096 00:45:12.024 [2024-07-25 01:11:34.474064] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 171727) is not found. Dropping the request. 00:45:12.024 [2024-07-25 01:11:34.474364] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 171727) is not found. Dropping the request. 00:45:12.024 [2024-07-25 01:11:34.474449] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 171727) is not found. Dropping the request. 00:45:12.024 [2024-07-25 01:11:34.474643] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 171727) is not found. Dropping the request. 00:45:12.282 [2024-07-25 01:11:34.750207] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 
00:45:12.282 01:11:34 nvme -- common/autotest_common.sh@1091 -- # rm -f /var/run/spdk_stub0 00:45:12.282 01:11:34 nvme -- common/autotest_common.sh@1095 -- # echo 2 00:45:12.282 01:11:34 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:45:12.282 01:11:34 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:12.282 01:11:34 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:12.282 01:11:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:45:12.282 ************************************ 00:45:12.282 START TEST bdev_nvme_reset_stuck_adm_cmd 00:45:12.282 ************************************ 00:45:12.282 01:11:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:45:12.282 * Looking for test storage... 00:45:12.282 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:45:12.282 01:11:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:45:12.282 01:11:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:45:12.282 01:11:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:45:12.282 01:11:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:45:12.282 01:11:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:45:12.282 01:11:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:45:12.282 01:11:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1522 -- # bdfs=() 00:45:12.282 01:11:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1522 -- # local bdfs 00:45:12.282 01:11:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1523 -- # bdfs=($(get_nvme_bdfs)) 00:45:12.282 01:11:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1523 -- # get_nvme_bdfs 00:45:12.282 01:11:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1511 -- # bdfs=() 00:45:12.282 01:11:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1511 -- # local bdfs 00:45:12.282 01:11:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:45:12.282 01:11:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:45:12.282 01:11:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:45:12.540 01:11:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:45:12.540 01:11:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:00:10.0 00:45:12.541 01:11:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # echo 0000:00:10.0 00:45:12.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:45:12.541 01:11:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:45:12.541 01:11:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:45:12.541 01:11:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=172025 00:45:12.541 01:11:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:45:12.541 01:11:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:45:12.541 01:11:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 172025 00:45:12.541 01:11:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@829 -- # '[' -z 172025 ']' 00:45:12.541 01:11:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:12.541 01:11:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # local max_retries=100 00:45:12.541 01:11:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:12.541 01:11:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # xtrace_disable 00:45:12.541 01:11:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:45:12.541 [2024-07-25 01:11:35.018993] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:45:12.541 [2024-07-25 01:11:35.019336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172025 ] 00:45:12.799 [2024-07-25 01:11:35.217257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:13.057 [2024-07-25 01:11:35.467838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:45:13.057 [2024-07-25 01:11:35.467983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:45:13.057 [2024-07-25 01:11:35.468181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:13.058 [2024-07-25 01:11:35.468184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:45:13.625 01:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:45:13.625 01:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # return 0 00:45:13.625 01:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:45:13.625 01:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:13.625 01:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:45:13.884 nvme0n1 00:45:13.884 01:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:13.884 01:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:45:13.884 01:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_Ox4Jx.txt 00:45:13.884 01:11:36 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:45:13.884 01:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:13.884 01:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:45:13.884 true 00:45:13.884 01:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:13.885 01:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:45:13.885 01:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721869896 00:45:13.885 01:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=172048 00:45:13.885 01:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:45:13.885 01:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:45:13.885 01:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:45:15.789 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:45:15.789 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:15.789 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:45:15.789 [2024-07-25 01:11:38.360347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:45:15.789 [2024-07-25 01:11:38.360865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:45:15.789 [2024-07-25 01:11:38.361026] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:45:15.789 [2024-07-25 01:11:38.361143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:15.789 [2024-07-25 01:11:38.363088] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:45:15.789 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 172048 00:45:15.789 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:15.789 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 172048 00:45:15.789 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 172048 00:45:15.789 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:45:15.789 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:45:15.789 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:15.789 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:15.789 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:45:15.789 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:15.790 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:45:15.790 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_Ox4Jx.txt 00:45:16.049 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:45:16.049 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:45:16.049 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:45:16.049 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:45:16.049 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:45:16.049 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:45:16.049 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:45:16.049 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:45:16.049 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:45:16.049 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:45:16.049 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:45:16.049 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:45:16.049 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:45:16.049 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:45:16.049 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:45:16.049 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:45:16.049 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:45:16.049 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:45:16.049 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:45:16.049 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_Ox4Jx.txt 00:45:16.049 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 172025 00:45:16.049 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@948 -- # '[' -z 172025 ']' 00:45:16.049 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # kill -0 172025 00:45:16.049 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # uname 00:45:16.049 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:45:16.049 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 172025 00:45:16.049 killing process with pid 172025 00:45:16.049 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:45:16.049 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:45:16.049 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 172025' 00:45:16.049 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # kill 172025 00:45:16.049 01:11:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # wait 172025 00:45:18.637 01:11:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:45:18.637 01:11:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:45:18.637 00:45:18.637 real 0m6.193s 00:45:18.637 user 0m21.518s 00:45:18.637 sys 0m0.656s 00:45:18.637 ************************************ 00:45:18.637 END TEST bdev_nvme_reset_stuck_adm_cmd 00:45:18.637 ************************************ 00:45:18.637 01:11:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:18.637 01:11:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:45:18.637 01:11:41 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:45:18.637 01:11:41 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:45:18.637 01:11:41 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:18.637 01:11:41 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:18.637 01:11:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:45:18.637 ************************************ 00:45:18.637 START TEST nvme_fio 00:45:18.637 ************************************ 00:45:18.637 01:11:41 nvme.nvme_fio -- common/autotest_common.sh@1123 -- # nvme_fio_test 00:45:18.637 01:11:41 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:45:18.637 01:11:41 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:45:18.637 01:11:41 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:45:18.637 01:11:41 nvme.nvme_fio -- 
common/autotest_common.sh@1511 -- # bdfs=() 00:45:18.637 01:11:41 nvme.nvme_fio -- common/autotest_common.sh@1511 -- # local bdfs 00:45:18.637 01:11:41 nvme.nvme_fio -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:45:18.637 01:11:41 nvme.nvme_fio -- common/autotest_common.sh@1512 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:45:18.637 01:11:41 nvme.nvme_fio -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:45:18.637 01:11:41 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:45:18.637 01:11:41 nvme.nvme_fio -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:00:10.0 00:45:18.637 01:11:41 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0') 00:45:18.637 01:11:41 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:45:18.637 01:11:41 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:45:18.637 01:11:41 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:45:18.637 01:11:41 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:45:18.896 01:11:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:45:18.896 01:11:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:45:19.155 01:11:41 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:45:19.155 01:11:41 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:45:19.155 01:11:41 nvme.nvme_fio -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:45:19.155 01:11:41 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:45:19.155 01:11:41 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:19.155 01:11:41 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local sanitizers 00:45:19.155 01:11:41 nvme.nvme_fio -- common/autotest_common.sh@1338 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:45:19.155 01:11:41 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # shift 00:45:19.155 01:11:41 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local asan_lib= 00:45:19.155 01:11:41 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:45:19.155 01:11:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:45:19.155 01:11:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:45:19.155 01:11:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # grep libasan 00:45:19.155 01:11:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:45:19.155 01:11:41 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:45:19.155 01:11:41 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # break 00:45:19.155 01:11:41 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:45:19.155 01:11:41 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # 
/usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:45:19.414 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:45:19.414 fio-3.35 00:45:19.414 Starting 1 thread 00:45:22.699 00:45:22.699 test: (groupid=0, jobs=1): err= 0: pid=172195: Thu Jul 25 01:11:44 2024 00:45:22.699 read: IOPS=19.2k, BW=74.9MiB/s (78.6MB/s)(150MiB/2001msec) 00:45:22.699 slat (nsec): min=3901, max=49138, avg=5250.45, stdev=1801.72 00:45:22.699 clat (usec): min=223, max=12820, avg=3318.47, stdev=500.38 00:45:22.699 lat (usec): min=228, max=12865, avg=3323.72, stdev=500.96 00:45:22.699 clat percentiles (usec): 00:45:22.699 | 1.00th=[ 2868], 5.00th=[ 2933], 10.00th=[ 2999], 20.00th=[ 3032], 00:45:22.699 | 30.00th=[ 3097], 40.00th=[ 3130], 50.00th=[ 3163], 60.00th=[ 3195], 00:45:22.699 | 70.00th=[ 3261], 80.00th=[ 3752], 90.00th=[ 3949], 95.00th=[ 4047], 00:45:22.699 | 99.00th=[ 4359], 99.50th=[ 5669], 99.90th=[ 8356], 99.95th=[10028], 00:45:22.699 | 99.99th=[12518] 00:45:22.699 bw ( KiB/s): min=72168, max=81432, per=100.00%, avg=77173.33, stdev=4676.92, samples=3 00:45:22.699 iops : min=18042, max=20358, avg=19293.33, stdev=1169.23, samples=3 00:45:22.699 write: IOPS=19.2k, BW=74.8MiB/s (78.5MB/s)(150MiB/2001msec); 0 zone resets 00:45:22.699 slat (nsec): min=3984, max=61965, avg=5451.67, stdev=1849.68 00:45:22.699 clat (usec): min=242, max=12599, avg=3332.76, stdev=508.49 00:45:22.699 lat (usec): min=247, max=12615, avg=3338.21, stdev=509.04 00:45:22.699 clat percentiles (usec): 00:45:22.699 | 1.00th=[ 2868], 5.00th=[ 2966], 10.00th=[ 2999], 20.00th=[ 3064], 00:45:22.699 | 30.00th=[ 3097], 40.00th=[ 3130], 50.00th=[ 3163], 60.00th=[ 3228], 00:45:22.699 | 70.00th=[ 3294], 80.00th=[ 3785], 90.00th=[ 3982], 95.00th=[ 4047], 00:45:22.699 | 99.00th=[ 4293], 99.50th=[ 5997], 99.90th=[ 8455], 99.95th=[10421], 00:45:22.699 | 99.99th=[12256] 00:45:22.699 bw ( KiB/s): min=72096, max=81576, per=100.00%, avg=77290.67, stdev=4804.97, samples=3 00:45:22.699 iops : min=18024, max=20394, avg=19322.67, stdev=1201.24, samples=3 00:45:22.699 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:45:22.699 lat (msec) : 2=0.05%, 4=91.97%, 10=7.89%, 20=0.05% 00:45:22.699 cpu : usr=99.90%, sys=0.00%, ctx=18, majf=0, minf=36 00:45:22.699 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:45:22.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:22.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:45:22.700 issued rwts: total=38374,38336,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:22.700 latency : target=0, window=0, percentile=100.00%, depth=128 00:45:22.700 00:45:22.700 Run status group 0 (all jobs): 00:45:22.700 READ: bw=74.9MiB/s (78.6MB/s), 74.9MiB/s-74.9MiB/s (78.6MB/s-78.6MB/s), io=150MiB (157MB), run=2001-2001msec 00:45:22.700 WRITE: bw=74.8MiB/s (78.5MB/s), 74.8MiB/s-74.8MiB/s (78.5MB/s-78.5MB/s), io=150MiB (157MB), run=2001-2001msec 00:45:22.700 ----------------------------------------------------- 00:45:22.700 Suppressions used: 00:45:22.700 count bytes template 00:45:22.700 1 32 /usr/src/fio/parse.c 00:45:22.700 ----------------------------------------------------- 00:45:22.700 00:45:22.700 01:11:45 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:45:22.700 01:11:45 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:45:22.700 00:45:22.700 real 0m4.169s 00:45:22.700 user 0m3.364s 00:45:22.700 sys 0m0.476s 
00:45:22.700 01:11:45 nvme.nvme_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:22.700 ************************************ 00:45:22.700 END TEST nvme_fio 00:45:22.700 ************************************ 00:45:22.700 01:11:45 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:45:22.700 00:45:22.700 real 0m48.863s 00:45:22.700 user 2m9.618s 00:45:22.700 sys 0m10.176s 00:45:22.700 01:11:45 nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:22.700 01:11:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:45:22.700 ************************************ 00:45:22.700 END TEST nvme 00:45:22.700 ************************************ 00:45:22.700 01:11:45 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:45:22.700 01:11:45 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:45:22.700 01:11:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:22.700 01:11:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:22.700 01:11:45 -- common/autotest_common.sh@10 -- # set +x 00:45:22.700 ************************************ 00:45:22.700 START TEST nvme_scc 00:45:22.700 ************************************ 00:45:22.700 01:11:45 nvme_scc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:45:22.959 * Looking for test storage... 00:45:22.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:45:22.959 01:11:45 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:45:22.959 01:11:45 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:45:22.959 01:11:45 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:45:22.959 01:11:45 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:45:22.959 01:11:45 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:22.959 01:11:45 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:22.959 01:11:45 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:22.959 01:11:45 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:22.959 01:11:45 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:45:22.959 01:11:45 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:45:22.959 01:11:45 nvme_scc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:45:22.959 01:11:45 nvme_scc -- paths/export.sh@5 -- # export PATH 00:45:22.959 01:11:45 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:45:22.959 01:11:45 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:45:22.959 01:11:45 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:45:22.959 01:11:45 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:45:22.959 01:11:45 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:45:22.959 01:11:45 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:45:22.959 01:11:45 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:45:22.959 01:11:45 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:45:22.959 01:11:45 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:45:22.959 01:11:45 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:45:22.959 01:11:45 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:22.959 01:11:45 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:45:22.959 01:11:45 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:45:22.959 01:11:45 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:45:22.959 01:11:45 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:45:23.218 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:45:23.218 Waiting for block devices as requested 00:45:23.477 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:45:23.477 01:11:46 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:45:23.477 01:11:46 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:45:23.477 01:11:46 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:45:23.477 01:11:46 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:45:23.477 01:11:46 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:45:23.477 01:11:46 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:45:23.477 01:11:46 nvme_scc -- scripts/common.sh@15 -- # local i 00:45:23.477 01:11:46 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:45:23.477 01:11:46 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:45:23.477 01:11:46 nvme_scc -- scripts/common.sh@24 -- # return 0 00:45:23.477 01:11:46 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:45:23.477 01:11:46 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:45:23.477 01:11:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.478 01:11:46 
nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:45:23.478 01:11:46 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:45:23.478 01:11:46 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.478 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:45:23.479 01:11:46 
nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:45:23.479 
01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.479 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.480 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.480 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:45:23.480 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:45:23.480 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.480 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.480 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:45:23.480 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:45:23.480 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:45:23.480 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.480 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.480 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:45:23.480 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:45:23.480 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:45:23.480 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.480 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.480 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.480 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:45:23.480 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:45:23.480 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.480 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.480 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:45:23.480 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:45:23.480 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:45:23.480 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.480 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.480 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.740 01:11:46 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:45:23.740 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 
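[editor's note] The repetitive xtrace above (and continuing below for the namespace via id-ns) is the nvme_get helper in nvme/functions.sh walking nvme-cli output: it splits each "field : value" line on the colon and evals the pair into a bash associative array named after the device. A minimal sketch of that pattern, not the full helper, assuming nvme-cli is on PATH:

  # Parse "reg : val" pairs from id-ctrl into an associative array keyed by field name.
  declare -A nvme0=()
  while IFS=: read -r reg val; do
    [[ -n $val ]] || continue          # header and blank lines carry no value
    reg=${reg//[[:space:]]/}           # field name, e.g. "sn", "mdts", "oncs"
    val=${val# }                       # drop the space that follows the colon
    nvme0[$reg]=$val
  done < <(nvme id-ctrl /dev/nvme0)
  echo "oncs=${nvme0[oncs]}"           # 0x15d on this QEMU controller

The real script does the assignment through eval so the array name can be chosen at call time (nvme0, nvme0n1, ...); the sketch hard-codes it for brevity.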
00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:45:23.741 01:11:46 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme0n1[ncap]="0x140000"' 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:45:23.741 
01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.741 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.742 01:11:46 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:45:23.742 01:11:46 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:45:23.742 
01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.742 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:45:23.743 01:11:46 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@192 -- # local ctrl 
feature=scc 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@206 -- # echo nvme0 00:45:23.743 01:11:46 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:45:23.743 01:11:46 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:45:23.743 01:11:46 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:45:23.743 01:11:46 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:45:24.310 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:45:24.310 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:45:25.246 01:11:47 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:45:25.246 01:11:47 nvme_scc -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:45:25.246 01:11:47 nvme_scc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:25.246 01:11:47 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:45:25.246 ************************************ 00:45:25.246 START TEST nvme_simple_copy 00:45:25.246 ************************************ 00:45:25.246 01:11:47 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:45:25.505 Initializing NVMe Controllers 00:45:25.505 Attaching to 0000:00:10.0 00:45:25.505 Controller supports SCC. Attached to 0000:00:10.0 00:45:25.505 Namespace ID: 1 size: 5GB 00:45:25.505 Initialization complete. 
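[editor's note] The ctrl_has_scc trace just above reads the parsed oncs value back through a bash nameref and tests bit 8, which is the Copy (simple copy) support bit in the ONCS field. A condensed sketch of that gate, reusing the nvme0 array from the parse:

  oncs=${nvme0[oncs]}                  # 0x15d, populated by the id-ctrl parse
  if (( oncs & (1 << 8) )); then       # ONCS bit 8 set => Copy command supported
    echo "nvme0 supports the simple copy command"
  fi

With oncs=0x15d the bit is set, so nvme0 is selected and the simple_copy test below is allowed to run against 0000:00:10.0.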
00:45:25.505 00:45:25.505 Controller QEMU NVMe Ctrl (12340 ) 00:45:25.505 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:45:25.505 Namespace Block Size:4096 00:45:25.505 Writing LBAs 0 to 63 with Random Data 00:45:25.505 Copied LBAs from 0 - 63 to the Destination LBA 256 00:45:25.505 LBAs matching Written Data: 64 00:45:25.505 00:45:25.505 real 0m0.354s 00:45:25.505 user 0m0.137s 00:45:25.505 sys 0m0.119s 00:45:25.505 01:11:48 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:25.505 01:11:48 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:45:25.505 ************************************ 00:45:25.505 END TEST nvme_simple_copy 00:45:25.505 ************************************ 00:45:25.765 00:45:25.765 real 0m2.868s 00:45:25.765 user 0m0.835s 00:45:25.765 sys 0m1.893s 00:45:25.765 01:11:48 nvme_scc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:25.765 01:11:48 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:45:25.765 ************************************ 00:45:25.765 END TEST nvme_scc 00:45:25.765 ************************************ 00:45:25.765 01:11:48 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:45:25.765 01:11:48 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:45:25.765 01:11:48 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:45:25.765 01:11:48 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:45:25.765 01:11:48 -- spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:45:25.765 01:11:48 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:45:25.765 01:11:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:25.765 01:11:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:25.765 01:11:48 -- common/autotest_common.sh@10 -- # set +x 00:45:25.765 ************************************ 00:45:25.765 START TEST nvme_rpc 00:45:25.765 ************************************ 00:45:25.765 01:11:48 nvme_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:45:25.765 * Looking for test storage... 
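[editor's note] The simple_copy app above writes LBAs 0-63 with random data, issues a Copy command targeting destination LBA 256, then reads the destination back and counts matching blocks (64 of 64 here). Purely as an illustration of that check, here is a rough analogue with standard tools; it is not what the SPDK binary does internally (it drives the controller through SPDK's userspace driver), and at this point in the run the device is bound to uio_pci_generic, so the kernel block device is not actually available:

  bs=4096                                          # namespace block size reported above
  dd if=/dev/nvme0n1 of=/tmp/src bs=$bs skip=0   count=64 status=none
  dd if=/dev/nvme0n1 of=/tmp/dst bs=$bs skip=256 count=64 status=none
  cmp -s /tmp/src /tmp/dst && echo "all 64 copied LBAs match"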
00:45:25.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:45:25.765 01:11:48 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:25.765 01:11:48 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:45:25.765 01:11:48 nvme_rpc -- common/autotest_common.sh@1522 -- # bdfs=() 00:45:25.765 01:11:48 nvme_rpc -- common/autotest_common.sh@1522 -- # local bdfs 00:45:25.765 01:11:48 nvme_rpc -- common/autotest_common.sh@1523 -- # bdfs=($(get_nvme_bdfs)) 00:45:25.765 01:11:48 nvme_rpc -- common/autotest_common.sh@1523 -- # get_nvme_bdfs 00:45:25.765 01:11:48 nvme_rpc -- common/autotest_common.sh@1511 -- # bdfs=() 00:45:25.765 01:11:48 nvme_rpc -- common/autotest_common.sh@1511 -- # local bdfs 00:45:25.765 01:11:48 nvme_rpc -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:45:25.765 01:11:48 nvme_rpc -- common/autotest_common.sh@1512 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:45:25.765 01:11:48 nvme_rpc -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:45:25.765 01:11:48 nvme_rpc -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:45:25.765 01:11:48 nvme_rpc -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:00:10.0 00:45:25.765 01:11:48 nvme_rpc -- common/autotest_common.sh@1525 -- # echo 0000:00:10.0 00:45:25.765 01:11:48 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:45:25.765 01:11:48 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=172692 00:45:25.765 01:11:48 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:45:25.765 01:11:48 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:45:25.765 01:11:48 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 172692 00:45:25.765 01:11:48 nvme_rpc -- common/autotest_common.sh@829 -- # '[' -z 172692 ']' 00:45:26.024 01:11:48 nvme_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:26.025 01:11:48 nvme_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:45:26.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:26.025 01:11:48 nvme_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:26.025 01:11:48 nvme_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:45:26.025 01:11:48 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:45:26.025 [2024-07-25 01:11:48.502934] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
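[editor's note] The get_first_nvme_bdf trace above shows how the nvme_rpc test picks its target: gen_nvme.sh emits a bdev config as JSON, jq extracts each controller's PCI address, and the first entry becomes the bdf. A condensed sketch of the same flow:

  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} )) || { echo "no NVMe controllers found" >&2; exit 1; }
  bdf=${bdfs[0]}                       # 0000:00:10.0 in this run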
00:45:26.025 [2024-07-25 01:11:48.503145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172692 ] 00:45:26.284 [2024-07-25 01:11:48.692960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:45:26.284 [2024-07-25 01:11:48.897215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:45:26.284 [2024-07-25 01:11:48.897269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:27.247 01:11:49 nvme_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:45:27.247 01:11:49 nvme_rpc -- common/autotest_common.sh@862 -- # return 0 00:45:27.247 01:11:49 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:45:27.506 Nvme0n1 00:45:27.506 01:11:50 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:45:27.506 01:11:50 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:45:27.765 request: 00:45:27.765 { 00:45:27.765 "bdev_name": "Nvme0n1", 00:45:27.765 "filename": "non_existing_file", 00:45:27.765 "method": "bdev_nvme_apply_firmware", 00:45:27.765 "req_id": 1 00:45:27.765 } 00:45:27.765 Got JSON-RPC error response 00:45:27.765 response: 00:45:27.765 { 00:45:27.765 "code": -32603, 00:45:27.765 "message": "open file failed." 00:45:27.765 } 00:45:27.765 01:11:50 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:45:27.765 01:11:50 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:45:27.765 01:11:50 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:45:28.024 01:11:50 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:45:28.024 01:11:50 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 172692 00:45:28.024 01:11:50 nvme_rpc -- common/autotest_common.sh@948 -- # '[' -z 172692 ']' 00:45:28.024 01:11:50 nvme_rpc -- common/autotest_common.sh@952 -- # kill -0 172692 00:45:28.024 01:11:50 nvme_rpc -- common/autotest_common.sh@953 -- # uname 00:45:28.024 01:11:50 nvme_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:45:28.024 01:11:50 nvme_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 172692 00:45:28.024 01:11:50 nvme_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:45:28.024 01:11:50 nvme_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:45:28.024 01:11:50 nvme_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 172692' 00:45:28.024 killing process with pid 172692 00:45:28.024 01:11:50 nvme_rpc -- common/autotest_common.sh@967 -- # kill 172692 00:45:28.024 01:11:50 nvme_rpc -- common/autotest_common.sh@972 -- # wait 172692 00:45:30.555 00:45:30.555 real 0m4.541s 00:45:30.555 user 0m8.517s 00:45:30.555 sys 0m0.624s 00:45:30.555 01:11:52 nvme_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:30.555 01:11:52 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:45:30.555 ************************************ 00:45:30.555 END TEST nvme_rpc 00:45:30.555 ************************************ 00:45:30.555 01:11:52 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:45:30.555 01:11:52 -- common/autotest_common.sh@1099 -- # 
'[' 2 -le 1 ']' 00:45:30.555 01:11:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:30.555 01:11:52 -- common/autotest_common.sh@10 -- # set +x 00:45:30.555 ************************************ 00:45:30.555 START TEST nvme_rpc_timeouts 00:45:30.555 ************************************ 00:45:30.555 01:11:52 nvme_rpc_timeouts -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:45:30.555 * Looking for test storage... 00:45:30.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:45:30.555 01:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:30.555 01:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_172771 00:45:30.555 01:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_172771 00:45:30.555 01:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=172797 00:45:30.555 01:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:45:30.555 01:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:45:30.555 01:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 172797 00:45:30.555 01:11:52 nvme_rpc_timeouts -- common/autotest_common.sh@829 -- # '[' -z 172797 ']' 00:45:30.555 01:11:52 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:30.555 01:11:52 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # local max_retries=100 00:45:30.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:30.555 01:11:52 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:30.555 01:11:52 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # xtrace_disable 00:45:30.555 01:11:52 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:45:30.555 [2024-07-25 01:11:53.060284] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:45:30.555 [2024-07-25 01:11:53.060516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172797 ] 00:45:30.814 [2024-07-25 01:11:53.243260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:45:30.814 [2024-07-25 01:11:53.424226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:30.814 [2024-07-25 01:11:53.424230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:45:31.749 01:11:54 nvme_rpc_timeouts -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:45:31.749 01:11:54 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # return 0 00:45:31.749 01:11:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:45:31.749 Checking default timeout settings: 00:45:31.749 01:11:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:45:32.008 01:11:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:45:32.008 Making settings changes with rpc: 00:45:32.008 01:11:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:45:32.267 01:11:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:45:32.267 Check default vs. modified settings: 00:45:32.267 01:11:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:45:32.526 01:11:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:45:32.526 01:11:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:45:32.526 01:11:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:45:32.526 01:11:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_172771 00:45:32.526 01:11:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:45:32.526 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:45:32.526 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_172771 00:45:32.526 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:45:32.526 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:45:32.526 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:45:32.526 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:45:32.526 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:45:32.526 Setting action_on_timeout is changed as expected. 
00:45:32.526 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:45:32.526 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_172771 00:45:32.526 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:45:32.526 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:45:32.526 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:45:32.526 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_172771 00:45:32.526 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:45:32.526 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:45:32.526 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:45:32.526 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:45:32.526 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:45:32.526 Setting timeout_us is changed as expected. 00:45:32.526 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:45:32.526 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_172771 00:45:32.526 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:45:32.526 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:45:32.526 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:45:32.526 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_172771 00:45:32.526 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:45:32.526 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:45:32.526 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:45:32.526 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:45:32.526 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:45:32.526 Setting timeout_admin_us is changed as expected. 
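The default-vs-modified comparison above is just two save_config dumps checked one option at a time. A condensed sketch of the same check (temp file names are illustrative, RPC flags copied from the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" save_config > /tmp/settings_default
    "$rpc" bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    "$rpc" save_config > /tmp/settings_modified
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ "$before" != "$after" ]] && echo "Setting $setting is changed as expected."
    done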
00:45:32.527 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:45:32.527 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_172771 /tmp/settings_modified_172771 00:45:32.527 01:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 172797 00:45:32.527 01:11:55 nvme_rpc_timeouts -- common/autotest_common.sh@948 -- # '[' -z 172797 ']' 00:45:32.527 01:11:55 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # kill -0 172797 00:45:32.527 01:11:55 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # uname 00:45:32.527 01:11:55 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:45:32.527 01:11:55 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 172797 00:45:32.527 01:11:55 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:45:32.527 01:11:55 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:45:32.527 01:11:55 nvme_rpc_timeouts -- common/autotest_common.sh@966 -- # echo 'killing process with pid 172797' 00:45:32.527 killing process with pid 172797 00:45:32.527 01:11:55 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # kill 172797 00:45:32.527 01:11:55 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # wait 172797 00:45:35.062 RPC TIMEOUT SETTING TEST PASSED. 00:45:35.062 01:11:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:45:35.062 00:45:35.062 real 0m4.661s 00:45:35.062 user 0m8.740s 00:45:35.062 sys 0m0.753s 00:45:35.062 01:11:57 nvme_rpc_timeouts -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:35.062 01:11:57 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:45:35.062 ************************************ 00:45:35.062 END TEST nvme_rpc_timeouts 00:45:35.062 ************************************ 00:45:35.062 01:11:57 -- spdk/autotest.sh@243 -- # uname -s 00:45:35.062 01:11:57 -- spdk/autotest.sh@243 -- # '[' Linux = Linux ']' 00:45:35.062 01:11:57 -- spdk/autotest.sh@244 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:45:35.062 01:11:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:35.062 01:11:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:35.062 01:11:57 -- common/autotest_common.sh@10 -- # set +x 00:45:35.062 ************************************ 00:45:35.062 START TEST sw_hotplug 00:45:35.062 ************************************ 00:45:35.062 01:11:57 sw_hotplug -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:45:35.062 * Looking for test storage... 
00:45:35.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:45:35.062 01:11:57 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:45:35.630 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:45:35.630 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:45:36.577 01:11:58 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:45:36.577 01:11:58 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:45:36.577 01:11:58 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:45:36.577 01:11:58 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@230 -- # local class 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@15 -- # local i 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@325 
-- # (( 1 )) 00:45:36.577 01:11:59 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 00:45:36.577 01:11:59 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=1 00:45:36.577 01:11:59 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:45:36.577 01:11:59 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:45:36.850 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:45:36.850 Waiting for block devices as requested 00:45:37.108 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:45:37.108 01:11:59 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED=0000:00:10.0 00:45:37.108 01:11:59 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:45:37.366 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:45:37.625 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:45:37.625 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:45:38.560 01:12:01 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:45:38.560 01:12:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:38.819 01:12:01 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:45:38.819 01:12:01 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:45:38.819 01:12:01 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=173379 00:45:38.819 01:12:01 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 3 -r 3 -l warning 00:45:38.819 01:12:01 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:45:38.819 01:12:01 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:45:38.819 01:12:01 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:45:38.819 01:12:01 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:45:38.819 01:12:01 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:45:38.819 01:12:01 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:45:38.819 01:12:01 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:45:38.819 01:12:01 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 false 00:45:38.819 01:12:01 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:45:38.819 01:12:01 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:45:38.819 01:12:01 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:45:38.819 01:12:01 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:45:38.819 01:12:01 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:45:39.077 Initializing NVMe Controllers 00:45:39.077 Attaching to 0000:00:10.0 00:45:39.077 Attached to 0000:00:10.0 00:45:39.077 Initialization complete. Starting I/O... 
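The nvme_in_userspace walk traced above (scripts/common.sh) is, at its core, a single lspci pipeline: list PCI functions with class 01, subclass 08, progif 02 and keep their domain:bus:device.function addresses. A condensed sketch:

    # NVMe = class 01 (mass storage), subclass 08 (NVM), progif 02 (NVMe)
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
    # on this VM the only hit is 0000:00:10.0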
00:45:39.077 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:45:39.077 00:45:40.011 QEMU NVMe Ctrl (12340 ): 2152 I/Os completed (+2152) 00:45:40.011 00:45:40.945 QEMU NVMe Ctrl (12340 ): 5097 I/Os completed (+2945) 00:45:40.945 00:45:41.882 QEMU NVMe Ctrl (12340 ): 8261 I/Os completed (+3164) 00:45:41.882 00:45:43.261 QEMU NVMe Ctrl (12340 ): 11433 I/Os completed (+3172) 00:45:43.261 00:45:44.198 QEMU NVMe Ctrl (12340 ): 14602 I/Os completed (+3169) 00:45:44.198 00:45:44.764 01:12:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:45:44.764 01:12:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:45:44.764 01:12:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:45:44.764 [2024-07-25 01:12:07.289678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:45:44.764 Controller removed: QEMU NVMe Ctrl (12340 ) 00:45:44.764 [2024-07-25 01:12:07.290974] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:44.764 [2024-07-25 01:12:07.291045] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:44.764 [2024-07-25 01:12:07.291066] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:44.764 [2024-07-25 01:12:07.291085] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:44.764 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:45:44.764 [2024-07-25 01:12:07.297063] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:44.764 [2024-07-25 01:12:07.297103] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:44.764 [2024-07-25 01:12:07.297131] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:44.764 [2024-07-25 01:12:07.297151] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:44.764 01:12:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:45:44.764 01:12:07 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:45:44.764 01:12:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:45:44.764 01:12:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:45:44.764 01:12:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:45:45.023 01:12:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:45:45.023 00:45:45.023 01:12:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:45:45.023 01:12:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:45:45.023 Attaching to 0000:00:10.0 00:45:45.023 Attached to 0000:00:10.0 00:45:45.967 QEMU NVMe Ctrl (12340 ): 2976 I/Os completed (+2976) 00:45:45.967 00:45:46.903 QEMU NVMe Ctrl (12340 ): 6124 I/Os completed (+3148) 00:45:46.903 00:45:48.280 QEMU NVMe Ctrl (12340 ): 9266 I/Os completed (+3142) 00:45:48.280 00:45:48.847 QEMU NVMe Ctrl (12340 ): 12201 I/Os completed (+2935) 00:45:48.847 00:45:50.221 QEMU NVMe Ctrl (12340 ): 15321 I/Os completed (+3120) 00:45:50.221 00:45:51.155 QEMU NVMe Ctrl (12340 ): 18478 I/Os completed (+3157) 00:45:51.155 00:45:51.155 01:12:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:45:51.155 01:12:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:45:51.155 01:12:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:45:51.155 01:12:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:45:51.155 [2024-07-25 
01:12:13.548465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:45:51.155 Controller removed: QEMU NVMe Ctrl (12340 ) 00:45:51.155 [2024-07-25 01:12:13.549746] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:51.155 [2024-07-25 01:12:13.549789] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:51.155 [2024-07-25 01:12:13.549808] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:51.155 [2024-07-25 01:12:13.549826] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:51.155 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:45:51.155 [2024-07-25 01:12:13.555685] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:51.155 [2024-07-25 01:12:13.555854] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:51.155 [2024-07-25 01:12:13.555904] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:51.155 [2024-07-25 01:12:13.555993] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:51.155 01:12:13 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:45:51.155 01:12:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:45:51.155 01:12:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:45:51.155 01:12:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:45:51.155 01:12:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:45:51.155 01:12:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:45:51.155 01:12:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:45:51.155 01:12:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:45:51.155 Attaching to 0000:00:10.0 00:45:51.155 Attached to 0000:00:10.0 00:45:52.089 QEMU NVMe Ctrl (12340 ): 2154 I/Os completed (+2154) 00:45:52.089 00:45:53.024 QEMU NVMe Ctrl (12340 ): 5310 I/Os completed (+3156) 00:45:53.024 00:45:53.959 QEMU NVMe Ctrl (12340 ): 8459 I/Os completed (+3149) 00:45:53.959 00:45:54.947 QEMU NVMe Ctrl (12340 ): 11548 I/Os completed (+3089) 00:45:54.947 00:45:55.883 QEMU NVMe Ctrl (12340 ): 14660 I/Os completed (+3112) 00:45:55.883 00:45:57.262 QEMU NVMe Ctrl (12340 ): 17789 I/Os completed (+3129) 00:45:57.262 00:45:57.262 01:12:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:45:57.262 01:12:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:45:57.262 01:12:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:45:57.262 01:12:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:45:57.262 [2024-07-25 01:12:19.807410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
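Each removal event above appears to be driven from sysfs; the xtrace does not show the redirect targets, so the sketch below is an assumption based on the standard Linux PCI interface (only the rescan path appears verbatim later, in the script's exit trap):

    bdf=0000:00:10.0
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"    # surprise-remove the controller
    sleep 6                                        # give the application time to notice
    echo 1 > /sys/bus/pci/rescan                   # rediscover it on the bus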
00:45:57.262 Controller removed: QEMU NVMe Ctrl (12340 ) 00:45:57.262 [2024-07-25 01:12:19.808741] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:57.262 [2024-07-25 01:12:19.808897] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:57.262 [2024-07-25 01:12:19.808949] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:57.262 [2024-07-25 01:12:19.809074] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:57.262 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:45:57.262 [2024-07-25 01:12:19.814715] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:57.262 [2024-07-25 01:12:19.814845] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:57.262 [2024-07-25 01:12:19.814891] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:57.262 [2024-07-25 01:12:19.814996] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:57.262 01:12:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:45:57.262 01:12:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:45:57.262 01:12:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:45:57.262 01:12:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:45:57.262 01:12:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:45:57.520 01:12:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:45:57.520 01:12:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:45:57.520 01:12:20 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:45:57.520 Attaching to 0000:00:10.0 00:45:57.520 Attached to 0000:00:10.0 00:45:57.520 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:45:57.520 [2024-07-25 01:12:20.045658] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:46:04.089 01:12:26 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:46:04.089 01:12:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:46:04.089 01:12:26 sw_hotplug -- common/autotest_common.sh@715 -- # time=24.76 00:46:04.089 01:12:26 sw_hotplug -- common/autotest_common.sh@716 -- # echo 24.76 00:46:04.089 01:12:26 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:46:04.089 01:12:26 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=24.76 00:46:04.089 01:12:26 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 24.76 1 00:46:04.089 remove_attach_helper took 24.76s to complete (handling 1 nvme drive(s)) 01:12:26 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:46:10.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
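The 24.76s figure above comes from timing the whole remove/attach loop with bash's built-in time and TIMEFORMAT. The real helper in autotest_common.sh is not shown in this log, so this is only a sketch of the idea, with a hypothetical run_timed wrapper:

    run_timed() {
        local TIMEFORMAT=%2R                        # print just elapsed seconds, 2 decimals
        local elapsed
        exec 3>&1                                   # keep the command's stdout visible
        elapsed=$( { time "$@" 1>&3; } 2>&1 )       # capture only what `time` reports
        exec 3>&-                                   # (stderr from the command would pollute the capture)
        printf '%s took %ss to complete\n' "$1" "$elapsed"
    }
    run_timed sleep 1                               # -> "sleep took 1.00s to complete"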
00:46:10.679 01:12:32 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 173379 00:46:10.679 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (173379) - No such process 00:46:10.679 01:12:32 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 173379 00:46:10.679 01:12:32 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:46:10.680 01:12:32 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:46:10.680 01:12:32 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:46:10.680 01:12:32 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=173724 00:46:10.680 01:12:32 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:46:10.680 01:12:32 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 173724 00:46:10.680 01:12:32 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:46:10.680 01:12:32 sw_hotplug -- common/autotest_common.sh@829 -- # '[' -z 173724 ']' 00:46:10.680 01:12:32 sw_hotplug -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:10.680 01:12:32 sw_hotplug -- common/autotest_common.sh@834 -- # local max_retries=100 00:46:10.680 01:12:32 sw_hotplug -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:10.680 01:12:32 sw_hotplug -- common/autotest_common.sh@838 -- # xtrace_disable 00:46:10.680 01:12:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:10.680 [2024-07-25 01:12:32.142964] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:46:10.680 [2024-07-25 01:12:32.143499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173724 ] 00:46:10.680 [2024-07-25 01:12:32.322569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:10.680 [2024-07-25 01:12:32.518803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:10.680 01:12:33 sw_hotplug -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:46:10.680 01:12:33 sw_hotplug -- common/autotest_common.sh@862 -- # return 0 00:46:10.680 01:12:33 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:46:10.680 01:12:33 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:10.680 01:12:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:10.680 01:12:33 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:10.680 01:12:33 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:46:10.680 01:12:33 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:46:10.680 01:12:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:46:10.680 01:12:33 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:46:10.680 01:12:33 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:46:10.680 01:12:33 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:46:10.680 01:12:33 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:46:10.680 01:12:33 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:46:10.680 01:12:33 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:46:10.680 01:12:33 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:46:10.680 01:12:33 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:46:10.680 01:12:33 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:46:10.680 01:12:33 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:46:17.245 01:12:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:46:17.245 01:12:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:46:17.245 01:12:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:46:17.245 01:12:39 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:46:17.245 01:12:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:17.245 01:12:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:17.245 01:12:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:17.245 01:12:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:17.245 01:12:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:17.245 01:12:39 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:17.245 01:12:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:17.245 01:12:39 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:17.245 [2024-07-25 01:12:39.398114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:46:17.245 [2024-07-25 01:12:39.400132] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:17.245 [2024-07-25 01:12:39.400299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:46:17.245 [2024-07-25 01:12:39.400395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:17.245 [2024-07-25 01:12:39.400463] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:17.245 [2024-07-25 01:12:39.400550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:46:17.245 [2024-07-25 01:12:39.400640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:17.245 [2024-07-25 01:12:39.400788] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:17.245 [2024-07-25 01:12:39.400889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:46:17.245 [2024-07-25 01:12:39.400988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:17.245 [2024-07-25 01:12:39.401087] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:17.245 [2024-07-25 01:12:39.401183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:46:17.245 [2024-07-25 01:12:39.401272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:17.245 01:12:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:17.246 01:12:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:17.505 01:12:39 sw_hotplug -- nvme/sw_hotplug.sh@51 
-- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:17.505 01:12:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:17.505 01:12:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:17.505 01:12:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:17.505 01:12:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:17.505 01:12:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:17.505 01:12:39 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:17.505 01:12:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:17.505 01:12:39 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:17.505 01:12:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:46:17.505 01:12:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:46:17.505 01:12:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:46:17.505 01:12:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:46:17.505 01:12:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:46:17.505 01:12:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:46:17.763 01:12:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:46:17.763 01:12:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:46:24.326 01:12:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:46:24.326 01:12:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:46:24.326 01:12:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:46:24.326 01:12:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:24.326 01:12:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:24.326 01:12:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:24.326 01:12:46 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:24.326 01:12:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:24.326 01:12:46 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:24.326 01:12:46 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:46:24.326 01:12:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:46:24.326 01:12:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:46:24.326 01:12:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:46:24.326 [2024-07-25 01:12:46.298213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
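After each removal the script polls the target until no bdev reports the removed PCI address any more; that is the "Still waiting for ... to be gone" line above. A condensed sketch of the same loop, reusing the jq filter from the trace (works here because only NVMe bdevs are present):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdf=0000:00:10.0
    while "$rpc" bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u \
            | grep -q "$bdf"; do
        printf 'Still waiting for %s to be gone\n' "$bdf"
        sleep 0.5
    done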
00:46:24.326 [2024-07-25 01:12:46.300285] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:24.326 [2024-07-25 01:12:46.300442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:46:24.326 [2024-07-25 01:12:46.300533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:24.326 01:12:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:46:24.326 [2024-07-25 01:12:46.300591] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:24.326 [2024-07-25 01:12:46.300611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:46:24.326 [2024-07-25 01:12:46.300649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:24.326 [2024-07-25 01:12:46.300671] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:24.326 [2024-07-25 01:12:46.300717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:46:24.326 [2024-07-25 01:12:46.300737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:24.326 [2024-07-25 01:12:46.300765] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:24.326 [2024-07-25 01:12:46.300784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:46:24.326 [2024-07-25 01:12:46.300818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:24.326 01:12:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:24.326 01:12:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:24.326 01:12:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:24.326 01:12:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:24.326 01:12:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:24.326 01:12:46 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:24.326 01:12:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:24.326 01:12:46 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:24.326 01:12:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:46:24.326 01:12:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:46:24.326 01:12:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:46:24.326 01:12:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:46:24.326 01:12:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:46:24.326 01:12:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:46:24.326 01:12:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:46:24.326 01:12:46 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:46:30.919 01:12:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:46:30.919 01:12:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:46:30.919 01:12:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:46:30.919 
01:12:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:30.919 01:12:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:30.919 01:12:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:30.919 01:12:52 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:30.919 01:12:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:30.919 01:12:52 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:30.919 01:12:52 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:46:30.919 01:12:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:46:30.919 01:12:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:46:30.919 01:12:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:46:30.919 01:12:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:46:30.919 01:12:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:30.919 01:12:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:30.919 01:12:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:30.919 01:12:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:30.919 01:12:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:30.919 01:12:52 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:30.919 01:12:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:30.919 [2024-07-25 01:12:52.698345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:46:30.919 [2024-07-25 01:12:52.700152] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:30.919 [2024-07-25 01:12:52.700295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:46:30.919 [2024-07-25 01:12:52.700390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:30.919 [2024-07-25 01:12:52.700444] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:30.919 [2024-07-25 01:12:52.700533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:46:30.919 [2024-07-25 01:12:52.700618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:30.919 [2024-07-25 01:12:52.700734] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:30.919 [2024-07-25 01:12:52.700830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:46:30.919 [2024-07-25 01:12:52.700933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:30.919 [2024-07-25 01:12:52.701021] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:30.919 [2024-07-25 01:12:52.701125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:46:30.919 [2024-07-25 01:12:52.701212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 
cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:30.919 01:12:52 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:30.919 01:12:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:46:30.919 01:12:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:46:30.919 01:12:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:46:30.919 01:12:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:46:30.919 01:12:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:46:30.919 01:12:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:46:30.919 01:12:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:46:30.919 01:12:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:46:37.497 01:12:58 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:46:37.497 01:12:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:46:37.497 01:12:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:46:37.497 01:12:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:37.497 01:12:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:37.497 01:12:58 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:37.497 01:12:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:37.497 01:12:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:37.497 01:12:58 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:37.497 01:12:58 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:46:37.497 01:12:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:46:37.497 01:12:58 sw_hotplug -- common/autotest_common.sh@715 -- # time=25.69 00:46:37.497 01:12:58 sw_hotplug -- common/autotest_common.sh@716 -- # echo 25.69 00:46:37.497 01:12:58 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:46:37.497 01:12:58 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=25.69 00:46:37.497 01:12:58 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 25.69 1 00:46:37.497 remove_attach_helper took 25.69s to complete (handling 1 nvme drive(s)) 01:12:58 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:46:37.497 01:12:58 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:37.497 01:12:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:37.497 01:12:59 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:37.497 01:12:59 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:46:37.497 01:12:59 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:37.497 01:12:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:37.497 01:12:59 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:37.497 01:12:59 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:46:37.497 01:12:59 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:46:37.497 01:12:59 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:46:37.497 01:12:59 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:46:37.497 01:12:59 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:46:37.497 01:12:59 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:46:37.497 01:12:59 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 
00:46:37.497 01:12:59 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:46:37.497 01:12:59 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:46:37.497 01:12:59 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:46:37.497 01:12:59 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:46:37.497 01:12:59 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:46:37.497 01:12:59 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:46:42.766 01:13:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:46:42.766 01:13:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:46:42.766 01:13:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:46:42.766 01:13:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:46:42.766 01:13:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:42.766 01:13:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:42.766 01:13:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:42.766 01:13:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:42.766 01:13:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:42.766 01:13:05 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:42.766 01:13:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:42.766 [2024-07-25 01:13:05.111706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:46:42.766 [2024-07-25 01:13:05.113441] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:42.766 [2024-07-25 01:13:05.113492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:46:42.766 [2024-07-25 01:13:05.113521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:42.766 [2024-07-25 01:13:05.113568] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:42.766 [2024-07-25 01:13:05.113586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:46:42.766 [2024-07-25 01:13:05.113613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:42.766 [2024-07-25 01:13:05.113632] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:42.766 [2024-07-25 01:13:05.113660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:46:42.766 [2024-07-25 01:13:05.113677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:42.766 [2024-07-25 01:13:05.113709] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:42.766 [2024-07-25 01:13:05.113727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:46:42.766 [2024-07-25 01:13:05.113750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:42.766 01:13:05 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:46:42.766 01:13:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:42.766 01:13:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:43.025 01:13:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:43.025 01:13:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:43.025 01:13:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:43.025 01:13:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:43.025 01:13:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:43.025 01:13:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:43.025 01:13:05 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:43.025 01:13:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:43.025 01:13:05 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:43.284 01:13:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:46:43.284 01:13:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:46:43.284 01:13:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:46:43.284 01:13:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:46:43.284 01:13:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:46:43.284 01:13:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:46:43.284 01:13:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:46:43.284 01:13:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:46:49.839 01:13:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:46:49.839 01:13:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:46:49.839 01:13:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:46:49.839 01:13:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:49.839 01:13:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:49.839 01:13:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:49.839 01:13:11 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:49.839 01:13:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:49.839 01:13:11 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:49.839 01:13:11 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:46:49.839 01:13:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:46:49.839 01:13:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:46:49.839 01:13:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:46:49.839 01:13:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:46:49.839 01:13:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:49.839 01:13:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:49.839 [2024-07-25 01:13:12.011844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:46:49.839 [2024-07-25 01:13:12.014194] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:49.839 [2024-07-25 01:13:12.014371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:46:49.839 01:13:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:49.839 [2024-07-25 01:13:12.014506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:49.839 01:13:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:49.839 [2024-07-25 01:13:12.014612] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:49.839 [2024-07-25 01:13:12.014709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:46:49.839 [2024-07-25 01:13:12.014806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:49.839 [2024-07-25 01:13:12.014908] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:49.839 01:13:12 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:49.839 [2024-07-25 01:13:12.014996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:46:49.839 [2024-07-25 01:13:12.015114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:49.839 01:13:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:49.839 [2024-07-25 01:13:12.015212] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:49.839 [2024-07-25 01:13:12.015303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:46:49.840 [2024-07-25 01:13:12.015404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:49.840 01:13:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:49.840 01:13:12 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:49.840 01:13:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:46:49.840 01:13:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:46:49.840 01:13:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:46:49.840 01:13:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:46:49.840 01:13:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:46:49.840 01:13:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:46:49.840 01:13:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:46:49.840 01:13:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:46:56.392 01:13:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:46:56.392 01:13:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:46:56.392 01:13:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:46:56.392 01:13:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:56.392 01:13:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:56.392 01:13:18 sw_hotplug 
-- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:56.392 01:13:18 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:56.392 01:13:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:56.392 01:13:18 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:56.392 01:13:18 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:46:56.392 01:13:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:46:56.392 01:13:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:46:56.392 01:13:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:46:56.392 [2024-07-25 01:13:18.311950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:46:56.392 [2024-07-25 01:13:18.314010] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:56.392 [2024-07-25 01:13:18.314190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:46:56.392 [2024-07-25 01:13:18.314325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:56.392 [2024-07-25 01:13:18.314476] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:56.392 [2024-07-25 01:13:18.314579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:46:56.392 [2024-07-25 01:13:18.314664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:56.392 [2024-07-25 01:13:18.314720] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:56.392 [2024-07-25 01:13:18.314806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:46:56.392 [2024-07-25 01:13:18.314898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:56.392 [2024-07-25 01:13:18.314950] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:56.392 [2024-07-25 01:13:18.315098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:46:56.392 [2024-07-25 01:13:18.315149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:56.392 01:13:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:46:56.392 01:13:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:56.392 01:13:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:56.392 01:13:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:56.392 01:13:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:56.393 01:13:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:56.393 01:13:18 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:56.393 01:13:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:56.393 01:13:18 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:56.393 01:13:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:46:56.393 01:13:18 
sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:46:56.393 01:13:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:46:56.393 01:13:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:46:56.393 01:13:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:46:56.393 01:13:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:46:56.393 01:13:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:46:56.393 01:13:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:47:02.955 01:13:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:47:02.955 01:13:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:47:02.955 01:13:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:47:02.955 01:13:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:02.955 01:13:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:02.955 01:13:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:02.955 01:13:24 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:02.955 01:13:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:02.955 01:13:24 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:02.955 01:13:24 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:47:02.955 01:13:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:47:02.955 01:13:24 sw_hotplug -- common/autotest_common.sh@715 -- # time=25.63 00:47:02.955 01:13:24 sw_hotplug -- common/autotest_common.sh@716 -- # echo 25.63 00:47:02.955 01:13:24 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:47:02.955 01:13:24 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=25.63 00:47:02.955 01:13:24 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 25.63 1 00:47:02.955 remove_attach_helper took 25.63s to complete (handling 1 nvme drive(s)) 01:13:24 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:47:02.955 01:13:24 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 173724 00:47:02.955 01:13:24 sw_hotplug -- common/autotest_common.sh@948 -- # '[' -z 173724 ']' 00:47:02.955 01:13:24 sw_hotplug -- common/autotest_common.sh@952 -- # kill -0 173724 00:47:02.955 01:13:24 sw_hotplug -- common/autotest_common.sh@953 -- # uname 00:47:02.955 01:13:24 sw_hotplug -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:02.955 01:13:24 sw_hotplug -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 173724 00:47:02.955 01:13:24 sw_hotplug -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:47:02.955 01:13:24 sw_hotplug -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:47:02.955 killing process with pid 173724 00:47:02.955 01:13:24 sw_hotplug -- common/autotest_common.sh@966 -- # echo 'killing process with pid 173724' 00:47:02.955 01:13:24 sw_hotplug -- common/autotest_common.sh@967 -- # kill 173724 00:47:02.955 01:13:24 sw_hotplug -- common/autotest_common.sh@972 -- # wait 173724 00:47:04.856 01:13:27 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:47:05.150 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:47:05.150 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:47:06.106 ************************************ 00:47:06.106 END TEST sw_hotplug 00:47:06.106 
************************************ 00:47:06.106 00:47:06.106 real 1m30.871s 00:47:06.106 user 1m4.227s 00:47:06.106 sys 0m16.996s 00:47:06.106 01:13:28 sw_hotplug -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:06.106 01:13:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:06.106 01:13:28 -- spdk/autotest.sh@247 -- # [[ 0 -eq 1 ]] 00:47:06.106 01:13:28 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:47:06.106 01:13:28 -- spdk/autotest.sh@260 -- # timing_exit lib 00:47:06.106 01:13:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:47:06.106 01:13:28 -- common/autotest_common.sh@10 -- # set +x 00:47:06.106 01:13:28 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:47:06.106 01:13:28 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:47:06.106 01:13:28 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:47:06.106 01:13:28 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:47:06.106 01:13:28 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:47:06.106 01:13:28 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:47:06.106 01:13:28 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:47:06.106 01:13:28 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:47:06.106 01:13:28 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:47:06.106 01:13:28 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:47:06.106 01:13:28 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:47:06.106 01:13:28 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:47:06.107 01:13:28 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:47:06.107 01:13:28 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:47:06.107 01:13:28 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:47:06.107 01:13:28 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:47:06.107 01:13:28 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:47:06.107 01:13:28 -- spdk/autotest.sh@375 -- # [[ 1 -eq 1 ]] 00:47:06.107 01:13:28 -- spdk/autotest.sh@376 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:47:06.107 01:13:28 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:47:06.107 01:13:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:06.107 01:13:28 -- common/autotest_common.sh@10 -- # set +x 00:47:06.107 ************************************ 00:47:06.107 START TEST blockdev_raid5f 00:47:06.107 ************************************ 00:47:06.107 01:13:28 blockdev_raid5f -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:47:06.107 * Looking for test storage... 
00:47:06.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:47:06.107 01:13:28 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:47:06.107 01:13:28 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:47:06.107 01:13:28 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:47:06.107 01:13:28 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:47:06.107 01:13:28 blockdev_raid5f -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:47:06.107 01:13:28 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:47:06.107 01:13:28 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:47:06.107 01:13:28 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:47:06.107 01:13:28 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:47:06.107 01:13:28 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:47:06.107 01:13:28 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:47:06.107 01:13:28 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:47:06.107 01:13:28 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:47:06.107 01:13:28 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:47:06.107 01:13:28 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:47:06.107 01:13:28 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:47:06.107 01:13:28 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:47:06.107 01:13:28 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:47:06.107 01:13:28 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:47:06.107 01:13:28 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:47:06.107 01:13:28 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:47:06.107 01:13:28 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:47:06.107 01:13:28 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:47:06.107 01:13:28 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:47:06.107 01:13:28 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=174597 00:47:06.107 01:13:28 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:47:06.107 01:13:28 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:47:06.107 01:13:28 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 174597 00:47:06.107 01:13:28 blockdev_raid5f -- common/autotest_common.sh@829 -- # '[' -z 174597 ']' 00:47:06.107 01:13:28 blockdev_raid5f -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:06.107 01:13:28 blockdev_raid5f -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:06.107 01:13:28 blockdev_raid5f -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:06.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:06.107 01:13:28 blockdev_raid5f -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:06.107 01:13:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:06.366 [2024-07-25 01:13:28.791060] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:47:06.366 [2024-07-25 01:13:28.791666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174597 ] 00:47:06.366 [2024-07-25 01:13:28.979748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:06.625 [2024-07-25 01:13:29.264829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:07.561 01:13:30 blockdev_raid5f -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:07.561 01:13:30 blockdev_raid5f -- common/autotest_common.sh@862 -- # return 0 00:47:07.561 01:13:30 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:47:07.561 01:13:30 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:47:07.561 01:13:30 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:47:07.561 01:13:30 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:07.561 01:13:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:07.561 Malloc0 00:47:07.561 Malloc1 00:47:07.561 Malloc2 00:47:07.561 01:13:30 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:07.561 01:13:30 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:47:07.561 01:13:30 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:07.561 01:13:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:07.561 01:13:30 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:07.561 01:13:30 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:47:07.561 01:13:30 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:47:07.561 01:13:30 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:07.561 01:13:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:07.561 01:13:30 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:07.561 01:13:30 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:47:07.561 01:13:30 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:07.561 01:13:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:07.561 01:13:30 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:07.820 01:13:30 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:47:07.820 01:13:30 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:07.820 01:13:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:07.820 01:13:30 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:07.820 01:13:30 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:47:07.820 01:13:30 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:47:07.820 01:13:30 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:47:07.820 01:13:30 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:07.820 01:13:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:07.820 01:13:30 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:07.820 01:13:30 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:47:07.821 01:13:30 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": 
[' ' "4d0c7c3d-9a6c-4678-a607-48c6d1f354c8"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "4d0c7c3d-9a6c-4678-a607-48c6d1f354c8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "4d0c7c3d-9a6c-4678-a607-48c6d1f354c8",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "b9f0d5b8-8c9c-4979-9beb-a680fd5e2693",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "d5677dd8-2ee3-41da-a626-40d4aac0c75e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "20f488c7-ee0c-45ee-be58-539dca48cfc8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:47:07.821 01:13:30 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:47:07.821 01:13:30 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:47:07.821 01:13:30 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:47:07.821 01:13:30 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:47:07.821 01:13:30 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 174597 00:47:07.821 01:13:30 blockdev_raid5f -- common/autotest_common.sh@948 -- # '[' -z 174597 ']' 00:47:07.821 01:13:30 blockdev_raid5f -- common/autotest_common.sh@952 -- # kill -0 174597 00:47:07.821 01:13:30 blockdev_raid5f -- common/autotest_common.sh@953 -- # uname 00:47:07.821 01:13:30 blockdev_raid5f -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:07.821 01:13:30 blockdev_raid5f -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 174597 00:47:07.821 01:13:30 blockdev_raid5f -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:47:07.821 01:13:30 blockdev_raid5f -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:47:07.821 01:13:30 blockdev_raid5f -- common/autotest_common.sh@966 -- # echo 'killing process with pid 174597' 00:47:07.821 killing process with pid 174597 00:47:07.821 01:13:30 blockdev_raid5f -- common/autotest_common.sh@967 -- # kill 174597 00:47:07.821 01:13:30 blockdev_raid5f -- common/autotest_common.sh@972 -- # wait 174597 00:47:11.113 01:13:33 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:47:11.113 01:13:33 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:47:11.113 01:13:33 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:47:11.113 01:13:33 blockdev_raid5f -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:47:11.113 01:13:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:11.113 ************************************ 00:47:11.113 START TEST bdev_hello_world 00:47:11.113 ************************************ 00:47:11.113 01:13:33 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:47:11.113 [2024-07-25 01:13:33.214938] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:47:11.113 [2024-07-25 01:13:33.215353] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174671 ] 00:47:11.113 [2024-07-25 01:13:33.395459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:11.113 [2024-07-25 01:13:33.609374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:11.679 [2024-07-25 01:13:34.173465] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:47:11.679 [2024-07-25 01:13:34.173731] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:47:11.679 [2024-07-25 01:13:34.173803] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:47:11.679 [2024-07-25 01:13:34.174397] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:47:11.679 [2024-07-25 01:13:34.174647] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:47:11.679 [2024-07-25 01:13:34.174800] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:47:11.679 [2024-07-25 01:13:34.174915] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:47:11.679 00:47:11.679 [2024-07-25 01:13:34.175104] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:47:13.581 00:47:13.581 real 0m2.577s 00:47:13.581 user 0m2.206s 00:47:13.581 sys 0m0.253s 00:47:13.581 ************************************ 00:47:13.581 END TEST bdev_hello_world 00:47:13.581 ************************************ 00:47:13.581 01:13:35 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:13.581 01:13:35 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:47:13.581 01:13:35 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:47:13.581 01:13:35 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:47:13.581 01:13:35 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:13.581 01:13:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:13.581 ************************************ 00:47:13.581 START TEST bdev_bounds 00:47:13.581 ************************************ 00:47:13.581 01:13:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:47:13.581 01:13:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=174722 00:47:13.581 01:13:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:47:13.581 01:13:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:47:13.581 Process bdevio pid: 174722 00:47:13.581 01:13:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 174722' 00:47:13.581 01:13:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 174722 00:47:13.581 01:13:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 174722 ']' 00:47:13.581 01:13:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:13.581 01:13:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:13.582 01:13:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:13.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:13.582 01:13:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:13.582 01:13:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:47:13.582 [2024-07-25 01:13:35.860501] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:47:13.582 [2024-07-25 01:13:35.860930] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174722 ] 00:47:13.582 [2024-07-25 01:13:36.049673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:47:13.839 [2024-07-25 01:13:36.255493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:47:13.839 [2024-07-25 01:13:36.255673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:13.839 [2024-07-25 01:13:36.255677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:47:14.406 01:13:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:14.406 01:13:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:47:14.406 01:13:36 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:47:14.406 I/O targets: 00:47:14.406 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:47:14.406 00:47:14.406 00:47:14.406 CUnit - A unit testing framework for C - Version 2.1-3 00:47:14.406 http://cunit.sourceforge.net/ 00:47:14.406 00:47:14.406 00:47:14.406 Suite: bdevio tests on: raid5f 00:47:14.406 Test: blockdev write read block ...passed 00:47:14.406 Test: blockdev write zeroes read block ...passed 00:47:14.406 Test: blockdev write zeroes read no split ...passed 00:47:14.406 Test: blockdev write zeroes read split ...passed 00:47:14.665 Test: blockdev write zeroes read split partial ...passed 00:47:14.665 Test: blockdev reset ...passed 00:47:14.665 Test: blockdev write read 8 blocks ...passed 00:47:14.665 Test: blockdev write read size > 128k ...passed 00:47:14.665 Test: blockdev write read invalid size ...passed 00:47:14.665 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:47:14.665 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:47:14.665 Test: blockdev write read max offset ...passed 00:47:14.665 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:47:14.665 Test: blockdev writev readv 8 blocks ...passed 00:47:14.665 Test: blockdev writev readv 30 x 1block ...passed 00:47:14.665 Test: blockdev writev readv block ...passed 00:47:14.665 Test: blockdev writev readv size > 128k ...passed 00:47:14.665 Test: blockdev writev readv size > 128k in two iovs ...passed 00:47:14.665 Test: blockdev comparev and writev ...passed 00:47:14.665 Test: blockdev nvme passthru rw ...passed 00:47:14.665 Test: blockdev nvme passthru vendor specific ...passed 00:47:14.665 Test: blockdev nvme admin passthru ...passed 00:47:14.665 Test: blockdev copy ...passed 00:47:14.665 00:47:14.665 Run Summary: Type Total Ran Passed Failed Inactive 00:47:14.665 suites 1 1 n/a 0 0 00:47:14.665 tests 23 23 23 0 0 00:47:14.665 asserts 130 130 130 0 n/a 00:47:14.665 00:47:14.665 Elapsed time = 0.527 seconds 00:47:14.665 0 00:47:14.665 01:13:37 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 174722 00:47:14.665 01:13:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 174722 ']' 00:47:14.665 01:13:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 174722 00:47:14.665 01:13:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:47:14.665 01:13:37 blockdev_raid5f.bdev_bounds -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:14.665 01:13:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 174722 00:47:14.665 01:13:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:47:14.665 01:13:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:47:14.665 01:13:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 174722' 00:47:14.665 killing process with pid 174722 00:47:14.665 01:13:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@967 -- # kill 174722 00:47:14.665 01:13:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # wait 174722 00:47:16.579 01:13:38 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:47:16.579 00:47:16.579 real 0m2.946s 00:47:16.579 user 0m6.928s 00:47:16.579 sys 0m0.404s 00:47:16.579 ************************************ 00:47:16.579 END TEST bdev_bounds 00:47:16.579 ************************************ 00:47:16.579 01:13:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:16.579 01:13:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:47:16.579 01:13:38 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:47:16.579 01:13:38 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:47:16.579 01:13:38 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:16.579 01:13:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:16.579 ************************************ 00:47:16.579 START TEST bdev_nbd 00:47:16.579 ************************************ 00:47:16.579 01:13:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:47:16.580 01:13:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:47:16.580 01:13:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:47:16.580 01:13:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:16.580 01:13:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:47:16.580 01:13:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:47:16.580 01:13:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:47:16.580 01:13:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:47:16.580 01:13:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:47:16.580 01:13:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:47:16.580 01:13:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:47:16.580 01:13:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:47:16.580 01:13:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:47:16.580 01:13:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:47:16.580 01:13:38 blockdev_raid5f.bdev_nbd -- 
bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:47:16.580 01:13:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:47:16.580 01:13:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=174791 00:47:16.580 01:13:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:47:16.580 01:13:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 174791 /var/tmp/spdk-nbd.sock 00:47:16.580 01:13:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:47:16.580 01:13:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 174791 ']' 00:47:16.580 01:13:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:47:16.580 01:13:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:16.580 01:13:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:47:16.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:47:16.580 01:13:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:16.580 01:13:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:47:16.580 [2024-07-25 01:13:38.894736] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:47:16.580 [2024-07-25 01:13:38.895151] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:16.580 [2024-07-25 01:13:39.084409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:16.839 [2024-07-25 01:13:39.337859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:17.406 01:13:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:17.406 01:13:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:47:17.406 01:13:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:47:17.406 01:13:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:17.406 01:13:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:47:17.406 01:13:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:47:17.406 01:13:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:47:17.406 01:13:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:17.406 01:13:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:47:17.406 01:13:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:47:17.406 01:13:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:47:17.406 01:13:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:47:17.406 01:13:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:47:17.406 01:13:39 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:47:17.406 01:13:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:47:17.665 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:47:17.665 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:47:17.665 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:47:17.665 01:13:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:47:17.665 01:13:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:47:17.665 01:13:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:47:17.665 01:13:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:47:17.665 01:13:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:47:17.665 01:13:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:47:17.665 01:13:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:47:17.665 01:13:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:47:17.665 01:13:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:47:17.665 1+0 records in 00:47:17.665 1+0 records out 00:47:17.665 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000630669 s, 6.5 MB/s 00:47:17.665 01:13:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:47:17.665 01:13:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:47:17.665 01:13:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:47:17.665 01:13:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:47:17.665 01:13:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:47:17.665 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:47:17.665 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:47:17.665 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:47:17.924 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:47:17.924 { 00:47:17.924 "nbd_device": "/dev/nbd0", 00:47:17.924 "bdev_name": "raid5f" 00:47:17.924 } 00:47:17.924 ]' 00:47:17.924 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:47:17.924 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:47:17.924 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:47:17.924 { 00:47:17.924 "nbd_device": "/dev/nbd0", 00:47:17.924 "bdev_name": "raid5f" 00:47:17.924 } 00:47:17.924 ]' 00:47:17.924 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:47:17.924 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:17.924 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:47:17.924 
01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:47:17.924 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:47:17.924 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:47:17.924 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:47:18.182 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:47:18.182 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:47:18.182 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:47:18.182 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:47:18.182 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:47:18.182 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:47:18.182 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:47:18.182 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:47:18.182 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:47:18.182 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:18.182 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:47:18.182 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:47:18.182 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:47:18.182 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:47:18.182 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:47:18.441 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:47:18.441 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:47:18.441 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:47:18.441 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:47:18.441 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:47:18.441 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:47:18.441 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:47:18.441 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:47:18.441 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:47:18.441 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:18.441 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:47:18.441 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:47:18.441 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:47:18.441 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:47:18.441 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:47:18.441 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:47:18.441 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:47:18.441 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:47:18.441 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:47:18.441 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:47:18.441 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:47:18.441 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:47:18.441 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:47:18.441 01:13:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:47:18.441 /dev/nbd0 00:47:18.441 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:47:18.441 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:47:18.441 01:13:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:47:18.441 01:13:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:47:18.441 01:13:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:47:18.441 01:13:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:47:18.441 01:13:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:47:18.441 01:13:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:47:18.441 01:13:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:47:18.441 01:13:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:47:18.441 01:13:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:47:18.441 1+0 records in 00:47:18.441 1+0 records out 00:47:18.441 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00072699 s, 5.6 MB/s 00:47:18.441 01:13:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:47:18.441 01:13:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:47:18.441 01:13:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:47:18.700 01:13:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:47:18.700 01:13:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:47:18.700 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:47:18.700 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:47:18.700 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:47:18.700 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:18.700 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:47:18.960 { 00:47:18.960 "nbd_device": "/dev/nbd0", 00:47:18.960 "bdev_name": "raid5f" 00:47:18.960 } 00:47:18.960 ]' 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@64 -- # echo '[ 00:47:18.960 { 00:47:18.960 "nbd_device": "/dev/nbd0", 00:47:18.960 "bdev_name": "raid5f" 00:47:18.960 } 00:47:18.960 ]' 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:47:18.960 256+0 records in 00:47:18.960 256+0 records out 00:47:18.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00977833 s, 107 MB/s 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:47:18.960 256+0 records in 00:47:18.960 256+0 records out 00:47:18.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.048349 s, 21.7 MB/s 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:47:18.960 01:13:41 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:47:18.960 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:47:19.219 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:47:19.219 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:47:19.219 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:47:19.219 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:47:19.219 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:47:19.219 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:47:19.219 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:47:19.219 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:47:19.219 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:47:19.219 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:19.219 01:13:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:47:19.477 01:13:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:47:19.477 01:13:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:47:19.477 01:13:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:47:19.477 01:13:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:47:19.477 01:13:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:47:19.477 01:13:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:47:19.477 01:13:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:47:19.477 01:13:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:47:19.477 01:13:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:47:19.477 01:13:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:47:19.477 01:13:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:47:19.477 01:13:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:47:19.477 01:13:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:47:19.477 01:13:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:19.477 01:13:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:47:19.477 01:13:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:47:19.477 01:13:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:47:19.477 01:13:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
bdev_malloc_create -b malloc_lvol_verify 16 512 00:47:19.735 malloc_lvol_verify 00:47:19.735 01:13:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:47:19.993 9d309a8a-d87e-4108-bafa-a0001e57f192 00:47:19.993 01:13:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:47:20.251 630a37cd-8125-4492-bc74-9d0e3e408542 00:47:20.251 01:13:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:47:20.509 /dev/nbd0 00:47:20.509 01:13:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:47:20.509 mke2fs 1.46.5 (30-Dec-2021) 00:47:20.509 00:47:20.509 Filesystem too small for a journal 00:47:20.510 Discarding device blocks: 0/1024 done 00:47:20.510 Creating filesystem with 1024 4k blocks and 1024 inodes 00:47:20.510 00:47:20.510 Allocating group tables: 0/1 done 00:47:20.510 Writing inode tables: 0/1 done 00:47:20.510 Writing superblocks and filesystem accounting information: 0/1 done 00:47:20.510 00:47:20.510 01:13:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:47:20.510 01:13:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:47:20.510 01:13:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:20.510 01:13:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:47:20.510 01:13:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:47:20.510 01:13:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:47:20.510 01:13:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:47:20.510 01:13:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:47:20.768 01:13:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:47:20.768 01:13:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:47:20.768 01:13:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:47:20.768 01:13:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:47:20.768 01:13:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:47:20.768 01:13:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:47:20.768 01:13:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:47:20.768 01:13:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:47:20.768 01:13:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:47:20.768 01:13:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:47:20.768 01:13:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 174791 00:47:20.768 01:13:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 174791 ']' 00:47:20.768 01:13:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 174791 00:47:20.768 01:13:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:47:20.768 01:13:43 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:20.768 01:13:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 174791 00:47:20.768 01:13:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:47:20.768 01:13:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:47:20.768 01:13:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 174791' 00:47:20.768 killing process with pid 174791 00:47:20.768 01:13:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@967 -- # kill 174791 00:47:20.768 01:13:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # wait 174791 00:47:22.669 01:13:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:47:22.669 00:47:22.669 real 0m6.237s 00:47:22.669 user 0m8.188s 00:47:22.669 sys 0m1.426s 00:47:22.669 ************************************ 00:47:22.669 END TEST bdev_nbd 00:47:22.669 ************************************ 00:47:22.669 01:13:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:22.669 01:13:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:47:22.669 01:13:45 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:47:22.669 01:13:45 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:47:22.669 01:13:45 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:47:22.669 01:13:45 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:47:22.669 01:13:45 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:47:22.669 01:13:45 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:22.669 01:13:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:22.669 ************************************ 00:47:22.669 START TEST bdev_fio 00:47:22.669 ************************************ 00:47:22.669 01:13:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:47:22.669 01:13:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:47:22.669 01:13:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:47:22.669 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:47:22.669 01:13:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:47:22.669 01:13:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:47:22.669 01:13:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:47:22.669 01:13:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:47:22.669 01:13:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:47:22.669 01:13:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1278 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:47:22.669 01:13:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1279 -- # local workload=verify 00:47:22.669 01:13:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local bdev_type=AIO 00:47:22.669 01:13:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local env_context= 00:47:22.669 01:13:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local 
fio_dir=/usr/src/fio 00:47:22.669 01:13:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:47:22.669 01:13:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1289 -- # '[' -z verify ']' 00:47:22.669 01:13:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -n '' ']' 00:47:22.669 01:13:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:47:22.669 01:13:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # cat 00:47:22.669 01:13:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1311 -- # '[' verify == verify ']' 00:47:22.670 01:13:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1312 -- # cat 00:47:22.670 01:13:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1321 -- # '[' AIO == AIO ']' 00:47:22.670 01:13:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1322 -- # /usr/src/fio/fio --version 00:47:22.670 01:13:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1322 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:47:22.670 01:13:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # echo serialize_overlap=1 00:47:22.670 01:13:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:47:22.670 01:13:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:47:22.670 01:13:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:47:22.670 01:13:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:47:22.670 01:13:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:47:22.670 01:13:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:47:22.670 01:13:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:22.670 01:13:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:47:22.670 ************************************ 00:47:22.670 START TEST bdev_fio_rw_verify 00:47:22.670 ************************************ 00:47:22.670 01:13:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:47:22.670 01:13:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:47:22.670 01:13:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 
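The fio_config_gen call traced above assembles test/bdev/bdev.fio: the cat steps at @1299 and @1312 append a [global] template and the verify-workload options kept in autotest_common.sh (their contents are not echoed in this log), and the loop at @340-@342 appends the per-bdev job section. A rough sketch of how the file is built, with the unseen template left as a placeholder and only the echoed lines taken from the trace:
fio_cfg=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
touch "$fio_cfg"
# @1299/@1312: append the [global] and verify templates from autotest_common.sh
# (placeholder here; not shown in this log)
echo 'serialize_overlap=1' >> "$fio_cfg"   # @1323, added because fio-3.35 matches fio-3*
echo '[job_raid5f]'        >> "$fio_cfg"   # @341
echo 'filename=raid5f'     >> "$fio_cfg"   # @342
The resulting file is then handed to the fio SPDK plugin together with the fio_params echoed at @346.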
00:47:22.670 01:13:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:47:22.670 01:13:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local sanitizers 00:47:22.670 01:13:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1338 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:47:22.670 01:13:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # shift 00:47:22.670 01:13:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local asan_lib= 00:47:22.670 01:13:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:47:22.670 01:13:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:47:22.670 01:13:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # grep libasan 00:47:22.670 01:13:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:47:22.670 01:13:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:47:22.670 01:13:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:47:22.670 01:13:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # break 00:47:22.670 01:13:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:47:22.670 01:13:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:47:22.929 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:47:22.929 fio-3.35 00:47:22.929 Starting 1 thread 00:47:35.170 00:47:35.170 job_raid5f: (groupid=0, jobs=1): err= 0: pid=175040: Thu Jul 25 01:13:56 2024 00:47:35.170 read: IOPS=11.0k, BW=42.8MiB/s (44.9MB/s)(428MiB/10001msec) 00:47:35.170 slat (usec): min=17, max=215, avg=21.75, stdev= 3.92 00:47:35.170 clat (usec): min=10, max=541, avg=146.30, stdev=54.81 00:47:35.170 lat (usec): min=29, max=573, avg=168.06, stdev=55.81 00:47:35.170 clat percentiles (usec): 00:47:35.170 | 50.000th=[ 145], 99.000th=[ 277], 99.900th=[ 343], 99.990th=[ 408], 00:47:35.170 | 99.999th=[ 510] 00:47:35.170 write: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(445MiB/9880msec); 0 zone resets 00:47:35.170 slat (usec): min=7, max=262, avg=18.51, stdev= 4.74 00:47:35.170 clat (usec): min=60, max=1186, avg=331.06, stdev=57.24 00:47:35.170 lat (usec): min=76, max=1448, avg=349.57, stdev=59.46 00:47:35.170 clat percentiles (usec): 00:47:35.170 | 50.000th=[ 326], 99.000th=[ 502], 99.900th=[ 586], 99.990th=[ 848], 00:47:35.170 | 99.999th=[ 1123] 00:47:35.170 bw ( KiB/s): min=40464, max=49816, per=98.73%, avg=45513.26, stdev=2556.54, samples=19 00:47:35.170 iops : min=10116, max=12454, avg=11378.32, stdev=639.13, samples=19 00:47:35.170 lat (usec) : 
20=0.01%, 50=0.01%, 100=11.47%, 250=38.97%, 500=49.00% 00:47:35.170 lat (usec) : 750=0.54%, 1000=0.01% 00:47:35.170 lat (msec) : 2=0.01% 00:47:35.170 cpu : usr=99.28%, sys=0.68%, ctx=112, majf=0, minf=7818 00:47:35.170 IO depths : 1=7.7%, 2=20.0%, 4=55.0%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:35.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:35.170 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:35.170 issued rwts: total=109580,113861,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:35.170 latency : target=0, window=0, percentile=100.00%, depth=8 00:47:35.170 00:47:35.170 Run status group 0 (all jobs): 00:47:35.170 READ: bw=42.8MiB/s (44.9MB/s), 42.8MiB/s-42.8MiB/s (44.9MB/s-44.9MB/s), io=428MiB (449MB), run=10001-10001msec 00:47:35.170 WRITE: bw=45.0MiB/s (47.2MB/s), 45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=445MiB (466MB), run=9880-9880msec 00:47:35.735 ----------------------------------------------------- 00:47:35.735 Suppressions used: 00:47:35.735 count bytes template 00:47:35.735 1 7 /usr/src/fio/parse.c 00:47:35.735 865 83040 /usr/src/fio/iolog.c 00:47:35.735 1 904 libcrypto.so 00:47:35.735 ----------------------------------------------------- 00:47:35.735 00:47:35.735 ************************************ 00:47:35.735 END TEST bdev_fio_rw_verify 00:47:35.735 ************************************ 00:47:35.735 00:47:35.735 real 0m12.978s 00:47:35.735 user 0m13.620s 00:47:35.735 sys 0m0.709s 00:47:35.735 01:13:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:35.735 01:13:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:47:35.735 01:13:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:47:35.735 01:13:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:47:35.735 01:13:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:47:35.735 01:13:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1278 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:47:35.735 01:13:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1279 -- # local workload=trim 00:47:35.735 01:13:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local bdev_type= 00:47:35.735 01:13:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local env_context= 00:47:35.735 01:13:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local fio_dir=/usr/src/fio 00:47:35.735 01:13:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:47:35.735 01:13:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1289 -- # '[' -z trim ']' 00:47:35.736 01:13:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -n '' ']' 00:47:35.736 01:13:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:47:35.736 01:13:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # cat 00:47:35.736 01:13:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1311 -- # '[' trim == verify ']' 00:47:35.736 01:13:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # '[' trim == trim ']' 00:47:35.736 01:13:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # echo rw=trimwrite 
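Worth noting in the rw_verify trace above is the sanitizer-preload step at @1337-@1350: the spdk_bdev fio plugin is built with ASan while /usr/src/fio/fio itself is not instrumented, so the ASan runtime must be loaded ahead of the plugin via LD_PRELOAD. Condensed into a standalone sketch of that pattern, using the same paths and fio arguments as the trace:
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
    # find the sanitizer runtime the plugin links against, if any
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$asan_lib" ]] && break
done
# preload the runtime first, then the plugin, then run fio as usual
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k \
    --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 \
    --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 \
    --aux-path=/home/vagrant/spdk_repo/spdk/../output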
00:47:35.736 01:13:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "4d0c7c3d-9a6c-4678-a607-48c6d1f354c8"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "4d0c7c3d-9a6c-4678-a607-48c6d1f354c8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "4d0c7c3d-9a6c-4678-a607-48c6d1f354c8",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "b9f0d5b8-8c9c-4979-9beb-a680fd5e2693",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "d5677dd8-2ee3-41da-a626-40d4aac0c75e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "20f488c7-ee0c-45ee-be58-539dca48cfc8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:47:35.736 01:13:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:47:35.736 01:13:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:47:35.736 01:13:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:47:35.736 01:13:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:47:35.736 /home/vagrant/spdk_repo/spdk 00:47:35.736 01:13:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:47:35.736 01:13:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:47:35.736 00:47:35.736 real 0m13.194s 00:47:35.736 user 0m13.739s 00:47:35.736 sys 0m0.800s 00:47:35.736 01:13:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:35.736 01:13:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:47:35.736 ************************************ 00:47:35.736 END TEST bdev_fio 00:47:35.736 ************************************ 00:47:35.736 01:13:58 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:47:35.736 01:13:58 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:47:35.736 01:13:58 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:47:35.736 01:13:58 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:35.736 01:13:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:35.736 ************************************ 00:47:35.736 START TEST bdev_verify 00:47:35.736 
************************************ 00:47:35.736 01:13:58 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:47:35.995 [2024-07-25 01:13:58.443884] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:47:35.995 [2024-07-25 01:13:58.444100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175203 ] 00:47:35.995 [2024-07-25 01:13:58.630560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:47:36.254 [2024-07-25 01:13:58.849977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:36.254 [2024-07-25 01:13:58.849981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:47:36.822 Running I/O for 5 seconds... 00:47:42.090 00:47:42.090 Latency(us) 00:47:42.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:42.090 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:47:42.090 Verification LBA range: start 0x0 length 0x2000 00:47:42.090 raid5f : 5.01 6833.21 26.69 0.00 0.00 27851.81 217.48 23218.47 00:47:42.090 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:47:42.090 Verification LBA range: start 0x2000 length 0x2000 00:47:42.090 raid5f : 5.01 6830.00 26.68 0.00 0.00 28107.68 182.37 23218.47 00:47:42.091 =================================================================================================================== 00:47:42.091 Total : 13663.20 53.37 0.00 0.00 27979.75 182.37 23218.47 00:47:43.475 ************************************ 00:47:43.475 END TEST bdev_verify 00:47:43.475 ************************************ 00:47:43.475 00:47:43.475 real 0m7.568s 00:47:43.475 user 0m13.859s 00:47:43.475 sys 0m0.216s 00:47:43.475 01:14:05 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:43.475 01:14:05 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:47:43.475 01:14:05 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:47:43.475 01:14:05 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:47:43.475 01:14:05 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:43.475 01:14:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:43.475 ************************************ 00:47:43.475 START TEST bdev_verify_big_io 00:47:43.475 ************************************ 00:47:43.475 01:14:05 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:47:43.475 [2024-07-25 01:14:06.047039] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 
00:47:43.475 [2024-07-25 01:14:06.047186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175319 ] 00:47:43.733 [2024-07-25 01:14:06.210556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:47:43.991 [2024-07-25 01:14:06.402657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:43.991 [2024-07-25 01:14:06.402660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:47:44.558 Running I/O for 5 seconds... 00:47:49.847 00:47:49.847 Latency(us) 00:47:49.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:49.847 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:47:49.847 Verification LBA range: start 0x0 length 0x200 00:47:49.847 raid5f : 5.16 479.27 29.95 0.00 0.00 6524691.21 143.36 283614.84 00:47:49.847 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:47:49.847 Verification LBA range: start 0x200 length 0x200 00:47:49.847 raid5f : 5.10 472.98 29.56 0.00 0.00 6676715.19 153.11 285612.13 00:47:49.847 =================================================================================================================== 00:47:49.847 Total : 952.25 59.52 0.00 0.00 6599739.06 143.36 285612.13 00:47:51.224 00:47:51.224 real 0m7.671s 00:47:51.224 user 0m14.087s 00:47:51.224 sys 0m0.280s 00:47:51.224 01:14:13 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:51.224 01:14:13 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:47:51.224 ************************************ 00:47:51.224 END TEST bdev_verify_big_io 00:47:51.224 ************************************ 00:47:51.224 01:14:13 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:47:51.224 01:14:13 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:47:51.224 01:14:13 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:51.224 01:14:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:51.224 ************************************ 00:47:51.224 START TEST bdev_write_zeroes 00:47:51.224 ************************************ 00:47:51.224 01:14:13 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:47:51.224 [2024-07-25 01:14:13.779095] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:47:51.224 [2024-07-25 01:14:13.779247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175421 ] 00:47:51.483 [2024-07-25 01:14:13.935492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:51.483 [2024-07-25 01:14:14.125381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:52.058 Running I/O for 1 seconds... 
00:47:52.993 00:47:52.993 Latency(us) 00:47:52.993 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:52.993 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:47:52.993 raid5f : 1.00 27098.42 105.85 0.00 0.00 4708.96 1365.33 5554.96 00:47:52.993 =================================================================================================================== 00:47:52.993 Total : 27098.42 105.85 0.00 0.00 4708.96 1365.33 5554.96 00:47:54.917 00:47:54.917 real 0m3.450s 00:47:54.917 user 0m3.108s 00:47:54.917 sys 0m0.228s 00:47:54.917 01:14:17 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:54.917 01:14:17 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:47:54.917 ************************************ 00:47:54.917 END TEST bdev_write_zeroes 00:47:54.917 ************************************ 00:47:54.917 01:14:17 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:47:54.917 01:14:17 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:47:54.917 01:14:17 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:54.917 01:14:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:54.917 ************************************ 00:47:54.917 START TEST bdev_json_nonenclosed 00:47:54.917 ************************************ 00:47:54.917 01:14:17 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:47:54.917 [2024-07-25 01:14:17.317297] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:47:54.917 [2024-07-25 01:14:17.317526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175485 ] 00:47:54.917 [2024-07-25 01:14:17.499855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:55.175 [2024-07-25 01:14:17.695073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:55.175 [2024-07-25 01:14:17.695176] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:47:55.175 [2024-07-25 01:14:17.695221] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:47:55.175 [2024-07-25 01:14:17.695252] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:47:55.741 00:47:55.741 real 0m0.885s 00:47:55.741 user 0m0.616s 00:47:55.741 sys 0m0.169s 00:47:55.741 01:14:18 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:55.741 01:14:18 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:47:55.741 ************************************ 00:47:55.741 END TEST bdev_json_nonenclosed 00:47:55.741 ************************************ 00:47:55.741 01:14:18 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:47:55.741 01:14:18 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:47:55.741 01:14:18 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:55.741 01:14:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:55.741 ************************************ 00:47:55.741 START TEST bdev_json_nonarray 00:47:55.741 ************************************ 00:47:55.741 01:14:18 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:47:55.741 [2024-07-25 01:14:18.245261] Starting SPDK v24.09-pre git sha1 6e4acbb0d / DPDK 24.03.0 initialization... 00:47:55.741 [2024-07-25 01:14:18.245410] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175523 ] 00:47:56.000 [2024-07-25 01:14:18.402909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:56.000 [2024-07-25 01:14:18.587926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:56.000 [2024-07-25 01:14:18.588043] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
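The bdev_json_nonenclosed test above and the bdev_json_nonarray run in progress here are negative tests: bdevperf is pointed at deliberately malformed configs, and the test passes as long as json_config_prepare_ctx rejects them with exactly the errors logged. The shipped nonenclosed.json and nonarray.json are not printed in this log; hypothetical minimal configs that would trigger the same two messages look like:
# top-level value is not an object
#   -> "Invalid JSON configuration: not enclosed in {}."
cat > nonenclosed.json <<'EOF'
"subsystems": [ { "subsystem": "bdev", "config": [] } ]
EOF
# "subsystems" is an object rather than an array
#   -> "Invalid JSON configuration: 'subsystems' should be an array."
cat > nonarray.json <<'EOF'
{ "subsystems": { "subsystem": "bdev", "config": [] } }
EOF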
00:47:56.000 [2024-07-25 01:14:18.588093] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:47:56.000 [2024-07-25 01:14:18.588119] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:47:56.566 00:47:56.566 real 0m0.828s 00:47:56.566 user 0m0.596s 00:47:56.566 sys 0m0.132s 00:47:56.566 01:14:19 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:56.566 01:14:19 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:47:56.566 ************************************ 00:47:56.566 END TEST bdev_json_nonarray 00:47:56.567 ************************************ 00:47:56.567 01:14:19 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:47:56.567 01:14:19 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:47:56.567 01:14:19 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:47:56.567 01:14:19 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:47:56.567 01:14:19 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:47:56.567 01:14:19 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:47:56.567 01:14:19 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:47:56.567 01:14:19 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:47:56.567 01:14:19 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:47:56.567 01:14:19 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:47:56.567 01:14:19 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:47:56.567 ************************************ 00:47:56.567 END TEST blockdev_raid5f 00:47:56.567 ************************************ 00:47:56.567 00:47:56.567 real 0m50.488s 00:47:56.567 user 1m8.263s 00:47:56.567 sys 0m4.774s 00:47:56.567 01:14:19 blockdev_raid5f -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:56.567 01:14:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:56.567 01:14:19 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:47:56.567 01:14:19 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:47:56.567 01:14:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:47:56.567 01:14:19 -- common/autotest_common.sh@10 -- # set +x 00:47:56.567 01:14:19 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:47:56.567 01:14:19 -- common/autotest_common.sh@1390 -- # local autotest_es=0 00:47:56.567 01:14:19 -- common/autotest_common.sh@1391 -- # xtrace_disable 00:47:56.567 01:14:19 -- common/autotest_common.sh@10 -- # set +x 00:47:58.468 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:47:58.468 Waiting for block devices as requested 00:47:58.727 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:47:59.293 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:47:59.293 Cleaning 00:47:59.293 Removing: /var/run/dpdk/spdk0/config 00:47:59.293 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:47:59.293 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:47:59.293 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:47:59.293 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:47:59.293 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:47:59.293 Removing: /var/run/dpdk/spdk0/hugepage_info 00:47:59.293 Removing: /dev/shm/spdk_tgt_trace.pid112142 
00:47:59.293 Removing: /var/run/dpdk/spdk0 00:47:59.293 Removing: /var/run/dpdk/spdk_pid111879 00:47:59.293 Removing: /var/run/dpdk/spdk_pid112142 00:47:59.293 Removing: /var/run/dpdk/spdk_pid112396 00:47:59.293 Removing: /var/run/dpdk/spdk_pid112522 00:47:59.293 Removing: /var/run/dpdk/spdk_pid112586 00:47:59.293 Removing: /var/run/dpdk/spdk_pid112727 00:47:59.293 Removing: /var/run/dpdk/spdk_pid112757 00:47:59.293 Removing: /var/run/dpdk/spdk_pid112920 00:47:59.293 Removing: /var/run/dpdk/spdk_pid113197 00:47:59.293 Removing: /var/run/dpdk/spdk_pid113383 00:47:59.293 Removing: /var/run/dpdk/spdk_pid113506 00:47:59.293 Removing: /var/run/dpdk/spdk_pid113628 00:47:59.293 Removing: /var/run/dpdk/spdk_pid113752 00:47:59.293 Removing: /var/run/dpdk/spdk_pid113882 00:47:59.293 Removing: /var/run/dpdk/spdk_pid113934 00:47:59.293 Removing: /var/run/dpdk/spdk_pid113980 00:47:59.293 Removing: /var/run/dpdk/spdk_pid114058 00:47:59.293 Removing: /var/run/dpdk/spdk_pid114183 00:47:59.293 Removing: /var/run/dpdk/spdk_pid114725 00:47:59.293 Removing: /var/run/dpdk/spdk_pid114809 00:47:59.293 Removing: /var/run/dpdk/spdk_pid114903 00:47:59.293 Removing: /var/run/dpdk/spdk_pid114924 00:47:59.293 Removing: /var/run/dpdk/spdk_pid115105 00:47:59.293 Removing: /var/run/dpdk/spdk_pid115126 00:47:59.293 Removing: /var/run/dpdk/spdk_pid115297 00:47:59.293 Removing: /var/run/dpdk/spdk_pid115328 00:47:59.293 Removing: /var/run/dpdk/spdk_pid115406 00:47:59.293 Removing: /var/run/dpdk/spdk_pid115434 00:47:59.293 Removing: /var/run/dpdk/spdk_pid115510 00:47:59.293 Removing: /var/run/dpdk/spdk_pid115533 00:47:59.293 Removing: /var/run/dpdk/spdk_pid115751 00:47:59.293 Removing: /var/run/dpdk/spdk_pid115797 00:47:59.293 Removing: /var/run/dpdk/spdk_pid115845 00:47:59.293 Removing: /var/run/dpdk/spdk_pid115937 00:47:59.293 Removing: /var/run/dpdk/spdk_pid116031 00:47:59.293 Removing: /var/run/dpdk/spdk_pid116085 00:47:59.293 Removing: /var/run/dpdk/spdk_pid116191 00:47:59.293 Removing: /var/run/dpdk/spdk_pid116242 00:47:59.293 Removing: /var/run/dpdk/spdk_pid116305 00:47:59.293 Removing: /var/run/dpdk/spdk_pid116370 00:47:59.293 Removing: /var/run/dpdk/spdk_pid116434 00:47:59.293 Removing: /var/run/dpdk/spdk_pid116491 00:47:59.293 Removing: /var/run/dpdk/spdk_pid116557 00:47:59.293 Removing: /var/run/dpdk/spdk_pid116615 00:47:59.293 Removing: /var/run/dpdk/spdk_pid116678 00:47:59.293 Removing: /var/run/dpdk/spdk_pid116732 00:47:59.293 Removing: /var/run/dpdk/spdk_pid116793 00:47:59.293 Removing: /var/run/dpdk/spdk_pid116857 00:47:59.293 Removing: /var/run/dpdk/spdk_pid116915 00:47:59.293 Removing: /var/run/dpdk/spdk_pid116981 00:47:59.293 Removing: /var/run/dpdk/spdk_pid117044 00:47:59.293 Removing: /var/run/dpdk/spdk_pid117102 00:47:59.293 Removing: /var/run/dpdk/spdk_pid117165 00:47:59.293 Removing: /var/run/dpdk/spdk_pid117233 00:47:59.551 Removing: /var/run/dpdk/spdk_pid117294 00:47:59.551 Removing: /var/run/dpdk/spdk_pid117357 00:47:59.551 Removing: /var/run/dpdk/spdk_pid117422 00:47:59.551 Removing: /var/run/dpdk/spdk_pid117514 00:47:59.551 Removing: /var/run/dpdk/spdk_pid117669 00:47:59.551 Removing: /var/run/dpdk/spdk_pid117864 00:47:59.551 Removing: /var/run/dpdk/spdk_pid117975 00:47:59.551 Removing: /var/run/dpdk/spdk_pid118049 00:47:59.551 Removing: /var/run/dpdk/spdk_pid119325 00:47:59.551 Removing: /var/run/dpdk/spdk_pid119557 00:47:59.551 Removing: /var/run/dpdk/spdk_pid119779 00:47:59.551 Removing: /var/run/dpdk/spdk_pid119919 00:47:59.551 Removing: /var/run/dpdk/spdk_pid120080 00:47:59.551 Removing: 
/var/run/dpdk/spdk_pid120163 00:47:59.551 Removing: /var/run/dpdk/spdk_pid120201 00:47:59.551 Removing: /var/run/dpdk/spdk_pid120239 00:47:59.551 Removing: /var/run/dpdk/spdk_pid120716 00:47:59.551 Removing: /var/run/dpdk/spdk_pid120813 00:47:59.551 Removing: /var/run/dpdk/spdk_pid120933 00:47:59.551 Removing: /var/run/dpdk/spdk_pid120998 00:47:59.551 Removing: /var/run/dpdk/spdk_pid122331 00:47:59.551 Removing: /var/run/dpdk/spdk_pid122700 00:47:59.551 Removing: /var/run/dpdk/spdk_pid122892 00:47:59.551 Removing: /var/run/dpdk/spdk_pid123833 00:47:59.551 Removing: /var/run/dpdk/spdk_pid124202 00:47:59.551 Removing: /var/run/dpdk/spdk_pid124405 00:47:59.551 Removing: /var/run/dpdk/spdk_pid125369 00:47:59.552 Removing: /var/run/dpdk/spdk_pid125901 00:47:59.552 Removing: /var/run/dpdk/spdk_pid126087 00:47:59.552 Removing: /var/run/dpdk/spdk_pid128189 00:47:59.552 Removing: /var/run/dpdk/spdk_pid128669 00:47:59.552 Removing: /var/run/dpdk/spdk_pid128876 00:47:59.552 Removing: /var/run/dpdk/spdk_pid130994 00:47:59.552 Removing: /var/run/dpdk/spdk_pid131482 00:47:59.552 Removing: /var/run/dpdk/spdk_pid131684 00:47:59.552 Removing: /var/run/dpdk/spdk_pid133808 00:47:59.552 Removing: /var/run/dpdk/spdk_pid134538 00:47:59.552 Removing: /var/run/dpdk/spdk_pid134740 00:47:59.552 Removing: /var/run/dpdk/spdk_pid137102 00:47:59.552 Removing: /var/run/dpdk/spdk_pid137645 00:47:59.552 Removing: /var/run/dpdk/spdk_pid137854 00:47:59.552 Removing: /var/run/dpdk/spdk_pid140214 00:47:59.552 Removing: /var/run/dpdk/spdk_pid140759 00:47:59.552 Removing: /var/run/dpdk/spdk_pid140963 00:47:59.552 Removing: /var/run/dpdk/spdk_pid143348 00:47:59.552 Removing: /var/run/dpdk/spdk_pid144198 00:47:59.552 Removing: /var/run/dpdk/spdk_pid144408 00:47:59.552 Removing: /var/run/dpdk/spdk_pid144619 00:47:59.552 Removing: /var/run/dpdk/spdk_pid145156 00:47:59.552 Removing: /var/run/dpdk/spdk_pid146092 00:47:59.552 Removing: /var/run/dpdk/spdk_pid146563 00:47:59.552 Removing: /var/run/dpdk/spdk_pid147409 00:47:59.552 Removing: /var/run/dpdk/spdk_pid147963 00:47:59.552 Removing: /var/run/dpdk/spdk_pid148980 00:47:59.552 Removing: /var/run/dpdk/spdk_pid149505 00:47:59.552 Removing: /var/run/dpdk/spdk_pid152306 00:47:59.552 Removing: /var/run/dpdk/spdk_pid153048 00:47:59.552 Removing: /var/run/dpdk/spdk_pid153591 00:47:59.552 Removing: /var/run/dpdk/spdk_pid156670 00:47:59.552 Removing: /var/run/dpdk/spdk_pid157499 00:47:59.552 Removing: /var/run/dpdk/spdk_pid158128 00:47:59.552 Removing: /var/run/dpdk/spdk_pid159495 00:47:59.552 Removing: /var/run/dpdk/spdk_pid160010 00:47:59.552 Removing: /var/run/dpdk/spdk_pid161248 00:47:59.552 Removing: /var/run/dpdk/spdk_pid161761 00:47:59.552 Removing: /var/run/dpdk/spdk_pid163000 00:47:59.552 Removing: /var/run/dpdk/spdk_pid163516 00:47:59.811 Removing: /var/run/dpdk/spdk_pid164341 00:47:59.811 Removing: /var/run/dpdk/spdk_pid164403 00:47:59.811 Removing: /var/run/dpdk/spdk_pid164460 00:47:59.811 Removing: /var/run/dpdk/spdk_pid164519 00:47:59.811 Removing: /var/run/dpdk/spdk_pid164657 00:47:59.811 Removing: /var/run/dpdk/spdk_pid164812 00:47:59.811 Removing: /var/run/dpdk/spdk_pid165040 00:47:59.811 Removing: /var/run/dpdk/spdk_pid165335 00:47:59.811 Removing: /var/run/dpdk/spdk_pid165359 00:47:59.811 Removing: /var/run/dpdk/spdk_pid165410 00:47:59.811 Removing: /var/run/dpdk/spdk_pid165448 00:47:59.811 Removing: /var/run/dpdk/spdk_pid165476 00:47:59.811 Removing: /var/run/dpdk/spdk_pid165515 00:47:59.811 Removing: /var/run/dpdk/spdk_pid165547 00:47:59.811 Removing: 
/var/run/dpdk/spdk_pid165575 00:47:59.811 Removing: /var/run/dpdk/spdk_pid165613 00:47:59.811 Removing: /var/run/dpdk/spdk_pid165641 00:47:59.811 Removing: /var/run/dpdk/spdk_pid165676 00:47:59.811 Removing: /var/run/dpdk/spdk_pid165708 00:47:59.811 Removing: /var/run/dpdk/spdk_pid165742 00:47:59.811 Removing: /var/run/dpdk/spdk_pid165770 00:47:59.811 Removing: /var/run/dpdk/spdk_pid165809 00:47:59.811 Removing: /var/run/dpdk/spdk_pid165840 00:47:59.811 Removing: /var/run/dpdk/spdk_pid165869 00:47:59.811 Removing: /var/run/dpdk/spdk_pid165903 00:47:59.811 Removing: /var/run/dpdk/spdk_pid165930 00:47:59.811 Removing: /var/run/dpdk/spdk_pid165963 00:47:59.811 Removing: /var/run/dpdk/spdk_pid166016 00:47:59.811 Removing: /var/run/dpdk/spdk_pid166049 00:47:59.811 Removing: /var/run/dpdk/spdk_pid166095 00:47:59.811 Removing: /var/run/dpdk/spdk_pid166174 00:47:59.811 Removing: /var/run/dpdk/spdk_pid166234 00:47:59.811 Removing: /var/run/dpdk/spdk_pid166262 00:47:59.811 Removing: /var/run/dpdk/spdk_pid166314 00:47:59.811 Removing: /var/run/dpdk/spdk_pid166342 00:47:59.811 Removing: /var/run/dpdk/spdk_pid166362 00:47:59.811 Removing: /var/run/dpdk/spdk_pid166434 00:47:59.811 Removing: /var/run/dpdk/spdk_pid166464 00:47:59.811 Removing: /var/run/dpdk/spdk_pid166510 00:47:59.811 Removing: /var/run/dpdk/spdk_pid166542 00:47:59.811 Removing: /var/run/dpdk/spdk_pid166567 00:47:59.811 Removing: /var/run/dpdk/spdk_pid166590 00:47:59.811 Removing: /var/run/dpdk/spdk_pid166620 00:47:59.811 Removing: /var/run/dpdk/spdk_pid166651 00:47:59.811 Removing: /var/run/dpdk/spdk_pid166676 00:47:59.811 Removing: /var/run/dpdk/spdk_pid166702 00:47:59.811 Removing: /var/run/dpdk/spdk_pid166750 00:47:59.811 Removing: /var/run/dpdk/spdk_pid166804 00:47:59.811 Removing: /var/run/dpdk/spdk_pid166833 00:47:59.811 Removing: /var/run/dpdk/spdk_pid166884 00:47:59.811 Removing: /var/run/dpdk/spdk_pid166912 00:47:59.811 Removing: /var/run/dpdk/spdk_pid166934 00:47:59.811 Removing: /var/run/dpdk/spdk_pid167004 00:47:59.811 Removing: /var/run/dpdk/spdk_pid167031 00:47:59.811 Removing: /var/run/dpdk/spdk_pid167082 00:47:59.811 Removing: /var/run/dpdk/spdk_pid167101 00:47:59.811 Removing: /var/run/dpdk/spdk_pid167128 00:47:59.811 Removing: /var/run/dpdk/spdk_pid167155 00:47:59.811 Removing: /var/run/dpdk/spdk_pid167183 00:47:59.811 Removing: /var/run/dpdk/spdk_pid167208 00:47:59.811 Removing: /var/run/dpdk/spdk_pid167236 00:47:59.811 Removing: /var/run/dpdk/spdk_pid167261 00:47:59.811 Removing: /var/run/dpdk/spdk_pid167371 00:47:59.811 Removing: /var/run/dpdk/spdk_pid167462 00:47:59.811 Removing: /var/run/dpdk/spdk_pid167622 00:47:59.811 Removing: /var/run/dpdk/spdk_pid167657 00:47:59.811 Removing: /var/run/dpdk/spdk_pid167716 00:47:59.811 Removing: /var/run/dpdk/spdk_pid167776 00:48:00.070 Removing: /var/run/dpdk/spdk_pid167821 00:48:00.070 Removing: /var/run/dpdk/spdk_pid167855 00:48:00.070 Removing: /var/run/dpdk/spdk_pid167891 00:48:00.070 Removing: /var/run/dpdk/spdk_pid167935 00:48:00.070 Removing: /var/run/dpdk/spdk_pid167970 00:48:00.070 Removing: /var/run/dpdk/spdk_pid168067 00:48:00.070 Removing: /var/run/dpdk/spdk_pid168128 00:48:00.070 Removing: /var/run/dpdk/spdk_pid168185 00:48:00.070 Removing: /var/run/dpdk/spdk_pid168464 00:48:00.070 Removing: /var/run/dpdk/spdk_pid168599 00:48:00.070 Removing: /var/run/dpdk/spdk_pid168649 00:48:00.070 Removing: /var/run/dpdk/spdk_pid168745 00:48:00.070 Removing: /var/run/dpdk/spdk_pid168836 00:48:00.070 Removing: /var/run/dpdk/spdk_pid168881 00:48:00.070 Removing: 
/var/run/dpdk/spdk_pid169139 00:48:00.070 Removing: /var/run/dpdk/spdk_pid169242 00:48:00.070 Removing: /var/run/dpdk/spdk_pid169345 00:48:00.070 Removing: /var/run/dpdk/spdk_pid169409 00:48:00.070 Removing: /var/run/dpdk/spdk_pid169438 00:48:00.070 Removing: /var/run/dpdk/spdk_pid169525 00:48:00.070 Removing: /var/run/dpdk/spdk_pid169972 00:48:00.070 Removing: /var/run/dpdk/spdk_pid170017 00:48:00.070 Removing: /var/run/dpdk/spdk_pid170333 00:48:00.070 Removing: /var/run/dpdk/spdk_pid170437 00:48:00.070 Removing: /var/run/dpdk/spdk_pid170548 00:48:00.070 Removing: /var/run/dpdk/spdk_pid170605 00:48:00.070 Removing: /var/run/dpdk/spdk_pid170643 00:48:00.070 Removing: /var/run/dpdk/spdk_pid170673 00:48:00.070 Removing: /var/run/dpdk/spdk_pid172025 00:48:00.070 Removing: /var/run/dpdk/spdk_pid172170 00:48:00.070 Removing: /var/run/dpdk/spdk_pid172174 00:48:00.070 Removing: /var/run/dpdk/spdk_pid172191 00:48:00.070 Removing: /var/run/dpdk/spdk_pid172692 00:48:00.070 Removing: /var/run/dpdk/spdk_pid172797 00:48:00.070 Removing: /var/run/dpdk/spdk_pid173724 00:48:00.070 Removing: /var/run/dpdk/spdk_pid174597 00:48:00.070 Removing: /var/run/dpdk/spdk_pid174671 00:48:00.070 Removing: /var/run/dpdk/spdk_pid174722 00:48:00.070 Removing: /var/run/dpdk/spdk_pid175020 00:48:00.070 Removing: /var/run/dpdk/spdk_pid175203 00:48:00.070 Removing: /var/run/dpdk/spdk_pid175319 00:48:00.070 Removing: /var/run/dpdk/spdk_pid175421 00:48:00.070 Removing: /var/run/dpdk/spdk_pid175485 00:48:00.070 Removing: /var/run/dpdk/spdk_pid175523 00:48:00.070 Clean 00:48:00.070 01:14:22 -- common/autotest_common.sh@1449 -- # return 0 00:48:00.070 01:14:22 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:48:00.070 01:14:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:48:00.070 01:14:22 -- common/autotest_common.sh@10 -- # set +x 00:48:00.329 01:14:22 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:48:00.329 01:14:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:48:00.329 01:14:22 -- common/autotest_common.sh@10 -- # set +x 00:48:00.329 01:14:22 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:48:00.329 01:14:22 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:48:00.329 01:14:22 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:48:00.329 01:14:22 -- spdk/autotest.sh@391 -- # hash lcov 00:48:00.329 01:14:22 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:48:00.329 01:14:22 -- spdk/autotest.sh@393 -- # hostname 00:48:00.329 01:14:22 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:48:00.588 geninfo: WARNING: invalid characters removed from testname! 
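With the workspace cleaned up, autotest.sh turns to coverage: the lcov capture at @393 above writes cov_test.info for this host, and the entries that follow fold it into the baseline and strip out code that is not SPDK's own. The same steps condensed into a sketch, with the long repeated --rc option block abbreviated to $RC:
RC='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1'
out=/home/vagrant/spdk_repo/spdk/../output
# @394: merge the pre-test baseline with the capture taken after the tests
lcov $RC --no-external -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
# @395-@399: drop DPDK, system headers and standalone tools from the total
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $RC --no-external -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
done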
00:48:47.267 01:15:04 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:48:47.267 01:15:09 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:48:50.548 01:15:12 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:48:53.076 01:15:15 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:48:55.629 01:15:18 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:48:58.914 01:15:21 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:49:01.448 01:15:23 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:49:01.448 01:15:23 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:49:01.448 01:15:23 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:49:01.448 01:15:23 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:01.448 01:15:23 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:01.448 01:15:23 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:49:01.448 01:15:23 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:49:01.448 01:15:23 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:49:01.448 01:15:23 -- paths/export.sh@5 -- $ export PATH 00:49:01.448 01:15:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:49:01.448 01:15:23 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:49:01.448 01:15:23 -- common/autobuild_common.sh@447 -- $ date +%s 00:49:01.448 01:15:23 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721870123.XXXXXX 00:49:01.448 01:15:23 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721870123.BieiUA 00:49:01.448 01:15:23 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:49:01.448 01:15:23 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:49:01.448 01:15:23 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:49:01.448 01:15:23 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:49:01.448 01:15:23 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:49:01.448 01:15:23 -- common/autobuild_common.sh@463 -- $ get_config_params 00:49:01.448 01:15:23 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:49:01.448 01:15:23 -- common/autotest_common.sh@10 -- $ set +x 00:49:01.448 01:15:23 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:49:01.448 01:15:23 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:49:01.448 01:15:23 -- pm/common@17 -- $ local monitor 00:49:01.448 01:15:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:49:01.448 01:15:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:49:01.448 01:15:23 -- pm/common@25 -- $ sleep 1 00:49:01.448 01:15:23 -- pm/common@21 -- $ date +%s 00:49:01.448 01:15:23 -- pm/common@21 -- $ date +%s 00:49:01.448 01:15:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721870123 00:49:01.448 01:15:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721870123 00:49:01.448 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721870123_collect-vmstat.pm.log 00:49:01.448 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721870123_collect-cpu-load.pm.log 00:49:02.385 01:15:24 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:49:02.385 01:15:24 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 
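The start_monitor_resources call above launches the two pm collectors (collect-cpu-load and collect-vmstat), and the pm/common trap entries a few lines further on tear them down again via pid files under $output/power. A sketch of that lifecycle; the pid-file write is assumed from the later [[ -e ... ]] checks, since only the kill side is echoed in this log:
out=/home/vagrant/spdk_repo/spdk/../output
# start side: run a collector in the background and remember its pid (assumed mechanism)
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d "$out/power" -l -p "monitor.autopackage.sh.$(date +%s)" &
echo $! > "$out/power/collect-cpu-load.pid"
# stop side (pm/common@42-@50): TERM whatever the pid file points at
if [[ -e "$out/power/collect-cpu-load.pid" ]]; then
    kill -TERM "$(cat "$out/power/collect-cpu-load.pid")"
fi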
00:49:02.385 01:15:24 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:49:02.385 01:15:24 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:49:02.385 01:15:24 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:49:02.385 01:15:24 -- spdk/autopackage.sh@19 -- $ timing_finish 00:49:02.385 01:15:24 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:49:02.385 01:15:24 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:49:02.385 01:15:24 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:49:02.644 01:15:25 -- spdk/autopackage.sh@20 -- $ exit 0 00:49:02.644 01:15:25 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:49:02.644 01:15:25 -- pm/common@29 -- $ signal_monitor_resources TERM 00:49:02.644 01:15:25 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:49:02.644 01:15:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:49:02.644 01:15:25 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:49:02.644 01:15:25 -- pm/common@44 -- $ pid=177071 00:49:02.644 01:15:25 -- pm/common@50 -- $ kill -TERM 177071 00:49:02.644 01:15:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:49:02.644 01:15:25 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:49:02.644 01:15:25 -- pm/common@44 -- $ pid=177072 00:49:02.644 01:15:25 -- pm/common@50 -- $ kill -TERM 177072 00:49:02.644 + [[ -n 2152 ]] 00:49:02.644 + sudo kill 2152 00:49:02.654 [Pipeline] } 00:49:02.672 [Pipeline] // timeout 00:49:02.678 [Pipeline] } 00:49:02.694 [Pipeline] // stage 00:49:02.700 [Pipeline] } 00:49:02.716 [Pipeline] // catchError 00:49:02.726 [Pipeline] stage 00:49:02.729 [Pipeline] { (Stop VM) 00:49:02.742 [Pipeline] sh 00:49:03.028 + vagrant halt 00:49:06.322 ==> default: Halting domain... 00:49:16.306 [Pipeline] sh 00:49:16.585 + vagrant destroy -f 00:49:20.774 ==> default: Removing domain... 00:49:20.786 [Pipeline] sh 00:49:21.065 + mv output /var/jenkins/workspace/ubuntu22-vg-autotest_3/output 00:49:21.072 [Pipeline] } 00:49:21.083 [Pipeline] // stage 00:49:21.086 [Pipeline] } 00:49:21.095 [Pipeline] // dir 00:49:21.098 [Pipeline] } 00:49:21.111 [Pipeline] // wrap 00:49:21.115 [Pipeline] } 00:49:21.127 [Pipeline] // catchError 00:49:21.134 [Pipeline] stage 00:49:21.135 [Pipeline] { (Epilogue) 00:49:21.146 [Pipeline] sh 00:49:21.428 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:49:43.375 [Pipeline] catchError 00:49:43.377 [Pipeline] { 00:49:43.390 [Pipeline] sh 00:49:43.670 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:49:43.929 Artifacts sizes are good 00:49:43.937 [Pipeline] } 00:49:43.952 [Pipeline] // catchError 00:49:43.962 [Pipeline] archiveArtifacts 00:49:43.969 Archiving artifacts 00:49:44.461 [Pipeline] cleanWs 00:49:44.471 [WS-CLEANUP] Deleting project workspace... 00:49:44.471 [WS-CLEANUP] Deferred wipeout is used... 00:49:44.478 [WS-CLEANUP] done 00:49:44.479 [Pipeline] } 00:49:44.496 [Pipeline] // stage 00:49:44.501 [Pipeline] } 00:49:44.516 [Pipeline] // node 00:49:44.522 [Pipeline] End of Pipeline 00:49:44.555 Finished: SUCCESS